modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
---|---|---|---|---|---|---|---|---|---|
timm/hrnet_w18.ms_aug_in1k | timm | "2023-04-24T21:25:48Z" | 81,892 | 2 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:1908.07919",
"license:mit",
"region:us"
] | image-classification | "2023-04-24T21:25:16Z" | ---
tags:
- image-classification
- timm
library_name: timm
license: mit
datasets:
- imagenet-1k
---
# Model card for hrnet_w18.ms_aug_in1k
An HRNet image classification model. Trained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 21.3
- GMACs: 4.3
- Activations (M): 16.3
- Image size: 224 x 224
- **Papers:**
- Deep High-Resolution Representation Learning for Visual Recognition: https://arxiv.org/abs/1908.07919
- **Original:** https://github.com/HRNet/HRNet-Image-Classification
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('hrnet_w18.ms_aug_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'hrnet_w18.ms_aug_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 112, 112])
# torch.Size([1, 128, 56, 56])
# torch.Size([1, 256, 28, 28])
# torch.Size([1, 512, 14, 14])
# torch.Size([1, 1024, 7, 7])
print(o.shape)
```
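If only a subset of the feature maps is needed, `features_only` models also accept an `out_indices` argument. A brief sketch; the chosen indices are illustrative and assume the five default stages listed above:
```python
import timm
import torch

# keep only the last two feature stages (indices refer to the default five-stage output above)
model = timm.create_model(
    'hrnet_w18.ms_aug_in1k',
    pretrained=True,
    features_only=True,
    out_indices=(3, 4),
)
model = model.eval()

with torch.no_grad():
    feats = model(torch.randn(1, 3, 224, 224))  # dummy input, illustration only
for f in feats:
    print(f.shape)  # expect the 14 x 14 and 7 x 7 maps from the list above
```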
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'hrnet_w18.ms_aug_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2048, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{WangSCJDZLMTWLX19,
title={Deep High-Resolution Representation Learning for Visual Recognition},
author={Jingdong Wang and Ke Sun and Tianheng Cheng and
Borui Jiang and Chaorui Deng and Yang Zhao and Dong Liu and Yadong Mu and
Mingkui Tan and Xinggang Wang and Wenyu Liu and Bin Xiao},
  journal={TPAMI},
year={2019}
}
```
|
microsoft/resnet-18 | microsoft | "2024-04-08T11:06:50Z" | 81,851 | 46 | transformers | [
"transformers",
"pytorch",
"tf",
"safetensors",
"resnet",
"image-classification",
"vision",
"dataset:imagenet-1k",
"arxiv:1512.03385",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2022-03-16T15:40:26Z" | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# ResNet
ResNet model trained on imagenet-1k. It was introduced in the paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) and first released in [this repository](https://github.com/KaimingHe/deep-residual-networks).
Disclaimer: The team releasing ResNet did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
ResNet introduced residual connections, which make it possible to train networks with an unprecedented number of layers (up to 1,000). ResNet won the 2015 ILSVRC & COCO competitions, an important milestone in deep computer vision.
![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/resnet_architecture.png)
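The core idea can be sketched in a few lines of PyTorch; the block below is conceptual only and is not the exact block used in `microsoft/resnet-18`:
```python
import torch
from torch import nn

class BasicResidualBlock(nn.Module):
    """Conceptual residual block: the input is added back onto the transformed output."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        identity = x                      # the skip (residual) connection
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + identity)  # adding the identity eases optimization of very deep stacks

block = BasicResidualBlock(64)
print(block(torch.randn(1, 64, 56, 56)).shape)  # torch.Size([1, 64, 56, 56])
```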
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=resnet) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
>>> from transformers import AutoImageProcessor, AutoModelForImageClassification
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> image_processor = AutoImageProcessor.from_pretrained("microsoft/resnet-18")
>>> model = AutoModelForImageClassification.from_pretrained("microsoft/resnet-18")
>>> inputs = image_processor(image, return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> # model predicts one of the 1000 ImageNet classes
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
tiger cat
```
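To inspect more than the single best class, the same `logits` can be ranked. A small follow-up sketch that reuses the objects created above:
```python
>>> probabilities = logits.softmax(-1)[0]
>>> top5 = torch.topk(probabilities, k=5)
>>> for score, idx in zip(top5.values, top5.indices):
...     print(f"{model.config.id2label[idx.item()]}: {score.item():.3f}")
```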
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/resnet). |
SimianLuo/LCM_Dreamshaper_v7 | SimianLuo | "2024-03-05T08:32:22Z" | 81,645 | 383 | diffusers | [
"diffusers",
"onnx",
"safetensors",
"text-to-image",
"en",
"arxiv:2310.04378",
"license:mit",
"diffusers:LatentConsistencyModelPipeline",
"region:us"
] | text-to-image | "2023-10-14T08:26:52Z" | ---
license: mit
language:
- en
pipeline_tag: text-to-image
tags:
- text-to-image
---
# Latent Consistency Models
Official Repository of the paper: *[Latent Consistency Models](https://arxiv.org/abs/2310.04378)*.
Project Page: https://latent-consistency-models.github.io
## Try our Hugging Face demos:
[![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/SimianLuo/Latent_Consistency_Model)
## Model Descriptions:
LCM_Dreamshaper_v7 is distilled from the [Dreamshaper v7](https://huggingface.co/Lykon/dreamshaper-7) fine-tune of [Stable Diffusion v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) in only 4,000 training iterations (~32 A100 GPU hours).
## Generation Results:
<p align="center">
<img src="teaser.png">
</p>
By distilling classifier-free guidance into the model's input, LCM can generate high-quality images in a very short inference time. We compare inference times at 768 x 768 resolution, CFG scale w=8, batch size 4, on a single A800 GPU.
<p align="center">
<img src="speed_fid.png">
</p>
## Usage
You can try out Latent Consistency Models directly on:
[![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/SimianLuo/Latent_Consistency_Model)
To run the model yourself, you can leverage the 🧨 Diffusers library:
1. Install the library:
```
pip install --upgrade diffusers # make sure to use at least diffusers >= 0.22
pip install transformers accelerate
```
2. Run the model:
```py
from diffusers import DiffusionPipeline
import torch
pipe = DiffusionPipeline.from_pretrained("SimianLuo/LCM_Dreamshaper_v7")
# To save GPU memory, torch.float16 can be used, but it may compromise image quality.
pipe.to(torch_device="cuda", torch_dtype=torch.float32)
prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k"
# Can be set to 1-50 steps. LCM supports fast inference even with <= 4 steps. Recommended: 1-8 steps.
num_inference_steps = 4
images = pipe(prompt=prompt, num_inference_steps=num_inference_steps, guidance_scale=8.0, lcm_origin_steps=50, output_type="pil").images
```
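For reproducible outputs, a `torch.Generator` can be passed to the pipeline call, as with other 🧨 Diffusers pipelines. A brief sketch that reuses `pipe` and `prompt` from above (the seed is arbitrary):
```py
generator = torch.Generator(device="cuda").manual_seed(42)  # arbitrary seed for reproducibility
images = pipe(prompt=prompt, num_inference_steps=4, guidance_scale=8.0, lcm_origin_steps=50, generator=generator, output_type="pil").images
images[0].save("lcm_result.png")  # save the first generated image
```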
For more information, please have a look at the official docs:
👉 https://huggingface.co/docs/diffusers/api/pipelines/latent_consistency_models#latent-consistency-models
## Usage (Deprecated)
1. Install the library:
```
pip install diffusers transformers accelerate
```
2. Run the model:
```py
from diffusers import DiffusionPipeline
import torch
pipe = DiffusionPipeline.from_pretrained("SimianLuo/LCM_Dreamshaper_v7", custom_pipeline="latent_consistency_txt2img", custom_revision="main", revision="fb9c5d")
# To save GPU memory, torch.float16 can be used, but it may compromise image quality.
pipe.to(torch_device="cuda", torch_dtype=torch.float32)
prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k"
# Can be set to 1-50 steps. LCM supports fast inference even with <= 4 steps. Recommended: 1-8 steps.
num_inference_steps = 4
images = pipe(prompt=prompt, num_inference_steps=num_inference_steps, guidance_scale=8.0, output_type="pil").images
```
## BibTeX
```bibtex
@misc{luo2023latent,
title={Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference},
author={Simian Luo and Yiqin Tan and Longbo Huang and Jian Li and Hang Zhao},
year={2023},
eprint={2310.04378},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
``` |
timm/mnasnet_100.rmsp_in1k | timm | "2023-04-27T21:14:03Z" | 81,590 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:1807.11626",
"license:apache-2.0",
"region:us"
] | image-classification | "2022-12-13T00:00:04Z" | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for mnasnet_100.rmsp_in1k
An MnasNet image classification model. Trained on ImageNet-1k in `timm` using the recipe template described below.
Recipe details:
* A simple RMSProp-based recipe without RandAugment, using RandomErasing, mixup, dropout, and standard random-resize-crop augmentation.
* RMSProp (TF 1.0 behaviour) optimizer, EMA weight averaging
* Step (exponential decay w/ staircase) LR schedule with warmup
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 4.4
- GMACs: 0.3
- Activations (M): 5.5
- Image size: 224 x 224
- **Papers:**
  - MnasNet: Platform-Aware Neural Architecture Search for Mobile: https://arxiv.org/abs/1807.11626
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/huggingface/pytorch-image-models
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('mnasnet_100.rmsp_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'mnasnet_100.rmsp_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 16, 112, 112])
# torch.Size([1, 24, 56, 56])
# torch.Size([1, 40, 28, 28])
# torch.Size([1, 96, 14, 14])
# torch.Size([1, 320, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'mnasnet_100.rmsp_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1280, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
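These pooled embeddings can be compared directly, for example with cosine similarity. A minimal sketch that reuses `model` (with `num_classes=0`), `transforms`, and `img` from above; it compares the image with itself purely as a placeholder:
```python
import torch.nn.functional as F

emb1 = model(transforms(img).unsqueeze(0))  # (1, num_features)
emb2 = model(transforms(img).unsqueeze(0))
print(F.cosine_similarity(emb1, emb2).item())  # ~1.0 for identical inputs in eval mode
```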
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@inproceedings{tan2019mnasnet,
title={Mnasnet: Platform-aware neural architecture search for mobile},
author={Tan, Mingxing and Chen, Bo and Pang, Ruoming and Vasudevan, Vijay and Sandler, Mark and Howard, Andrew and Le, Quoc V},
booktitle={Proceedings of the IEEE/CVF conference on computer vision and pattern recognition},
pages={2820--2828},
year={2019}
}
```
|
timm/repvgg_a2.rvgg_in1k | timm | "2024-02-10T23:34:53Z" | 81,584 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2101.03697",
"license:mit",
"region:us"
] | image-classification | "2023-03-22T07:19:08Z" | ---
license: mit
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
---
# Model card for repvgg_a2.rvgg_in1k
A RepVGG image classification model. Trained on ImageNet-1k by paper authors.
This model architecture is implemented using `timm`'s flexible [BYOBNet (Bring-Your-Own-Blocks Network)](https://github.com/huggingface/pytorch-image-models/blob/main/timm/models/byobnet.py).
BYOBNet allows configuration of:
* block / stage layout
* stem layout
* output stride (dilation)
* activation and norm layers
* channel and spatial / self-attention layers
...and also includes `timm` features common to many other architectures, including:
* stochastic depth
* gradient checkpointing
* layer-wise LR decay
* per-stage feature extraction
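The gradient-checkpointing feature noted above can be toggled at runtime on `timm` models; a brief sketch (useful mainly when fine-tuning under tight GPU memory, and assumed to apply to this BYOBNet-based model):
```python
import timm
import torch

model = timm.create_model('repvgg_a2.rvgg_in1k', pretrained=True)
model.set_grad_checkpointing(True)  # trade extra compute for lower activation memory during training
model = model.train()

loss = model(torch.randn(2, 3, 224, 224)).sum()  # dummy batch and loss, illustration only
loss.backward()
```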
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 28.2
- GMACs: 5.7
- Activations (M): 6.3
- Image size: 224 x 224
- **Papers:**
- RepVGG: Making VGG-style ConvNets Great Again: https://arxiv.org/abs/2101.03697
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/DingXiaoH/RepVGG
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('repvgg_a2.rvgg_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'repvgg_a2.rvgg_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 112, 112])
# torch.Size([1, 96, 56, 56])
# torch.Size([1, 192, 28, 28])
# torch.Size([1, 384, 14, 14])
# torch.Size([1, 1408, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'repvgg_a2.rvgg_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1408, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@inproceedings{ding2021repvgg,
title={Repvgg: Making vgg-style convnets great again},
author={Ding, Xiaohan and Zhang, Xiangyu and Ma, Ningning and Han, Jungong and Ding, Guiguang and Sun, Jian},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={13733--13742},
year={2021}
}
```
|
EleutherAI/pythia-70m | EleutherAI | "2023-11-21T19:04:09Z" | 81,555 | 55 | gpt-neox | [
"gpt-neox",
"pytorch",
"safetensors",
"gpt_neox",
"causal-lm",
"pythia",
"en",
"dataset:EleutherAI/pile",
"arxiv:2304.01373",
"arxiv:2101.00027",
"arxiv:2201.07311",
"license:apache-2.0",
"region:us"
] | null | "2023-02-13T14:54:51Z" | ---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
license: apache-2.0
datasets:
- EleutherAI/pile
library_name: gpt-neox
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research [(see paper)](https://arxiv.org/pdf/2304.01373.pdf).
It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. We also provide 154 intermediate
checkpoints per model, hosted on Hugging Face as branches.
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
<details>
<summary style="font-weight:600">Details on previous early release and naming convention.</summary>
Previously, we released an early version of the Pythia suite to the public.
However, we decided to retrain the model suite to address a few hyperparameter
discrepancies. This model card <a href="#changelog">lists the changes</a>;
see appendix B in the Pythia paper for further discussion. We found no
difference in benchmark performance between the two Pythia versions.
The old models are
[still available](https://huggingface.co/models?other=pythia_v0), but we
suggest the retrained suite if you are just starting to use Pythia.<br>
**This is the current release.**
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
</details>
<br>
# Pythia-70M
## Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
[See paper](https://arxiv.org/pdf/2304.01373.pdf) for more evals and implementation
details.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [contact@eleuther.ai](mailto:contact@eleuther.ai).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 2M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 2M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 2M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
## Uses and Limitations
### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. We also provide
154 checkpoints per model: initial `step0`, 10 log-spaced checkpoints
`step{1,2,4...512}`, and 143 evenly-spaced checkpoints from `step1000` to
`step143000`. These checkpoints are hosted on Hugging Face as branches. Note
that branch `143000` corresponds exactly to the model checkpoint on the `main`
branch of each model.
You may also further fine-tune and adapt Pythia-70M for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-70M as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions. For example,
the model may generate harmful or offensive text. Please evaluate the risks
associated with your particular use case.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-70M has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-70M will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the model
need not produce the most “accurate” text. Never rely on Pythia-70M to produce factually accurate
output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-70M may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-70M.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
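A slightly expanded generation sketch for this model; the sampling settings below are illustrative, not a recommendation from the Pythia authors:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
import torch

model = GPTNeoXForCausalLM.from_pretrained("EleutherAI/pythia-70m")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-70m")

inputs = tokenizer("Hello, I am", return_tensors="pt")
with torch.no_grad():
    tokens = model.generate(
        **inputs,
        max_new_tokens=32,                    # illustrative value
        do_sample=True,                       # sample instead of greedy decoding
        temperature=0.8,                      # illustrative value
        pad_token_id=tokenizer.eos_token_id,  # silence the missing-pad-token warning
    )
print(tokenizer.decode(tokens[0]))
```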
## Training
### Training data
[The Pile](https://pile.eleuther.ai/) is a 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).<br>
The Pile was **not** deduplicated before being used to train Pythia-70M.
### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training,
from `step1000` to `step143000` (which is the same as `main`). In addition, we
also provide frequent early checkpoints: `step0` and `step{1,2,4...512}`.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for 143000 steps at a batch size
of 2M (2,097,152 tokens).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
## Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json/).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai_v1.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa_v1.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande_v1.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge—Easy Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_easy_v1.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq_v1.png" style="width:auto"/>
</details>
## Changelog
This section compares differences between previously released
[Pythia v0](https://huggingface.co/models?other=pythia_v0) and the current
models. See Appendix B of the Pythia paper for further discussion of these
changes and the motivation behind them. We found that retraining Pythia had no
impact on benchmark performance.
- All model sizes are now trained with uniform batch size of 2M tokens.
Previously, the models of size 160M, 410M, and 1.4B parameters were trained
with batch sizes of 4M tokens.
- We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,
128,256,512} in addition to every 1000 training steps.
- Flash Attention was used in the new retrained suite.
- We remedied a minor inconsistency that existed in the original suite: all
models of size 2.8B parameters or smaller had a learning rate (LR) schedule
which decayed to a minimum LR of 10% of the starting LR, but the 6.9B and
12B models all used an LR schedule which decayed to a minimum LR of 0. In
the redone training runs, we rectified this inconsistency: all models now were
trained with LR decaying to a minimum of 0.1× their maximum LR.
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_EleutherAI__pythia-70m)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 25.28 |
| ARC (25-shot) | 21.59 |
| HellaSwag (10-shot) | 27.29 |
| MMLU (5-shot) | 25.9 |
| TruthfulQA (0-shot) | 47.06 |
| Winogrande (5-shot) | 51.46 |
| GSM8K (5-shot) | 0.3 |
| DROP (3-shot) | 3.33 | |
timm/dm_nfnet_f0.dm_in1k | timm | "2024-02-10T23:35:53Z" | 81,356 | 1 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2102.06171",
"arxiv:2101.08692",
"license:apache-2.0",
"region:us"
] | image-classification | "2023-03-24T00:46:50Z" | ---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
---
# Model card for dm_nfnet_f0.dm_in1k
An NFNet (Normalization-Free Network) image classification model. Trained on ImageNet-1k by paper authors.
Normalization-Free Networks are (pre-activation) ResNet-like models without any normalization layers. Instead of Batch Normalization or alternatives, they use Scaled Weight Standardization and specifically placed scalar gains in the residual path and at non-linearities, based on signal propagation analysis.
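A conceptual sketch of Scaled Weight Standardization follows; it is illustrative only and is not `timm`'s actual implementation:
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaledStdConv2dSketch(nn.Conv2d):
    """Conv whose weights are standardized per output channel, then rescaled (conceptual sketch only)."""
    def __init__(self, *args, gamma=1.0, eps=1e-6, **kwargs):
        super().__init__(*args, **kwargs)
        self.gain = nn.Parameter(torch.ones(self.out_channels, 1, 1, 1))  # learnable per-channel gain
        self.scale = gamma * self.weight[0].numel() ** -0.5               # gamma / sqrt(fan-in)
        self.eps = eps

    def forward(self, x):
        w = self.weight
        mean = w.mean(dim=(1, 2, 3), keepdim=True)
        std = w.std(dim=(1, 2, 3), keepdim=True)
        w = self.gain * self.scale * (w - mean) / (std + self.eps)  # standardize, then rescale
        return F.conv2d(x, w, self.bias, self.stride, self.padding, self.dilation, self.groups)

layer = ScaledStdConv2dSketch(16, 32, kernel_size=3, padding=1)
print(layer(torch.randn(1, 16, 8, 8)).shape)  # torch.Size([1, 32, 8, 8])
```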
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 71.5
- GMACs: 7.2
- Activations (M): 10.2
- Image size: train = 192 x 192, test = 256 x 256
- **Papers:**
- High-Performance Large-Scale Image Recognition Without Normalization: https://arxiv.org/abs/2102.06171
- Characterizing signal propagation to close the performance gap in unnormalized ResNets: https://arxiv.org/abs/2101.08692
- **Original:** https://github.com/deepmind/deepmind-research/tree/master/nfnets
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('dm_nfnet_f0.dm_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'dm_nfnet_f0.dm_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 96, 96])
# torch.Size([1, 256, 48, 48])
# torch.Size([1, 512, 24, 24])
# torch.Size([1, 1536, 12, 12])
# torch.Size([1, 3072, 6, 6])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'dm_nfnet_f0.dm_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 3072, 6, 6) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{brock2021high,
author={Andrew Brock and Soham De and Samuel L. Smith and Karen Simonyan},
title={High-Performance Large-Scale Image Recognition Without Normalization},
journal={arXiv preprint arXiv:2102.06171},
year={2021}
}
```
```bibtex
@inproceedings{brock2021characterizing,
author={Andrew Brock and Soham De and Samuel L. Smith},
title={Characterizing signal propagation to close the performance gap in
unnormalized ResNets},
booktitle={9th International Conference on Learning Representations, {ICLR}},
year={2021}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
dataprizma/whisper-large-v3-turbo | dataprizma | "2024-10-20T16:11:15Z" | 81,116 | 1 | null | [
"tensorboard",
"safetensors",
"whisper",
"generated_from_trainer",
"uz",
"dataset:mozilla-foundation/common_voice_16_1",
"base_model:openai/whisper-large-v3-turbo",
"base_model:finetune:openai/whisper-large-v3-turbo",
"license:mit",
"model-index",
"region:us"
] | null | "2024-10-10T07:39:12Z" | ---
language:
- uz
license: mit
base_model: openai/whisper-large-v3-turbo
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_16_1
metrics:
- wer
model-index:
- name: Whisper Large v3 Turbo - Bahriddin Muminov
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 16.1
type: mozilla-foundation/common_voice_16_1
config: uz
split: test
args: 'config: uz, split: test'
metrics:
- name: Wer
type: wer
value: 28.258182136033867
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Large v3 Turbo - Bahriddin Muminov
This model is a fine-tuned version of [openai/whisper-large-v3-turbo](https://huggingface.co/openai/whisper-large-v3-turbo) on the Common Voice 16.1 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2958
- Wer: 28.2582
## Model description
More information needed
## Intended uses & limitations
More information needed
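A minimal transcription sketch, assuming the standard 🤗 Transformers ASR pipeline works with this fine-tune as it does with the base Whisper checkpoints (the audio path is a placeholder):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="dataprizma/whisper-large-v3-turbo",
)
result = asr("sample_uzbek_audio.wav")  # placeholder path to a local audio file
print(result["text"])
```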
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 10000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.429 | 0.66 | 2000 | 0.4073 | 38.0018 |
| 0.2671 | 1.32 | 4000 | 0.3378 | 31.0778 |
| 0.2511 | 1.98 | 6000 | 0.3102 | 29.2484 |
| 0.1539 | 2.64 | 8000 | 0.3022 | 30.0763 |
| 0.111 | 3.3 | 10000 | 0.2958 | 28.2582 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
stabilityai/stable-diffusion-2-base | stabilityai | "2023-07-05T16:19:03Z" | 80,990 | 341 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"text-to-image",
"arxiv:2112.10752",
"arxiv:2202.00512",
"arxiv:1910.09700",
"license:openrail++",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2022-11-23T17:41:31Z" | ---
license: openrail++
tags:
- stable-diffusion
- text-to-image
---
# Stable Diffusion v2-base Model Card
This model card focuses on the model associated with the Stable Diffusion v2-base model, available [here](https://github.com/Stability-AI/stablediffusion).
The model is trained from scratch 550k steps at resolution `256x256` on a subset of [LAION-5B](https://laion.ai/blog/laion-5b/) filtered for explicit pornographic material, using the [LAION-NSFW classifier](https://github.com/LAION-AI/CLIP-based-NSFW-Detector) with `punsafe=0.1` and an [aesthetic score](https://github.com/christophschuhmann/improved-aesthetic-predictor) >= `4.5`. Then it is further trained for 850k steps at resolution `512x512` on the same dataset on images with resolution `>= 512x512`.
![image](https://github.com/Stability-AI/stablediffusion/blob/main/assets/stable-samples/txt2img/merged-0003.png?raw=true)
- Use it with the [`stablediffusion`](https://github.com/Stability-AI/stablediffusion) repository: download the `512-base-ema.ckpt` [here](https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/main/512-base-ema.ckpt).
- Use it with 🧨 [`diffusers`](https://huggingface.co/stabilityai/stable-diffusion-2-base#examples)
## Model Details
- **Developed by:** Robin Rombach, Patrick Esser
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** [CreativeML Open RAIL++-M License](https://huggingface.co/stabilityai/stable-diffusion-2/blob/main/LICENSE-MODEL)
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([OpenCLIP-ViT/H](https://github.com/mlfoundations/open_clip)).
- **Resources for more information:** [GitHub Repository](https://github.com/Stability-AI/).
- **Cite as:**
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
## Examples
Use [🤗's Diffusers library](https://github.com/huggingface/diffusers) to run Stable Diffusion 2 in a simple and efficient manner.
```bash
pip install diffusers transformers accelerate scipy safetensors
```
Run the pipeline (if you don't swap the scheduler, it will run with the default PNDM/PLMS scheduler; in this example we swap it to the EulerDiscreteScheduler):
```python
from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler
import torch
model_id = "stabilityai/stable-diffusion-2-base"
# Use the Euler scheduler here instead
scheduler = EulerDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler")
pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
image.save("astronaut_rides_horse.png")
```
**Notes**:
- Despite not being a dependency, we highly recommend installing [xformers](https://github.com/facebookresearch/xformers) for memory-efficient attention (better performance)
- If you have low GPU RAM available, make sure to add `pipe.enable_attention_slicing()` after sending the pipeline to `cuda`, for less VRAM usage (at the cost of speed)
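A brief sketch combining these memory-saving options (the xformers call is optional and only succeeds if xformers is installed):
```python
import torch
from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler

model_id = "stabilityai/stable-diffusion-2-base"
scheduler = EulerDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler")
pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler, torch_dtype=torch.float16)
pipe = pipe.to("cuda")

pipe.enable_attention_slicing()  # lower VRAM usage, at some cost in speed

try:
    pipe.enable_xformers_memory_efficient_attention()  # optional memory-efficient attention
except Exception:
    pass  # xformers not installed; continue with the default attention

image = pipe("a photo of an astronaut riding a horse on mars").images[0]
image.save("astronaut_rides_horse.png")
```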
# Uses
## Direct Use
The model is intended for research purposes only. Possible research areas and tasks include
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.
Excluded uses are described below.
### Misuse, Malicious Use, and Out-of-Scope Use
_Note: This section is originally taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), was used for Stable Diffusion v1, but applies in the same way to Stable Diffusion v2_.
The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
#### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
#### Misuse and Malicious Use
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
- Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Impersonating individuals without their consent.
- Sexual content without consent of the people who might see it.
- Mis- and disinformation
- Representations of egregious violence and gore
- Sharing of copyrighted or licensed material in violation of its terms of use.
- Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.
## Limitations and Bias
### Limitations
- The model does not achieve perfect photorealism
- The model cannot render legible text
- The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”
- Faces and people in general may not be generated properly.
- The model was trained mainly with English captions and will not work as well in other languages.
- The autoencoding part of the model is lossy
- The model was trained on a subset of the large-scale dataset
[LAION-5B](https://laion.ai/blog/laion-5b/), which contains adult, violent and sexual content. To partially mitigate this, we have filtered the dataset using LAION's NSFW detector (see Training section).
### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
Stable Diffusion v2 was primarily trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/),
which consists of images that are limited to English descriptions.
Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for.
This affects the overall output of the model, as white and western cultures are often set as the default. Further, the
ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts.
Stable Diffusion v2 mirrors and exacerbates biases to such a degree that viewer discretion must be advised irrespective of the input or its intent.
## Training
**Training Data**
The model developers used the following dataset for training the model:
- LAION-5B and subsets (details below). The training data is further filtered using LAION's NSFW detector, with a "p_unsafe" score of 0.1 (conservative). For more details, please refer to LAION-5B's [NeurIPS 2022](https://openreview.net/forum?id=M3Y74vmsMcY) paper and reviewer discussions on the topic.
**Training Procedure**
Stable Diffusion v2 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training,
- Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4
- Text prompts are encoded through the OpenCLIP-ViT/H text-encoder.
- The output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention.
- The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet. We also use the so-called _v-objective_, see https://arxiv.org/abs/2202.00512.
We currently provide the following checkpoints:
- `512-base-ema.ckpt`: 550k steps at resolution `256x256` on a subset of [LAION-5B](https://laion.ai/blog/laion-5b/) filtered for explicit pornographic material, using the [LAION-NSFW classifier](https://github.com/LAION-AI/CLIP-based-NSFW-Detector) with `punsafe=0.1` and an [aesthetic score](https://github.com/christophschuhmann/improved-aesthetic-predictor) >= `4.5`.
850k steps at resolution `512x512` on the same dataset with resolution `>= 512x512`.
- `768-v-ema.ckpt`: Resumed from `512-base-ema.ckpt` and trained for 150k steps using a [v-objective](https://arxiv.org/abs/2202.00512) on the same dataset. Resumed for another 140k steps on a `768x768` subset of our dataset.
- `512-depth-ema.ckpt`: Resumed from `512-base-ema.ckpt` and finetuned for 200k steps. Added an extra input channel to process the (relative) depth prediction produced by [MiDaS](https://github.com/isl-org/MiDaS) (`dpt_hybrid`) which is used as an additional conditioning.
The additional input channels of the U-Net which process this extra information were zero-initialized.
- `512-inpainting-ema.ckpt`: Resumed from `512-base-ema.ckpt` and trained for another 200k steps. Follows the mask-generation strategy presented in [LAMA](https://github.com/saic-mdal/lama) which, in combination with the latent VAE representations of the masked image, are used as an additional conditioning.
The additional input channels of the U-Net which process this extra information were zero-initialized. The same strategy was used to train the [1.5-inpainting checkpoint](https://github.com/saic-mdal/lama).
- `x4-upscaling-ema.ckpt`: Trained for 1.25M steps on a 10M subset of LAION containing images `>2048x2048`. The model was trained on crops of size `512x512` and is a text-guided [latent upscaling diffusion model](https://arxiv.org/abs/2112.10752).
In addition to the textual input, it receives a `noise_level` as an input parameter, which can be used to add noise to the low-resolution input according to a [predefined diffusion schedule](configs/stable-diffusion/x4-upscaling.yaml).
- **Hardware:** 32 x 8 x A100 GPUs
- **Optimizer:** AdamW
- **Gradient Accumulations**: 1
- **Batch:** 32 x 8 x 2 x 4 = 2048
- **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant
## Evaluation Results
Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0,
5.0, 6.0, 7.0, 8.0) and 50 steps DDIM sampling steps show the relative improvements of the checkpoints:
![pareto](model-variants.jpg)
Evaluated using 50 DDIM steps and 10000 random prompts from the COCO2017 validation set, evaluated at 512x512 resolution. Not optimized for FID scores.
## Environmental Impact
**Stable Diffusion v1** **Estimated Emissions**
Based on that information, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.
- **Hardware Type:** A100 PCIe 40GB
- **Hours used:** 200000
- **Cloud Provider:** AWS
- **Compute Region:** US-east
- **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 15000 kg CO2 eq.
## Citation
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
*This model card was written by: Robin Rombach, Patrick Esser and David Ha and is based on the [Stable Diffusion v1](https://github.com/CompVis/stable-diffusion/blob/main/Stable_Diffusion_v1_Model_Card.md) and [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).* |
timm/deit_base_distilled_patch16_224.fb_in1k | timm | "2024-02-10T23:37:15Z" | 80,831 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2012.12877",
"license:apache-2.0",
"region:us"
] | image-classification | "2023-03-28T01:27:56Z" | ---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
---
# Model card for deit_base_distilled_patch16_224.fb_in1k
A DeiT image classification model. Trained on ImageNet-1k using distillation tokens by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 87.3
- GMACs: 17.7
- Activations (M): 24.0
- Image size: 224 x 224
- **Papers:**
- Training data-efficient image transformers & distillation through attention: https://arxiv.org/abs/2012.12877
- **Original:** https://github.com/facebookresearch/deit
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('deit_base_distilled_patch16_224.fb_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'deit_base_distilled_patch16_224.fb_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 198, 768) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
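The unpooled `(1, 198, 768)` tensor holds two prefix tokens followed by 196 patch tokens. A brief sketch of separating them, assuming the first two positions are the class and distillation tokens as in `timm`'s distilled DeiT models; it reuses `model`, `transforms`, and `img` from above:
```python
tokens = model.forward_features(transforms(img).unsqueeze(0))  # (1, 198, 768)
cls_token = tokens[:, 0]       # class token embedding, (1, 768)
dist_token = tokens[:, 1]      # distillation token embedding, (1, 768)
patch_tokens = tokens[:, 2:]   # 196 patch tokens, (1, 196, 768)
print(cls_token.shape, dist_token.shape, patch_tokens.shape)
```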
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@InProceedings{pmlr-v139-touvron21a,
title = {Training data-efficient image transformers & distillation through attention},
author = {Touvron, Hugo and Cord, Matthieu and Douze, Matthijs and Massa, Francisco and Sablayrolles, Alexandre and Jegou, Herve},
booktitle = {International Conference on Machine Learning},
pages = {10347--10357},
year = {2021},
volume = {139},
month = {July}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
THUDM/cogvlm2-llama3-chat-19B | THUDM | "2024-09-03T16:38:05Z" | 80,638 | 201 | transformers | [
"transformers",
"safetensors",
"text-generation",
"chat",
"cogvlm2",
"conversational",
"custom_code",
"en",
"arxiv:2408.16500",
"arxiv:2311.03079",
"license:other",
"autotrain_compatible",
"region:us"
] | text-generation | "2024-05-16T11:51:07Z" | ---
license: other
license_name: cogvlm2
license_link: https://huggingface.co/THUDM/cogvlm2-llama3-chat-19B/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- chat
- cogvlm2
inference: false
---
# CogVLM2
<div align="center">
<img src=https://raw.githubusercontent.com/THUDM/CogVLM2/53d5d5ea1aa8d535edffc0d15e31685bac40f878/resources/logo.svg width="40%"/>
</div>
<p align="center">
👋 <a href="resources/WECHAT.md" target="_blank">Wechat</a> · 💡<a href="http://36.103.203.44:7861/" target="_blank">Online Demo</a> · 🎈<a href="https://github.com/THUDM/CogVLM2" target="_blank">Github Page</a> · 📑 <a href="https://arxiv.org/pdf/2408.16500" target="_blank">Paper</a>
</p>
<p align="center">
📍Experience the larger-scale CogVLM model on the <a href="https://open.bigmodel.cn/dev/api#glm-4v">ZhipuAI Open Platform</a>.
</p>
## Model introduction
We launch a new generation of the **CogVLM2** series of models and open-source two models built on [Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct). Compared with the previous generation of CogVLM open-source models, the CogVLM2 series has the following improvements:
1. Significant improvements in many benchmarks such as `TextVQA` and `DocVQA`.
2. Support for **8K** content length.
3. Support for image resolution up to **1344 * 1344**.
4. An open-source model version that supports both **Chinese and English**.
You can see the details of the CogVLM2 family of open source models in the table below:
| Model name | cogvlm2-llama3-chat-19B | cogvlm2-llama3-chinese-chat-19B |
|------------------|-------------------------------------|-------------------------------------|
| Base Model | Meta-Llama-3-8B-Instruct | Meta-Llama-3-8B-Instruct |
| Language | English | Chinese, English |
| Model size | 19B | 19B |
| Task | Image understanding, dialogue model | Image understanding, dialogue model |
| Text length | 8K | 8K |
| Image resolution | 1344 * 1344 | 1344 * 1344 |
## Benchmark
Our open-source models achieve strong results on many benchmarks compared to the previous generation of CogVLM open-source models, and their performance is competitive with some non-open-source models, as shown in the table below:
| Model | Open Source | LLM Size | TextVQA | DocVQA | ChartQA | OCRbench | VCR_EASY | VCR_HARD | MMMU | MMVet | MMBench |
|----------------------------|-------------|----------|----------|----------|----------|----------|-------------|-------------|----------|----------|----------|
| CogVLM1.1 | ✅ | 7B | 69.7 | - | 68.3 | 590 | 73.9 | 34.6 | 37.3 | 52.0 | 65.8 |
| LLaVA-1.5 | ✅ | 13B | 61.3 | - | - | 337 | - | - | 37.0 | 35.4 | 67.7 |
| Mini-Gemini | ✅ | 34B | 74.1 | - | - | - | - | - | 48.0 | 59.3 | 80.6 |
| LLaVA-NeXT-LLaMA3 | ✅ | 8B | - | 78.2 | 69.5 | - | - | - | 41.7 | - | 72.1 |
| LLaVA-NeXT-110B | ✅ | 110B | - | 85.7 | 79.7 | - | - | - | 49.1 | - | 80.5 |
| InternVL-1.5 | ✅ | 20B | 80.6 | 90.9 | **83.8** | 720 | 14.7 | 2.0 | 46.8 | 55.4 | **82.3** |
| QwenVL-Plus | ❌ | - | 78.9 | 91.4 | 78.1 | 726 | - | - | 51.4 | 55.7 | 67.0 |
| Claude3-Opus | ❌ | - | - | 89.3 | 80.8 | 694 | 63.85 | 37.8 | **59.4** | 51.7 | 63.3 |
| Gemini Pro 1.5 | ❌ | - | 73.5 | 86.5 | 81.3 | - | 62.73 | 28.1 | 58.5 | - | - |
| GPT-4V | ❌ | - | 78.0 | 88.4 | 78.5 | 656 | 52.04 | 25.8 | 56.8 | **67.7** | 75.0 |
| **CogVLM2-LLaMA3** | ✅ | 8B | 84.2 | **92.3** | 81.0 | 756 | **83.3** | **38.0** | 44.3 | 60.4 | 80.5 |
| **CogVLM2-LLaMA3-Chinese** | ✅ | 8B | **85.0** | 88.4 | 74.7 | **780** | 79.9 | 25.1 | 42.8 | 60.5 | 78.9 |
All results were obtained without using any external OCR tools ("pixel only").
## Quick Start
Here is a simple example of how to chat with the CogVLM2 model. For more use cases, see our [GitHub](https://github.com/THUDM/CogVLM2).
```python
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoTokenizer
MODEL_PATH = "THUDM/cogvlm2-llama3-chat-19B"
DEVICE = 'cuda' if torch.cuda.is_available() else 'cpu'
TORCH_TYPE = torch.bfloat16 if torch.cuda.is_available() and torch.cuda.get_device_capability()[0] >= 8 else torch.float16
tokenizer = AutoTokenizer.from_pretrained(
MODEL_PATH,
trust_remote_code=True
)
model = AutoModelForCausalLM.from_pretrained(
MODEL_PATH,
torch_dtype=TORCH_TYPE,
trust_remote_code=True,
).to(DEVICE).eval()
text_only_template = "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {} ASSISTANT:"
while True:
image_path = input("image path >>>>> ")
if image_path == '':
print('You did not enter image path, the following will be a plain text conversation.')
image = None
text_only_first_query = True
else:
image = Image.open(image_path).convert('RGB')
history = []
while True:
query = input("Human:")
if query == "clear":
break
if image is None:
if text_only_first_query:
query = text_only_template.format(query)
text_only_first_query = False
else:
old_prompt = ''
for _, (old_query, response) in enumerate(history):
old_prompt += old_query + " " + response + "\n"
query = old_prompt + "USER: {} ASSISTANT:".format(query)
if image is None:
input_by_model = model.build_conversation_input_ids(
tokenizer,
query=query,
history=history,
template_version='chat'
)
else:
input_by_model = model.build_conversation_input_ids(
tokenizer,
query=query,
history=history,
images=[image],
template_version='chat'
)
inputs = {
'input_ids': input_by_model['input_ids'].unsqueeze(0).to(DEVICE),
'token_type_ids': input_by_model['token_type_ids'].unsqueeze(0).to(DEVICE),
'attention_mask': input_by_model['attention_mask'].unsqueeze(0).to(DEVICE),
'images': [[input_by_model['images'][0].to(DEVICE).to(TORCH_TYPE)]] if image is not None else None,
}
gen_kwargs = {
"max_new_tokens": 2048,
"pad_token_id": 128002,
}
with torch.no_grad():
outputs = model.generate(**inputs, **gen_kwargs)
outputs = outputs[:, inputs['input_ids'].shape[1]:]
response = tokenizer.decode(outputs[0])
response = response.split("<|end_of_text|>")[0]
print("\nCogVLM2:", response)
history.append((query, response))
```
## License
This model is released under the CogVLM2 [LICENSE](LICENSE). For models built with Meta Llama 3, please also adhere to the [LLAMA3_LICENSE](LLAMA3_LICENSE).
## Citation
If you find our work helpful, please consider citing the following papers
```
@misc{hong2024cogvlm2,
title={CogVLM2: Visual Language Models for Image and Video Understanding},
author={Hong, Wenyi and Wang, Weihan and Ding, Ming and Yu, Wenmeng and Lv, Qingsong and Wang, Yan and Cheng, Yean and Huang, Shiyu and Ji, Junhui and Xue, Zhao and others},
      year={2024},
eprint={2408.16500},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
@misc{wang2023cogvlm,
title={CogVLM: Visual Expert for Pretrained Language Models},
author={Weihan Wang and Qingsong Lv and Wenmeng Yu and Wenyi Hong and Ji Qi and Yan Wang and Junhui Ji and Zhuoyi Yang and Lei Zhao and Xixuan Song and Jiazheng Xu and Bin Xu and Juanzi Li and Yuxiao Dong and Ming Ding and Jie Tang},
year={2023},
eprint={2311.03079},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
|
EleutherAI/pythia-12b | EleutherAI | "2024-07-09T15:50:54Z" | 80,547 | 131 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt_neox",
"text-generation",
"causal-lm",
"pythia",
"en",
"dataset:EleutherAI/pile",
"arxiv:2304.01373",
"arxiv:2101.00027",
"arxiv:2201.07311",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-02-28T18:48:12Z" | ---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
license: apache-2.0
datasets:
- EleutherAI/pile
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research [(see paper)](https://arxiv.org/pdf/2304.01373.pdf).
It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. We also provide 154 intermediate
checkpoints per model, hosted on Hugging Face as branches.
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
<details>
<summary style="font-weight: 600">Past early release and naming convention.</summary>
Previously, we released an early version of the Pythia suite to the public.
However, we decided to retrain the model suite to address a few hyperparameter
discrepancies. This model card <a href="#changelog">lists the changes</a>;
see appendix B in the Pythia paper for further discussion. We found no
difference in benchmark performance between the two Pythia versions.
The old models are
[still available](https://huggingface.co/models?other=pythia_v0), but we
suggest the retrained suite if you are just starting to use Pythia.<br>
**This is the current release.**
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
</details>
<br>
# Pythia-12B
## Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
[See paper](https://arxiv.org/pdf/2304.01373.pdf) for more evals and implementation
details.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [contact@eleuther.ai](mailto:contact@eleuther.ai).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 2M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 2M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 2M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
## Uses and Limitations
### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. We also provide
154 checkpoints per model: initial `step0`, 10 log-spaced checkpoints
`step{1,2,4...512}`, and 143 evenly-spaced checkpoints from `step1000` to
`step143000`. These checkpoints are hosted on Hugging Face as branches. Note
that branch `143000` corresponds exactly to the model checkpoint on the `main`
branch of each model.
You may also further fine-tune and adapt Pythia-12B for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-12B as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
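As a minimal sketch of that workflow (the dtype and prompt below are illustrative assumptions, and the quickstart later in this card shows the same pattern for a smaller checkpoint):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load Pythia-12B as a base model; the dtype choice is an assumption, not a recommendation.
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-12b")
model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/pythia-12b",
    torch_dtype=torch.float16,
)
inputs = tokenizer("The Pythia suite was designed to", return_tensors="pt")
tokens = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(tokens[0]))
```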
### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions. For example,
the model may generate harmful or offensive text. Please evaluate the risks
associated with your particular use case.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-12B has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-12B will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed most likely by the model need not
produce the most “accurate” text. Never rely on Pythia-12B to produce factually accurate
output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-12B may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting them to other people. Please inform your audience that the
text was generated by Pythia-12B.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
## Training
### Training data
[The Pile](https://pile.eleuther.ai/) is a 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).<br>
The Pile was **not** deduplicated before being used to train Pythia-12B.
### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training,
from `step1000` to `step143000` (which is the same as `main`). In addition, we
also provide frequent early checkpoints: `step0` and `step{1,2,4...512}`.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for 143000 steps at a batch size
of 2M (2,097,152 tokens).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
## Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json/).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai_v1.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa_v1.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande_v1.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge—Easy Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_easy_v1.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq_v1.png" style="width:auto"/>
</details>
## Changelog
This section compares differences between previously released
[Pythia v0](https://huggingface.co/models?other=pythia_v0) and the current
models. See Appendix B of the Pythia paper for further discussion of these
changes and the motivation behind them. We found that retraining Pythia had no
impact on benchmark performance.
- All model sizes are now trained with uniform batch size of 2M tokens.
Previously, the models of size 160M, 410M, and 1.4B parameters were trained
with batch sizes of 4M tokens.
- We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,
128,256,512} in addition to every 1000 training steps.
- Flash Attention was used in the new retrained suite.
- We remedied a minor inconsistency that existed in the original suite: all
models of size 2.8B parameters or smaller had a learning rate (LR) schedule
which decayed to a minimum LR of 10% of the starting LR, but the 6.9B and
12B models all used an LR schedule which decayed to a minimum LR of 0. In
the redone training runs, we rectified this inconsistency: all models now were
trained with LR decaying to a minimum of 0.1× their maximum LR.
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure> |
timm/cspdarknet53.ra_in1k | timm | "2024-02-10T23:42:40Z" | 80,460 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"arxiv:2110.00476",
"arxiv:1911.11929",
"arxiv:1804.02767",
"license:apache-2.0",
"region:us"
] | image-classification | "2023-04-12T20:39:07Z" | ---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
---
# Model card for cspdarknet53.ra_in1k
A CSP-DarkNet (Cross-Stage-Partial) image classification model. Trained on ImageNet-1k in `timm` using recipe template described below.
Recipe details:
* RandAugment `RA` recipe. Inspired by and evolved from EfficientNet RandAugment recipes. Published as `B` recipe in [ResNet Strikes Back](https://arxiv.org/abs/2110.00476).
* RMSProp (TF 1.0 behaviour) optimizer, EMA weight averaging
* Step (exponential decay w/ staircase) LR schedule with warmup
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 27.6
- GMACs: 6.6
- Activations (M): 16.8
- Image size: 256 x 256
- **Papers:**
- CSPNet: A New Backbone that can Enhance Learning Capability of CNN: https://arxiv.org/abs/1911.11929
- YOLOv3: An Incremental Improvement: https://arxiv.org/abs/1804.02767
- ResNet strikes back: An improved training procedure in timm: https://arxiv.org/abs/2110.00476
- **Original:** https://github.com/huggingface/pytorch-image-models
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('cspdarknet53.ra_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'cspdarknet53.ra_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 32, 256, 256])
# torch.Size([1, 64, 128, 128])
# torch.Size([1, 128, 64, 64])
# torch.Size([1, 256, 32, 32])
# torch.Size([1, 512, 16, 16])
# torch.Size([1, 1024, 8, 8])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'cspdarknet53.ra_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1024, 8, 8) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{Wang2019CSPNetAN,
title={CSPNet: A New Backbone that can Enhance Learning Capability of CNN},
author={Chien-Yao Wang and Hong-Yuan Mark Liao and I-Hau Yeh and Yueh-Hua Wu and Ping-Yang Chen and Jun-Wei Hsieh},
journal={2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)},
year={2019},
pages={1571-1580}
}
```
```bibtex
@article{Redmon2018YOLOv3AI,
title={YOLOv3: An Incremental Improvement},
author={Joseph Redmon and Ali Farhadi},
journal={ArXiv},
year={2018},
volume={abs/1804.02767}
}
```
```bibtex
@inproceedings{wightman2021resnet,
title={ResNet strikes back: An improved training procedure in timm},
author={Wightman, Ross and Touvron, Hugo and Jegou, Herve},
booktitle={NeurIPS 2021 Workshop on ImageNet: Past, Present, and Future}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
Yntec/epiCPhotoGasm | Yntec | "2024-04-18T01:39:56Z" | 80,436 | 44 | diffusers | [
"diffusers",
"safetensors",
"Photorealistic",
"Realism",
"Girls",
"epinikion",
"text-to-image",
"stable-diffusion",
"stable-diffusion-diffusers",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-10-01T17:51:17Z" | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- Photorealistic
- Realism
- Girls
- epinikion
- text-to-image
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
---
Original page: https://civitai.com/models/132632?modelVersionId=145885
UPDATE: Now with the 840KVAE baked in!
If you like this model, you will love this one!: https://huggingface.co/Yntec/DreamPhotoGASM
Samples and prompt:
![Sample](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/oNSlSlgKRFNDQBzsqbqJD.png)
![Sample](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/srMnx1nftgelbTTB04a9S.png)
(hyperrealist painting of a girl as genie with a sun on each shoulder ), 1940, magazine ad, iconic. by Daniel F. Gerhartz and greg rutkowski, aggressive color palette, elegant, dream, fantasy, dynamic lighting, beautiful, poster, wlop, trending on artstation, wallpaper, 4 k, award winning, digital art, very |
timm/pit_b_224.in1k | timm | "2023-04-26T00:06:43Z" | 80,298 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2103.16302",
"license:apache-2.0",
"region:us"
] | image-classification | "2023-04-26T00:05:41Z" | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for pit_b_224.in1k
A PiT (Pooling based Vision Transformer) image classification model. Trained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 73.8
- GMACs: 12.4
- Activations (M): 32.9
- Image size: 224 x 224
- **Papers:**
- Rethinking Spatial Dimensions of Vision Transformers: https://arxiv.org/abs/2103.16302
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/naver-ai/pit
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('pit_b_224.in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'pit_b_224.in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 256, 31, 31])
# torch.Size([1, 512, 16, 16])
# torch.Size([1, 1024, 8, 8])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'pit_b_224.in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1, 1024) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@inproceedings{heo2021pit,
title={Rethinking Spatial Dimensions of Vision Transformers},
author={Byeongho Heo and Sangdoo Yun and Dongyoon Han and Sanghyuk Chun and Junsuk Choe and Seong Joon Oh},
booktitle = {International Conference on Computer Vision (ICCV)},
year={2021},
}
```
|
timm/ViT-SO400M-14-SigLIP-384 | timm | "2023-10-27T16:10:34Z" | 80,201 | 70 | open_clip | [
"open_clip",
"safetensors",
"clip",
"siglip",
"zero-shot-image-classification",
"dataset:webli",
"arxiv:2303.15343",
"license:apache-2.0",
"region:us"
] | zero-shot-image-classification | "2023-10-16T23:56:46Z" | ---
tags:
- clip
- siglip
library_name: open_clip
pipeline_tag: zero-shot-image-classification
license: apache-2.0
datasets:
- webli
---
# Model card for ViT-SO400M-14-SigLIP-384
A SigLIP (Sigmoid loss for Language-Image Pre-training) model trained on WebLI.
This model has been converted to PyTorch from the original JAX checkpoints in [Big Vision](https://github.com/google-research/big_vision). These weights are usable in both OpenCLIP (image + text) and timm (image only).
## Model Details
- **Model Type:** Contrastive Image-Text, Zero-Shot Image Classification.
- **Original:** https://github.com/google-research/big_vision
- **Dataset:** WebLI
- **Papers:**
- Sigmoid loss for language image pre-training: https://arxiv.org/abs/2303.15343
## Model Usage
### With OpenCLIP
```python
import torch
import torch.nn.functional as F
from urllib.request import urlopen
from PIL import Image
from open_clip import create_model_from_pretrained, get_tokenizer # works on open-clip-torch>=2.23.0, timm>=0.9.8
model, preprocess = create_model_from_pretrained('hf-hub:timm/ViT-SO400M-14-SigLIP-384')
tokenizer = get_tokenizer('hf-hub:timm/ViT-SO400M-14-SigLIP-384')
image = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
image = preprocess(image).unsqueeze(0)
labels_list = ["a dog", "a cat", "a donut", "a beignet"]
text = tokenizer(labels_list, context_length=model.context_length)
with torch.no_grad(), torch.cuda.amp.autocast():
image_features = model.encode_image(image)
text_features = model.encode_text(text)
image_features = F.normalize(image_features, dim=-1)
text_features = F.normalize(text_features, dim=-1)
text_probs = torch.sigmoid(image_features @ text_features.T * model.logit_scale.exp() + model.logit_bias)
zipped_list = list(zip(labels_list, [round(p.item(), 3) for p in text_probs[0]]))
print("Label probabilities: ", zipped_list)
```
### With `timm` (for image embeddings)
```python
from urllib.request import urlopen
from PIL import Image
import timm
image = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'vit_so400m_patch14_siglip_384',
pretrained=True,
num_classes=0,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(image).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
```
## Citation
```bibtex
@article{zhai2023sigmoid,
title={Sigmoid loss for language image pre-training},
author={Zhai, Xiaohua and Mustafa, Basil and Kolesnikov, Alexander and Beyer, Lucas},
journal={arXiv preprint arXiv:2303.15343},
year={2023}
}
```
```bibtex
@misc{big_vision,
author = {Beyer, Lucas and Zhai, Xiaohua and Kolesnikov, Alexander},
title = {Big Vision},
year = {2022},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/google-research/big_vision}}
}
```
|
dmis-lab/biobert-base-cased-v1.1 | dmis-lab | "2020-10-14T07:02:59Z" | 80,119 | 16 | transformers | [
"transformers",
"pytorch",
"endpoints_compatible",
"region:us"
] | null | "2022-03-02T23:29:05Z" | Entry not found |
AP-HP/eds-pseudo-public | AP-HP | "2024-06-17T13:05:35Z" | 80,053 | 14 | edsnlp | [
"edsnlp",
"safetensors",
"medical",
"ner",
"nlp",
"pseudonymisation",
"token-classification",
"fr",
"license:bsd-3-clause",
"model-index",
"region:us"
] | token-classification | "2024-06-16T00:31:15Z" | ---
language:
- fr
pipeline_tag: token-classification
tags:
- medical
- ner
- nlp
- pseudonymisation
license: bsd-3-clause
library_name: edsnlp
model-index:
- name: AP-HP/eds-pseudo-public
results:
- task:
type: token-classification
dataset:
name: AP-HP Pseudo Test
type: private
metrics:
- type: precision
name: Token Scores / ADRESSE / Precision
value: 0.981694715087097
- type: recall
name: Token Scores / ADRESSE / Recall
value: 0.9693877551020401
- type: f1
name: Token Scores / ADRESSE / F1
value: 0.975502420419539
- type: recall
name: Token Scores / ADRESSE / Redact
value: 0.9763848396501451
- type: accuracy
name: Token Scores / ADRESSE / Redact Full
value: 0.9665697674418601
- type: precision
name: Token Scores / DATE / Precision
value: 0.9899177066870131
- type: recall
name: Token Scores / DATE / Recall
value: 0.984285249810339
- type: f1
name: Token Scores / DATE / F1
value: 0.9870934434692821
- type: recall
name: Token Scores / DATE / Redact
value: 0.9884035981359051
- type: accuracy
name: Token Scores / DATE / Redact Full
value: 0.859011627906976
- type: precision
name: Token Scores / DATE_NAISSANCE / Precision
value: 0.9753867791842471
- type: recall
name: Token Scores / DATE_NAISSANCE / Recall
value: 0.968913726859937
- type: f1
name: Token Scores / DATE_NAISSANCE / F1
value: 0.972139477834238
- type: recall
name: Token Scores / DATE_NAISSANCE / Redact
value: 0.9933636046105481
- type: accuracy
name: Token Scores / DATE_NAISSANCE / Redact Full
value: 0.9941860465116271
- type: precision
name: Token Scores / IPP / Precision
value: 0.918987341772151
- type: recall
name: Token Scores / IPP / Recall
value: 0.9075000000000001
- type: f1
name: Token Scores / IPP / F1
value: 0.9132075471698111
- type: recall
name: Token Scores / IPP / Redact
value: 0.985
- type: accuracy
name: Token Scores / IPP / Redact Full
value: 0.9927325581395341
- type: precision
name: Token Scores / MAIL / Precision
value: 0.9609144542772861
- type: recall
name: Token Scores / MAIL / Recall
value: 0.9977029096477791
- type: f1
name: Token Scores / MAIL / F1
value: 0.978963185574755
- type: recall
name: Token Scores / MAIL / Redact
value: 0.9977029096477791
- type: accuracy
name: Token Scores / MAIL / Redact Full
value: 0.9970930232558141
- type: precision
name: Token Scores / NDA / Precision
value: 0.921428571428571
- type: recall
name: Token Scores / NDA / Recall
value: 0.834951456310679
- type: f1
name: Token Scores / NDA / F1
value: 0.8760611205432931
- type: recall
name: Token Scores / NDA / Redact
value: 0.87378640776699
- type: accuracy
name: Token Scores / NDA / Redact Full
value: 0.9723837209302321
- type: precision
name: Token Scores / NOM / Precision
value: 0.9439770896724531
- type: recall
name: Token Scores / NOM / Recall
value: 0.9525013545241101
- type: f1
name: Token Scores / NOM / F1
value: 0.948220064724919
- type: recall
name: Token Scores / NOM / Redact
value: 0.981578472096803
- type: accuracy
name: Token Scores / NOM / Redact Full
value: 0.895348837209302
- type: precision
name: Token Scores / PRENOM / Precision
value: 0.9348837209302321
- type: recall
name: Token Scores / PRENOM / Recall
value: 0.9663461538461531
- type: f1
name: Token Scores / PRENOM / F1
value: 0.950354609929078
- type: recall
name: Token Scores / PRENOM / Redact
value: 0.99002849002849
- type: accuracy
name: Token Scores / PRENOM / Redact Full
value: 0.9316860465116271
- type: precision
name: Token Scores / SECU / Precision
value: 0.882838283828382
- type: recall
name: Token Scores / SECU / Recall
value: 1
- type: f1
name: Token Scores / SECU / F1
value: 0.9377738825591581
- type: recall
name: Token Scores / SECU / Redact
value: 1
- type: accuracy
name: Token Scores / SECU / Redact Full
value: 1.0
- type: precision
name: Token Scores / TEL / Precision
value: 0.9746407438715131
- type: recall
name: Token Scores / TEL / Recall
value: 0.9993932564791541
- type: f1
name: Token Scores / TEL / F1
value: 0.9868618136688491
- type: recall
name: Token Scores / TEL / Redact
value: 0.999479934124989
- type: accuracy
name: Token Scores / TEL / Redact Full
value: 0.99563953488372
- type: precision
name: Token Scores / VILLE / Precision
value: 0.96684350132626
- type: recall
name: Token Scores / VILLE / Recall
value: 0.9376205787781351
- type: f1
name: Token Scores / VILLE / F1
value: 0.9520078354554351
- type: recall
name: Token Scores / VILLE / Redact
value: 0.9511254019292601
- type: accuracy
name: Token Scores / VILLE / Redact Full
value: 0.9113372093023251
- type: precision
name: Token Scores / ZIP / Precision
value: 0.9675036927621861
- type: recall
name: Token Scores / ZIP / Recall
value: 1
- type: f1
name: Token Scores / ZIP / F1
value: 0.983483483483483
- type: recall
name: Token Scores / ZIP / Redact
value: 1
- type: accuracy
name: Token Scores / ZIP / Redact Full
value: 1.0
- type: precision
name: Token Scores / micro / Precision
value: 0.970393736698084
- type: recall
name: Token Scores / micro / Recall
value: 0.9783320880510371
- type: f1
name: Token Scores / micro / F1
value: 0.9743467434960551
- type: recall
name: Token Scores / micro / Redact
value: 0.9884667701208881
- type: accuracy
name: Token Scores / micro / Redact Full
value: 0.6308139534883721
extra_gated_fields:
Organisation: text
Intended use of the model:
type: select
options:
- NLP Research
- Education
- Commercial Product
- Clinical Data Warehouse
- label: Other
value: other
---
<div>
[<img style="display: inline" src="https://img.shields.io/github/actions/workflow/status/aphp/eds-pseudo/tests.yml?branch=main&label=tests&style=flat-square" alt="Tests">]()
[<img style="display: inline" src="https://img.shields.io/github/actions/workflow/status/aphp/eds-pseudo/documentation.yml?branch=main&label=docs&style=flat-square" alt="Documentation">](https://aphp.github.io/eds-pseudo/latest/)
[<img style="display: inline" src="https://img.shields.io/codecov/c/github/aphp/eds-pseudo?logo=codecov&style=flat-square" alt="Codecov">](https://codecov.io/gh/aphp/eds-pseudo)
[<img style="display: inline" src="https://img.shields.io/badge/repro-poetry-blue?style=flat-square" alt="Poetry">](https://python-poetry.org)
[<img style="display: inline" src="https://img.shields.io/badge/repro-dvc-blue?style=flat-square" alt="DVC">](https://dvc.org)
[<img style="display: inline" src="https://img.shields.io/badge/demo%20%F0%9F%9A%80-streamlit-purple?style=flat-square" alt="Demo">](https://eds-pseudo-public.streamlit.app/)
</div>
# EDS-Pseudo
This project aims at detecting identifying entities in documents, and was primarily tested
on clinical reports at AP-HP's Clinical Data Warehouse (EDS).
The model is built on top of [edsnlp](https://github.com/aphp/edsnlp), and consists of a
hybrid model (rule-based + deep learning) for which we provide
rules ([`eds-pseudo/pipes`](https://github.com/aphp/eds-pseudo/tree/main/eds_pseudo/pipes))
and a training recipe [`train.py`](https://github.com/aphp/eds-pseudo/blob/main/scripts/train.py).
We also provide some fictitious
templates ([`templates.txt`](https://github.com/aphp/eds-pseudo/blob/main/data/templates.txt)) and a script to
generate a synthetic
dataset [`generate_dataset.py`](https://github.com/aphp/eds-pseudo/blob/main/scripts/generate_dataset.py).
The entities that are detected are listed below.
| Label | Description |
|------------------|---------------------------------------------------------------|
| `ADRESSE` | Street address, eg `33 boulevard de Picpus` |
| `DATE` | Any absolute date other than a birthdate |
| `DATE_NAISSANCE` | Birthdate |
| `HOPITAL` | Hospital name, eg `Hôpital Rothschild` |
| `IPP` | Internal AP-HP identifier for patients, displayed as a number |
| `MAIL` | Email address |
| `NDA` | Internal AP-HP identifier for visits, displayed as a number |
| `NOM` | Any last name (patients, doctors, third parties) |
| `PRENOM` | Any first name (patients, doctors, etc) |
| `SECU` | Social security number |
| `TEL` | Any phone number |
| `VILLE` | Any city |
| `ZIP` | Any zip code |
## Downloading the public pre-trained model
The public pretrained model is available on the HuggingFace model hub at
[AP-HP/eds-pseudo-public](https://hf.co/AP-HP/eds-pseudo-public) and was trained on synthetic data
(see [`generate_dataset.py`](https://github.com/aphp/eds-pseudo/blob/main/scripts/generate_dataset.py)). You can also
test it directly on the **[demo](https://eds-pseudo-public.streamlit.app/)**.
1. Install the latest version of edsnlp
```shell
pip install "edsnlp[ml]" -U
```
2. Get access to the model at [AP-HP/eds-pseudo-public](https://hf.co/AP-HP/eds-pseudo-public)
3. Create and copy a huggingface token with permission **"READ"** at https://huggingface.co/settings/tokens?new_token=true
4. Register the token (only once) on your machine
```python
import huggingface_hub
huggingface_hub.login(token=YOUR_TOKEN, new_session=False, add_to_git_credential=True)
```
5. Load the model
```python
import edsnlp
nlp = edsnlp.load("AP-HP/eds-pseudo-public", auto_update=True)
doc = nlp(
"En 2015, M. Charles-François-Bienvenu "
"Myriel était évêque de Digne. C’était un vieillard "
"d’environ soixante-quinze ans ; il occupait le "
"siège de Digne depuis 2006."
)
for ent in doc.ents:
print(ent, ent.label_, str(ent._.date))
```
To apply the model on many documents using one or more GPUs, refer to the documentation
of [edsnlp](https://aphp.github.io/edsnlp/latest/tutorials/multiple-texts/).
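As a minimal sequential sketch (the example texts and output fields are illustrative assumptions; use the edsnlp documentation above for parallel or multi-GPU processing):
```python
import edsnlp
# Load the pretrained pipeline as shown above
nlp = edsnlp.load("AP-HP/eds-pseudo-public", auto_update=True)
# Illustrative documents; replace with your own corpus
texts = [
    "Mme Dupont, née le 12/03/1957, habite 12 rue de la Paix à Paris.",
    "Patient vu par le Dr Martin le 3 janvier 2021.",
]
rows = []
for doc_id, text in enumerate(texts):
    doc = nlp(text)  # one document at a time
    for ent in doc.ents:
        rows.append({
            "doc_id": doc_id,
            "label": ent.label_,
            "start": ent.start_char,
            "end": ent.end_char,
            "text": ent.text,
        })
print(rows)
```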
## Metrics
| AP-HP Pseudo Test Token Scores | Precision | Recall | F1 | Redact | Redact Full |
|:---------------------------------|------------:|---------:|-----:|---------:|--------------:|
| ADRESSE | 98.2 | 96.9 | 97.6 | 97.6 | 96.7 |
| DATE | 99 | 98.4 | 98.7 | 98.8 | 85.9 |
| DATE_NAISSANCE | 97.5 | 96.9 | 97.2 | 99.3 | 99.4 |
| IPP | 91.9 | 90.8 | 91.3 | 98.5 | 99.3 |
| MAIL | 96.1 | 99.8 | 97.9 | 99.8 | 99.7 |
| NDA | 92.1 | 83.5 | 87.6 | 87.4 | 97.2 |
| NOM | 94.4 | 95.3 | 94.8 | 98.2 | 89.5 |
| PRENOM | 93.5 | 96.6 | 95 | 99 | 93.2 |
| SECU | 88.3 | 100 | 93.8 | 100 | 100 |
| TEL | 97.5 | 99.9 | 98.7 | 99.9 | 99.6 |
| VILLE | 96.7 | 93.8 | 95.2 | 95.1 | 91.1 |
| ZIP | 96.8 | 100 | 98.3 | 100 | 100 |
| micro | 97 | 97.8 | 97.4 | 98.8 | 63.1 |
## Installation to reproduce
If you'd like to reproduce eds-pseudo's training or contribute to its development, you should first clone it:
```shell
git clone https://github.com/aphp/eds-pseudo.git
cd eds-pseudo
```
And install the dependencies. We recommend pinning the library version in your projects, or use a strict package manager
like [Poetry](https://python-poetry.org/).
```shell
poetry install
```
## How to use without machine learning
```python
import edsnlp
nlp = edsnlp.blank("eds")
# Some text cleaning
nlp.add_pipe("eds.normalizer")
# Various simple rules
nlp.add_pipe(
"eds_pseudo.simple_rules",
config={"pattern_keys": ["TEL", "MAIL", "SECU", "PERSON"]},
)
# Address detection
nlp.add_pipe("eds_pseudo.addresses")
# Date detection
nlp.add_pipe("eds_pseudo.dates")
# Contextual rules (requires a dict of info about the patient)
nlp.add_pipe("eds_pseudo.context")
# Apply it to a text
doc = nlp(
"En 2015, M. Charles-François-Bienvenu "
"Myriel était évêque de Digne. C’était un vieillard "
"d’environ soixante-quinze ans ; il occupait le "
"siège de Digne depuis 2006."
)
for ent in doc.ents:
print(ent, ent.label_)
# 2015 DATE
# Charles-François-Bienvenu NOM
# Myriel PRENOM
# 2006 DATE
```
## How to train
Before training a model, you should update the
[configs/config.cfg](https://github.com/aphp/eds-pseudo/blob/main/configs/config.cfg) and
[pyproject.toml](https://github.com/aphp/eds-pseudo/blob/main/pyproject.toml) files to
fit your needs.
Put your data in the `data/dataset` folder (or edit the paths `configs/config.cfg` file to point
to `data/gen_dataset/train.jsonl`).
Then, run the training script
```shell
python scripts/train.py --config configs/config.cfg --seed 43
```
This will train a model and save it in `artifacts/model-last`. You can evaluate it on the test set (defaults
to `data/dataset/test.jsonl`) with:
```shell
python scripts/evaluate.py --config configs/config.cfg
```
To package it, run:
```shell
python scripts/package.py
```
This will create a `dist/eds-pseudo-aphp-***.whl` file that you can install with `pip install dist/eds-pseudo-aphp-***`.
You can use it in your code:
```python
import edsnlp
# Either from the model path directly
nlp = edsnlp.load("artifacts/model-last")
# Or from the wheel file
import eds_pseudo_aphp
nlp = eds_pseudo_aphp.load()
```
## Documentation
Visit the [documentation](https://aphp.github.io/eds-pseudo/) for more information!
## Publication
Please find our publication at the following link: https://doi.org/mkfv.
If you use EDS-Pseudo, please cite us as below:
```
@article{eds_pseudo,
title={Development and validation of a natural language processing algorithm to pseudonymize documents in the context of a clinical data warehouse},
author={Tannier, Xavier and Wajsb{\"u}rt, Perceval and Calliger, Alice and Dura, Basile and Mouchet, Alexandre and Hilka, Martin and Bey, Romain},
journal={Methods of Information in Medicine},
year={2024},
publisher={Georg Thieme Verlag KG}
}
```
## Acknowledgement
We would like to thank [Assistance Publique – Hôpitaux de Paris](https://www.aphp.fr/)
and [AP-HP Foundation](https://fondationrechercheaphp.fr/) for funding this project.
|
timm/resnest101e.in1k | timm | "2023-04-23T23:37:24Z" | 80,000 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2004.08955",
"license:apache-2.0",
"region:us"
] | image-classification | "2023-04-23T23:36:40Z" | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for resnest101e.in1k
A ResNeSt (ResNet based architecture with Split Attention) image classification model. Trained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 48.3
- GMACs: 13.4
- Activations (M): 28.7
- Image size: 256 x 256
- **Papers:**
- ResNeSt: Split-Attention Networks: https://arxiv.org/abs/2004.08955
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/zhanghang1989/ResNeSt
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('resnest101e.in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'resnest101e.in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 128, 128, 128])
# torch.Size([1, 256, 64, 64])
# torch.Size([1, 512, 32, 32])
# torch.Size([1, 1024, 16, 16])
# torch.Size([1, 2048, 8, 8])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'resnest101e.in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2048, 8, 8) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{zhang2020resnest,
title={ResNeSt: Split-Attention Networks},
author={Zhang, Hang and Wu, Chongruo and Zhang, Zhongyue and Zhu, Yi and Zhang, Zhi and Lin, Haibin and Sun, Yue and He, Tong and Muller, Jonas and Manmatha, R. and Li, Mu and Smola, Alexander},
journal={arXiv preprint arXiv:2004.08955},
year={2020}
}
```
|
wangqixun/YamerMIX_v8 | wangqixun | "2024-01-30T07:49:12Z" | 79,921 | 15 | diffusers | [
"diffusers",
"safetensors",
"license:mit",
"diffusers:StableDiffusionXLCommonPipeline",
"region:us"
] | null | "2024-01-22T11:18:57Z" | ---
license: mit
---
# Model
The model is from [civitai-Yamer](https://civitai.com/models/84040?modelVersionId=196039). This is an excellent model! Thank you, Yamer!
For business inquiries, commercial licensing, custom models/commissions, large-scale image captioning for datasets, and consultation, contact me at yamer@rundiffusion.com
![image/png](https://cdn-uploads.huggingface.co/production/uploads/643665d33193f279361cc292/yI0NH-NN08uVd6v1obZeu.png)
|
neulab/codebert-c | neulab | "2023-02-27T20:56:38Z" | 79,850 | 5 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"arxiv:2302.05527",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-09-25T16:46:36Z" | This is a `microsoft/codebert-base-mlm` model, trained for 1,000,000 steps (with `batch_size=32`) on **C** code from the `codeparrot/github-code-clean` dataset, on the masked-language-modeling task.
It is intended to be used in CodeBERTScore: [https://github.com/neulab/code-bert-score](https://github.com/neulab/code-bert-score), but can be used for any other model or task.
For more information, see: [https://github.com/neulab/code-bert-score](https://github.com/neulab/code-bert-score)
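Since the checkpoint is a standard RoBERTa-style masked-language model, it can also be loaded with the Transformers `fill-mask` pipeline; the sketch below uses an illustrative C snippet and is independent of the CodeBERTScore workflow.
```python
from transformers import pipeline
# Standard masked-language-modeling usage; the C snippet is illustrative.
fill_mask = pipeline("fill-mask", model="neulab/codebert-c")
code = 'int main() { printf("Hello, <mask>!"); return 0; }'
for prediction in fill_mask(code, top_k=5):
    print(prediction["token_str"], round(prediction["score"], 3))
```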
## Citation
If you use this model for research, please cite:
```
@article{zhou2023codebertscore,
url = {https://arxiv.org/abs/2302.05527},
author = {Zhou, Shuyan and Alon, Uri and Agarwal, Sumit and Neubig, Graham},
title = {CodeBERTScore: Evaluating Code Generation with Pretrained Models of Code},
publisher = {arXiv},
year = {2023},
}
``` |
stabilityai/sd-turbo | stabilityai | "2024-07-10T11:38:51Z" | 79,768 | 345 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"autotrain_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-11-27T16:41:20Z" | ---
pipeline_tag: text-to-image
inference: false
---
# SD-Turbo Model Card
<!-- Provide a quick summary of what the model is/does. -->
![row01](output_tile.jpg)
SD-Turbo is a fast generative text-to-image model that can synthesize photorealistic images from a text prompt in a single network evaluation.
We release SD-Turbo as a research artifact, and to study small, distilled text-to-image models. For increased quality and prompt understanding,
we recommend [SDXL-Turbo](https://huggingface.co/stabilityai/sdxl-turbo/).
Please note: For commercial use, please refer to https://stability.ai/license.
## Model Details
### Model Description
SD-Turbo is a distilled version of [Stable Diffusion 2.1](https://huggingface.co/stabilityai/stable-diffusion-2-1), trained for real-time synthesis.
SD-Turbo is based on a novel training method called Adversarial Diffusion Distillation (ADD) (see the [technical report](https://stability.ai/research/adversarial-diffusion-distillation)), which allows sampling large-scale foundational
image diffusion models in 1 to 4 steps at high image quality.
This approach uses score distillation to leverage large-scale off-the-shelf image diffusion models as a teacher signal and combines this with an
adversarial loss to ensure high image fidelity even in the low-step regime of one or two sampling steps.
- **Developed by:** Stability AI
- **Funded by:** Stability AI
- **Model type:** Generative text-to-image model
- **Finetuned from model:** [Stable Diffusion 2.1](https://huggingface.co/stabilityai/stable-diffusion-2-1)
### Model Sources
For research purposes, we recommend our `generative-models` Github repository (https://github.com/Stability-AI/generative-models),
which implements the most popular diffusion frameworks (both training and inference).
- **Repository:** https://github.com/Stability-AI/generative-models
- **Paper:** https://stability.ai/research/adversarial-diffusion-distillation
- **Demo [for the bigger SDXL-Turbo]:** http://clipdrop.co/stable-diffusion-turbo
## Evaluation
![comparison1](image_quality_one_step.png)
![comparison2](prompt_alignment_one_step.png)
The charts above evaluate user preference for SD-Turbo over other single- and multi-step models.
SD-Turbo evaluated at a single step is preferred by human voters in terms of image quality and prompt following over LCM-Lora XL and LCM-Lora 1.5.
**Note:** For increased quality, we recommend the bigger version [SDXL-Turbo](https://huggingface.co/stabilityai/sdxl-turbo/).
For details on the user study, we refer to the [research paper](https://stability.ai/research/adversarial-diffusion-distillation).
## Uses
### Direct Use
The model is intended for both non-commercial and commercial usage. Possible research areas and tasks include
- Research on generative models.
- Research on real-time applications of generative models.
- Research on the impact of real-time generative models.
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
For commercial use, please refer to https://stability.ai/membership.
Excluded uses are described below.
### Diffusers
```
pip install diffusers transformers accelerate --upgrade
```
- **Text-to-image**:
SD-Turbo does not make use of `guidance_scale` or `negative_prompt`, so we disable them by setting `guidance_scale=0.0`.
Preferably, the model generates images of size 512x512 but higher image sizes work as well.
A **single step** is enough to generate high quality images.
```py
from diffusers import AutoPipelineForText2Image
import torch
pipe = AutoPipelineForText2Image.from_pretrained("stabilityai/sd-turbo", torch_dtype=torch.float16, variant="fp16")
pipe.to("cuda")
prompt = "A cinematic shot of a baby racoon wearing an intricate italian priest robe."
image = pipe(prompt=prompt, num_inference_steps=1, guidance_scale=0.0).images[0]
```
- **Image-to-image**:
When using SD-Turbo for image-to-image generation, make sure that `num_inference_steps` * `strength` is greater than or equal
to 1. The image-to-image pipeline will run for `int(num_inference_steps * strength)` steps, *e.g.* 2 * 0.5 = 1 step in the example
below.
```py
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image
import torch
pipe = AutoPipelineForImage2Image.from_pretrained("stabilityai/sd-turbo", torch_dtype=torch.float16, variant="fp16")
pipe.to("cuda")
init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png").resize((512, 512))
prompt = "cat wizard, gandalf, lord of the rings, detailed, fantasy, cute, adorable, Pixar, Disney, 8k"
image = pipe(prompt, image=init_image, num_inference_steps=2, strength=0.5, guidance_scale=0.0).images[0]
```
### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events,
and therefore using the model to generate such content is out-of-scope for the abilities of this model.
The model should not be used in any way that violates Stability AI's [Acceptable Use Policy](https://stability.ai/use-policy).
## Limitations and Bias
### Limitations
- The quality and prompt alignment are lower than those of [SDXL-Turbo](https://huggingface.co/stabilityai/sdxl-turbo/).
- The generated images are of a fixed resolution (512x512 pix), and the model does not achieve perfect photorealism.
- The model cannot render legible text.
- Faces and people in general may not be generated properly.
- The autoencoding part of the model is lossy.
### Recommendations
The model is intended for both non-commercial and commercial usage.
## How to Get Started with the Model
Check out https://github.com/Stability-AI/generative-models
|
cross-encoder/ms-marco-TinyBERT-L-2 | cross-encoder | "2021-08-05T08:39:52Z" | 79,625 | 16 | transformers | [
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-03-02T23:29:05Z" | ---
license: apache-2.0
---
# Cross-Encoder for MS Marco
This model was trained on the [MS Marco Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) task.
The model can be used for Information Retrieval: Given a query, encode the query with all possible passages (e.g. retrieved with ElasticSearch). Then sort the passages in decreasing order. See [SBERT.net Retrieve & Re-rank](https://www.sbert.net/examples/applications/retrieve_rerank/README.html) for more details. The training code is available here: [SBERT.net Training MS Marco](https://github.com/UKPLab/sentence-transformers/tree/master/examples/training/ms_marco)
## Usage with Transformers
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/ms-marco-TinyBERT-L-2')
tokenizer = AutoTokenizer.from_pretrained('cross-encoder/ms-marco-TinyBERT-L-2')
features = tokenizer(['How many people live in Berlin?', 'How many people live in Berlin?'], ['Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.', 'New York City is famous for the Metropolitan Museum of Art.'], padding=True, truncation=True, return_tensors="pt")
model.eval()
with torch.no_grad():
scores = model(**features).logits
print(scores)
```
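The snippet above only prints raw scores; the sorting step mentioned earlier is sketched below. This is a minimal sketch, assuming this card's model id and that the model outputs a single relevance logit per query/passage pair; replace the passages with your own retrieval results.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_name = 'cross-encoder/ms-marco-TinyBERT-L-2'
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model.eval()

query = 'How many people live in Berlin?'
passages = [
    'Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.',
    'New York City is famous for the Metropolitan Museum of Art.',
]

# Score every (query, passage) pair in one batch
features = tokenizer([query] * len(passages), passages, padding=True, truncation=True, return_tensors='pt')
with torch.no_grad():
    scores = model(**features).logits.squeeze(-1)

# Sort passages by decreasing relevance score
for passage, score in sorted(zip(passages, scores.tolist()), key=lambda x: x[1], reverse=True):
    print(f'{score:.3f}\t{passage}')
```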
## Usage with SentenceTransformers
The usage becomes easier when you have [SentenceTransformers](https://www.sbert.net/) installed. Then, you can use the pre-trained models like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('cross-encoder/ms-marco-TinyBERT-L-2', max_length=512)
scores = model.predict([('Query', 'Paragraph1'), ('Query', 'Paragraph2') , ('Query', 'Paragraph3')])
```
## Performance
In the following table, we provide various pre-trained Cross-Encoders together with their performance on the [TREC Deep Learning 2019](https://microsoft.github.io/TREC-2019-Deep-Learning/) and the [MS Marco Passage Reranking](https://github.com/microsoft/MSMARCO-Passage-Ranking/) dataset.
| Model-Name | NDCG@10 (TREC DL 19) | MRR@10 (MS Marco Dev) | Docs / Sec |
| ------------- |:-------------| -----| --- |
| **Version 2 models** | | |
| cross-encoder/ms-marco-TinyBERT-L-2-v2 | 69.84 | 32.56 | 9000
| cross-encoder/ms-marco-MiniLM-L-2-v2 | 71.01 | 34.85 | 4100
| cross-encoder/ms-marco-MiniLM-L-4-v2 | 73.04 | 37.70 | 2500
| cross-encoder/ms-marco-MiniLM-L-6-v2 | 74.30 | 39.01 | 1800
| cross-encoder/ms-marco-MiniLM-L-12-v2 | 74.31 | 39.02 | 960
| **Version 1 models** | | |
| cross-encoder/ms-marco-TinyBERT-L-2 | 67.43 | 30.15 | 9000
| cross-encoder/ms-marco-TinyBERT-L-4 | 68.09 | 34.50 | 2900
| cross-encoder/ms-marco-TinyBERT-L-6 | 69.57 | 36.13 | 680
| cross-encoder/ms-marco-electra-base | 71.99 | 36.41 | 340
| **Other models** | | |
| nboost/pt-tinybert-msmarco | 63.63 | 28.80 | 2900
| nboost/pt-bert-base-uncased-msmarco | 70.94 | 34.75 | 340
| nboost/pt-bert-large-msmarco | 73.36 | 36.48 | 100
| Capreolus/electra-base-msmarco | 71.23 | 36.89 | 340
| amberoad/bert-multilingual-passage-reranking-msmarco | 68.40 | 35.54 | 330
| sebastian-hofstaetter/distilbert-cat-margin_mse-T2-msmarco | 72.82 | 37.88 | 720
Note: Runtime was computed on a V100 GPU.
|
ALINEAR/albert-japanese-v2 | ALINEAR | "2020-05-04T13:20:53Z" | 79,567 | 2 | transformers | [
"transformers",
"pytorch",
"albert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-03-02T23:29:04Z" | Entry not found |
timm/res2net101_26w_4s.in1k | timm | "2023-04-24T00:07:55Z" | 79,517 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:1904.01169",
"license:unknown",
"region:us"
] | image-classification | "2023-04-24T00:07:15Z" | ---
tags:
- image-classification
- timm
library_name: timm
license: unknown
datasets:
- imagenet-1k
---
# Model card for res2net101_26w_4s.in1k
A Res2Net (Multi-Scale ResNet) image classification model. Trained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 45.2
- GMACs: 8.1
- Activations (M): 18.4
- Image size: 224 x 224
- **Papers:**
- Res2Net: A New Multi-scale Backbone Architecture: https://arxiv.org/abs/1904.01169
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/gasvn/Res2Net/
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('res2net101_26w_4s.in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'res2net101_26w_4s.in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 112, 112])
# torch.Size([1, 256, 56, 56])
# torch.Size([1, 512, 28, 28])
# torch.Size([1, 1024, 14, 14])
# torch.Size([1, 2048, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'res2net101_26w_4s.in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2048, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{gao2019res2net,
title={Res2Net: A New Multi-scale Backbone Architecture},
author={Gao, Shang-Hua and Cheng, Ming-Ming and Zhao, Kai and Zhang, Xin-Yu and Yang, Ming-Hsuan and Torr, Philip},
journal={IEEE TPAMI},
doi={10.1109/TPAMI.2019.2938758},
}
```
|
LoneStriker/Yarn-Llama-2-70b-32k-2.4bpw-h6-exl2 | LoneStriker | "2023-11-22T04:07:11Z" | 79,507 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"custom_code",
"en",
"dataset:emozilla/yarn-train-tokenized-8k-llama",
"arxiv:2309.00071",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-11-22T03:54:33Z" | ---
metrics:
- perplexity
library_name: transformers
license: apache-2.0
language:
- en
datasets:
- emozilla/yarn-train-tokenized-8k-llama
---
# Model Card: Yarn-Llama-2-70b-32k
[Preprint (arXiv)](https://arxiv.org/abs/2309.00071)
[GitHub](https://github.com/jquesnelle/yarn)
![yarn](https://raw.githubusercontent.com/jquesnelle/yarn/70b/data/proofpile-long-small-32k-70b.csv.png)
The authors would like to thank [LAION AI](https://laion.ai/) for their support of compute for this model.
It was trained on the [JUWELS](https://www.fz-juelich.de/en/ias/jsc/systems/supercomputers/juwels) supercomputer.
## Model Description
Nous-Yarn-Llama-2-70b-32k is a state-of-the-art language model for long context, further pretrained on long context data for 400 steps using the YaRN extension method.
It is an extension of [Llama-2-70b-hf](meta-llama/Llama-2-70b-hf) and supports a 32k token context window.
To use, pass `trust_remote_code=True` when loading the model, for example
```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("NousResearch/Yarn-Llama-2-70b-32k",
use_flash_attention_2=True,
torch_dtype=torch.bfloat16,
device_map="auto",
trust_remote_code=True)
```
In addition you will need to use the latest version of `transformers` (until 4.35 comes out)
```sh
pip install git+https://github.com/huggingface/transformers
```
## Benchmarks
Long context benchmarks:
| Model | Context Window | 1k PPL | 2k PPL | 4k PPL | 8k PPL | 16k PPL | 32k PPL |
|-------|---------------:|-------:|--------:|------:|-------:|--------:|--------:|
| [Llama-2-70b-hf](meta-llama/Llama-2-70b-hf) | 4k | 3.71 | 3.27 | 2.96 | - | - | - |
| [Yarn-Llama-2-70b-32k](https://huggingface.co/NousResearch/Yarn-Llama-2-70b-32k) | 32k | 3.61 | 3.22 | 2.91 | 2.82 | 2.45 | 2.23 |
Short context benchmarks showing that quality degradation is minimal:
| Model | Context Window | ARC-c | MMLU | Truthful QA |
|-------|---------------:|------:|-----:|------------:|
| [Llama-2-70b-hf](meta-llama/Llama-2-70b-hf) | 4k | 67.32 | 69.83 | 44.92 |
| [Yarn-Llama-2-70b-32k](https://huggingface.co/NousResearch/Yarn-Llama-2-70b-32k) | 32k | 67.41 | 68.84 | 46.14 |
## Collaborators
- [bloc97](https://github.com/bloc97): Methods, paper and evals
- [@theemozilla](https://twitter.com/theemozilla): Methods, paper, model training, and evals
- [@EnricoShippole](https://twitter.com/EnricoShippole): Model training
- [honglu2875](https://github.com/honglu2875): Paper and evals
|
timm/dla102.in1k | timm | "2023-04-24T21:14:22Z" | 79,482 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:1707.06484",
"license:bsd-3-clause",
"region:us"
] | image-classification | "2023-04-24T19:35:50Z" | ---
tags:
- image-classification
- timm
library_name: timm
license: bsd-3-clause
datasets:
- imagenet-1k
---
# Model card for dla102.in1k
A DLA (Deep Layer Aggregation) image classification model. Trained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 33.3
- GMACs: 7.2
- Activations (M): 14.2
- Image size: 224 x 224
- **Papers:**
- Deep Layer Aggregation: https://arxiv.org/abs/1707.06484
- **Original:** https://github.com/ucbdrive/dla
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('dla102.in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'dla102.in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 32, 112, 112])
# torch.Size([1, 128, 56, 56])
# torch.Size([1, 256, 28, 28])
# torch.Size([1, 512, 14, 14])
# torch.Size([1, 1024, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'dla102.in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1024, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@inproceedings{yu2018deep,
title={Deep layer aggregation},
author={Yu, Fisher and Wang, Dequan and Shelhamer, Evan and Darrell, Trevor},
booktitle={Proceedings of the IEEE conference on computer vision and pattern recognition},
year={2018}
}
```
|
timm/res2next50.in1k | timm | "2023-04-24T00:08:47Z" | 79,446 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:1904.01169",
"license:unknown",
"region:us"
] | image-classification | "2023-04-24T00:08:28Z" | ---
tags:
- image-classification
- timm
library_name: timm
license: unknown
datasets:
- imagenet-1k
---
# Model card for res2next50.in1k
A Res2Net (Multi-Scale ResNet) image classification model. Trained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 24.7
- GMACs: 4.2
- Activations (M): 13.7
- Image size: 224 x 224
- **Papers:**
- Res2Net: A New Multi-scale Backbone Architecture: https://arxiv.org/abs/1904.01169
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/gasvn/Res2Net/
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('res2next50.in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'res2next50.in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 112, 112])
# torch.Size([1, 256, 56, 56])
# torch.Size([1, 512, 28, 28])
# torch.Size([1, 1024, 14, 14])
# torch.Size([1, 2048, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'res2next50.in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2048, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{gao2019res2net,
title={Res2Net: A New Multi-scale Backbone Architecture},
author={Gao, Shang-Hua and Cheng, Ming-Ming and Zhao, Kai and Zhang, Xin-Yu and Yang, Ming-Hsuan and Torr, Philip},
journal={IEEE TPAMI},
doi={10.1109/TPAMI.2019.2938758},
}
```
|
hustvl/yolos-small | hustvl | "2024-05-08T07:49:12Z" | 79,400 | 59 | transformers | [
"transformers",
"pytorch",
"safetensors",
"yolos",
"object-detection",
"vision",
"dataset:coco",
"arxiv:2106.00666",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | object-detection | "2022-04-26T09:38:22Z" | ---
license: apache-2.0
tags:
- object-detection
- vision
datasets:
- coco
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/savanna.jpg
example_title: Savanna
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/football-match.jpg
example_title: Football Match
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/airport.jpg
example_title: Airport
---
# YOLOS (small-sized) model
YOLOS model fine-tuned on COCO 2017 object detection (118k annotated images). It was introduced in the paper [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) by Fang et al. and first released in [this repository](https://github.com/hustvl/YOLOS).
Disclaimer: The team releasing YOLOS did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
YOLOS is a Vision Transformer (ViT) trained using the DETR loss. Despite its simplicity, a base-sized YOLOS model is able to achieve 42 AP on COCO validation 2017 (similar to DETR and more complex frameworks such as Faster R-CNN).
The model is trained using a "bipartite matching loss": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a "no object" as class and "no bounding box" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model.
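To make the matching step concrete, the following is an illustrative sketch (not the model's actual training code) of how the Hungarian algorithm produces an optimal one-to-one assignment from a prediction-to-target cost matrix; in the real loss the matching cost combines class probability, L1 box distance and generalized IoU.

```python
# Toy illustration of the bipartite matching step (scipy's Hungarian solver).
import numpy as np
from scipy.optimize import linear_sum_assignment

num_queries, num_objects = 5, 2   # real YOLOS uses N = 100 object queries
rng = np.random.default_rng(0)

# cost[i, j] = how poorly prediction i matches ground-truth object j;
# in practice it mixes -class_prob, L1 box distance and generalized IoU cost.
cost = rng.random((num_queries, num_objects))

pred_idx, obj_idx = linear_sum_assignment(cost)
print(list(zip(pred_idx.tolist(), obj_idx.tolist())))
# Predictions left unmatched are trained to output the "no object" class.
```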
## Intended uses & limitations
You can use the raw model for object detection. See the [model hub](https://huggingface.co/models?search=hustvl/yolos) to look for all available YOLOS models.
### How to use
Here is how to use this model:
```python
from transformers import YolosFeatureExtractor, YolosForObjectDetection
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = YolosFeatureExtractor.from_pretrained('hustvl/yolos-small')
model = YolosForObjectDetection.from_pretrained('hustvl/yolos-small')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
# model predicts bounding boxes and corresponding COCO classes
logits = outputs.logits
bboxes = outputs.pred_boxes
```
Currently, both the feature extractor and model support PyTorch.
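To turn `logits` and `pred_boxes` into labeled, image-sized boxes, recent `transformers` versions expose a post-processing helper on the feature extractor / image processor. Below is a sketch continuing the snippet above; the `threshold` value is an arbitrary choice.

```python
import torch

# Rescale and filter the raw predictions from the snippet above
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
results = feature_extractor.post_process_object_detection(
    outputs, threshold=0.9, target_sizes=target_sizes
)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), [round(c, 1) for c in box.tolist()])
```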
## Training data
The YOLOS model was pre-trained on [ImageNet-1k](https://huggingface.co/datasets/imagenet2012) and fine-tuned on [COCO 2017 object detection](https://cocodataset.org/#download), a dataset consisting of 118k/5k annotated images for training/validation respectively.
### Training
The model was pre-trained for 200 epochs on ImageNet-1k and fine-tuned for 150 epochs on COCO.
## Evaluation results
This model achieves an AP (average precision) of **36.1** on COCO 2017 validation. For more details regarding evaluation results, we refer to table 1 of the original paper.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-00666,
author = {Yuxin Fang and
Bencheng Liao and
Xinggang Wang and
Jiemin Fang and
Jiyang Qi and
Rui Wu and
Jianwei Niu and
Wenyu Liu},
title = {You Only Look at One Sequence: Rethinking Transformer in Vision through
Object Detection},
journal = {CoRR},
volume = {abs/2106.00666},
year = {2021},
url = {https://arxiv.org/abs/2106.00666},
eprinttype = {arXiv},
eprint = {2106.00666},
timestamp = {Fri, 29 Apr 2022 19:49:16 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-00666.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
Yntec/DreamPhotoGASM | Yntec | "2024-04-18T01:39:23Z" | 79,368 | 15 | diffusers | [
"diffusers",
"safetensors",
"Photorealistic",
"Realism",
"Girls",
"epinikion",
"text-to-image",
"stable-diffusion",
"stable-diffusion-diffusers",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2024-04-17T20:41:17Z" | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- Photorealistic
- Realism
- Girls
- epinikion
- text-to-image
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
---
# DreamPhoto Gorgeous Advanced Sexy Model
Samples and prompts:
![Free AI image generator Dream Photo gasm](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/O6YzDgfpqdPlYvCxFmbeo.png)
(Click for larger)
Top left: Woman Amazon, Brazilian 1girl "amazon_woman face" wearing bikini and 1man "male face" wearing swim shorts, Scenario, official art, beautiful and aesthetic, colorful, realistic light, at night, intricate details, intricate details, hyperdetailed, dramatic light, cinematic, silhouette, smiling couple on a deck of a cruise ship, at night, looking at viewer
Top right: in rain, red shiny sports jersey, flirty, long sleeves,oversized soaking clothes, Freckles, long hair, ginger hair, white background, grey eyes, simple background, 1girl, full body,
Bottom left: Cute girl, pretty girl. features, body. Wide hips. Pale. Small, mousey nose. Closed mouth, small smile. Messy blonde hair, short. Wearing a teal two piece bikini, triangl brand. Short hair, pixie haircut. On sandy, rocky beach. Beautiful face, eyes. Blushing, sweaty, shy. Sunset. Natural lighting. 4k, high quality picture, sharp image. Pixie haircut. Bikini.
Bottom right: Dramatic cinematic 1980 movie still, handsome VALENTINE MAN kissing pretty woman with cleavage, classroom, school Uniforms, blackboard. Pinup. He wears a backpack, bokeh
epiCPhotoGASM by epinikion merged with artistic models to make your dreams come true!
Original page:
https://civitai.com/models/132632?modelVersionId=145885 |
timm/dpn107.mx_in1k | timm | "2023-04-21T22:00:16Z" | 79,191 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:1707.01629",
"license:apache-2.0",
"region:us"
] | image-classification | "2023-04-21T21:58:56Z" | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for dpn107.mx_in1k
A DPN (Dual-Path Net) image classification model. Trained on ImageNet-1k in MXNet by paper authors and ported to PyTorch by Ross Wightman.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 86.9
- GMACs: 18.4
- Activations (M): 33.5
- Image size: 224 x 224
- **Papers:**
- Dual Path Networks: https://arxiv.org/abs/1707.01629
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/cypw/DPNs
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('dpn107.mx_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'dpn107.mx_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 128, 112, 112])
# torch.Size([1, 376, 56, 56])
# torch.Size([1, 1152, 28, 28])
# torch.Size([1, 2432, 14, 14])
# torch.Size([1, 2688, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'dpn107.mx_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2688, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Citation
```bibtex
@article{Chen2017,
title={Dual Path Networks},
  author={Yunpeng Chen and Jianan Li and Huaxin Xiao and Xiaojie Jin and Shuicheng Yan and Jiashi Feng},
journal={arXiv preprint arXiv:1707.01629},
year={2017}
}
```
|
citizenlab/twitter-xlm-roberta-base-sentiment-finetunned | citizenlab | "2022-12-02T13:49:38Z" | 79,189 | 34 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"en",
"nl",
"fr",
"pt",
"it",
"es",
"de",
"da",
"pl",
"af",
"dataset:jigsaw_toxicity_pred",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-03-02T23:29:05Z" | ---
pipeline_type: "text-classification"
widget:
- text: "this is a lovely message"
example_title: "Example 1"
multi_class: false
- text: "you are an idiot and you and your family should go back to your country"
example_title: "Example 2"
multi_class: false
language:
- en
- nl
- fr
- pt
- it
- es
- de
- da
- pl
- af
datasets:
- jigsaw_toxicity_pred
metrics:
- F1 Accuracy
---
# citizenlab/twitter-xlm-roberta-base-sentiment-finetunned
This is a multilingual XLM-RoBERTa sequence classifier, fine-tuned from the [Cardiff NLP Group](cardiffnlp/twitter-roberta-base-sentiment) sentiment classification model.
## How to use it
```python
from transformers import pipeline
model_path = "citizenlab/twitter-xlm-roberta-base-sentiment-finetunned"
sentiment_classifier = pipeline("text-classification", model=model_path, tokenizer=model_path)
sentiment_classifier("this is a lovely message")
> [{'label': 'Positive', 'score': 0.9918450713157654}]
sentiment_classifier("you are an idiot and you and your family should go back to your country")
> [{'label': 'Negative', 'score': 0.9849833846092224}]
```
## Evaluation
```
              precision    recall  f1-score   support

    Negative       0.57      0.14      0.23        28
     Neutral       0.78      0.94      0.86       132
    Positive       0.89      0.80      0.85        51

    accuracy                           0.80       211
   macro avg       0.75      0.63      0.64       211
weighted avg       0.78      0.80      0.77       211
```
|
transformersbook/pegasus-samsum | transformersbook | "2022-02-05T17:05:28Z" | 79,133 | 6 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2022-03-02T23:29:05Z" | ---
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegasus-samsum-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum-test
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset. The model is trained in Chapter 6: Summarization in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/06_summarization.ipynb).
It achieves the following results on the evaluation set:
- Loss: 1.4875
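The card itself provides no inference example; a minimal sketch using the `transformers` summarization pipeline (the dialogue below is a made-up sample in the SAMSum style) could look like:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="transformersbook/pegasus-samsum")

dialogue = """Hannah: Hey, do you have Betty's number?
Amanda: Lemme check ... sorry, can't find it.
Hannah: Ok, thanks anyway. Bye!"""

print(summarizer(dialogue)[0]["summary_text"])
```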
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7012 | 0.54 | 500 | 1.4875 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
Epiculous/Violet_Twilight-v0.2-GGUF | Epiculous | "2024-10-13T12:35:15Z" | 79,098 | 18 | null | [
"gguf",
"merge",
"text-generation",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ru",
"zh",
"ja",
"dataset:Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned",
"dataset:anthracite-org/stheno-filtered-v1.1",
"dataset:PJMixers/hieunguyenminh_roleplay-deduped-ShareGPT",
"dataset:Gryphe/Sonnet3.5-Charcard-Roleplay",
"dataset:Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned",
"dataset:anthracite-org/kalo-opus-instruct-22k-no-refusal",
"dataset:anthracite-org/nopm_claude_writing_fixed",
"dataset:anthracite-org/kalo_opus_misc_240827",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | text-generation | "2024-09-13T13:19:21Z" | ---
license: apache-2.0
datasets:
- Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
- anthracite-org/stheno-filtered-v1.1
- PJMixers/hieunguyenminh_roleplay-deduped-ShareGPT
- Gryphe/Sonnet3.5-Charcard-Roleplay
- Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
- anthracite-org/kalo-opus-instruct-22k-no-refusal
- anthracite-org/nopm_claude_writing_fixed
- anthracite-org/kalo_opus_misc_240827
language:
- en
- fr
- de
- es
- it
- pt
- ru
- zh
- ja
pipeline_tag: text-generation
tags:
- merge
---
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64adfd277b5ff762771e4571/P962FQhRG4I8nbU_DJolY.png)
Now for something a bit different, Violet_Twilight-v0.2! This model is a SLERP merge of Azure_Dusk-v0.2 and Crimson_Dawn-v0.2!
# Quants!
[full](https://huggingface.co/Epiculous/Violet_Twilight-v0.2) / [exl2](https://huggingface.co/Epiculous/Violet_Twilight-v0.2-exl2) / <strong>gguf</strong>
## Prompting
The v0.2 models are trained on ChatML, the prompting structure goes a little something like this:
```
<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
<|im_start|>assistant
```
### Context and Instruct
The v0.2 models are trained on ChatML, please use that Context and Instruct template.
### Current Top Sampler Settings
[Smooth Creativity](https://files.catbox.moe/0ihfir.json): Credit to Juelsman for researching this one!<br/>
[Variant Chimera](https://files.catbox.moe/h7vd45.json): Credit to Numbra!<br/>
[Spicy_Temp](https://files.catbox.moe/9npj0z.json) <br/>
[Violet_Twilight-Nitral-Special](https://files.catbox.moe/ot54u3.json) <br/>
## Merging
The following config was used to merge Azure Dusk and Crimson Dawn
```yaml
slices:
- sources:
- model: Epiculous/Azure_Dusk-v0.2
layer_range: [0, 40]
- model: Epiculous/Crimson_Dawn-V0.2
layer_range: [0, 40]
merge_method: slerp
base_model: Epiculous/Azure_Dusk-v0.2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5 # fallback for rest of tensors
dtype: bfloat16
``` |
timm/fbnetc_100.rmsp_in1k | timm | "2023-04-27T21:13:21Z" | 78,974 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:1812.03443",
"license:apache-2.0",
"region:us"
] | image-classification | "2022-12-12T23:59:14Z" | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for fbnetc_100.rmsp_in1k
An FBNet image classification model. Trained on ImageNet-1k in `timm` using the recipe template described below.
Recipe details:
* A simple RMSProp-based recipe without RandAugment, using RandomErasing, mixup, dropout and standard random-resize-crop augmentation.
* RMSProp (TF 1.0 behaviour) optimizer, EMA weight averaging
* Step (exponential decay w/ staircase) LR schedule with warmup
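As a rough sketch of how these ingredients map onto `timm`'s optimizer and scheduler factories (the hyperparameter values below are placeholders, not the ones used to train this checkpoint):

```python
import timm
from timm.optim import create_optimizer_v2
from timm.scheduler import StepLRScheduler
from timm.utils import ModelEmaV2

model = timm.create_model('fbnetc_100', pretrained=False, drop_rate=0.2)  # dropout

# RMSProp (TF 1.0 behaviour) optimizer and EMA weight averaging
optimizer = create_optimizer_v2(model, opt='rmsproptf', lr=0.064, weight_decay=1e-5)
ema = ModelEmaV2(model, decay=0.9999)

# Step (exponential decay w/ staircase) LR schedule with warmup
scheduler = StepLRScheduler(optimizer, decay_t=2.4, decay_rate=0.97, warmup_t=3, warmup_lr_init=1e-6)

# RandomErasing and mixup live on the data side, e.g. via
# timm.data.create_transform(..., is_training=True, re_prob=0.2) and timm.data.Mixup(mixup_alpha=0.2)
```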
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 5.6
- GMACs: 0.4
- Activations (M): 6.5
- Image size: 224 x 224
- **Papers:**
- FBNet: Hardware-Aware Efficient ConvNet Design via Differentiable Neural Architecture Search: https://arxiv.org/abs/1812.03443
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/huggingface/pytorch-image-models
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('fbnetc_100.rmsp_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'fbnetc_100.rmsp_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 16, 112, 112])
# torch.Size([1, 24, 56, 56])
# torch.Size([1, 32, 28, 28])
# torch.Size([1, 112, 14, 14])
# torch.Size([1, 352, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'fbnetc_100.rmsp_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1984, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@inproceedings{wu2019fbnet,
title={Fbnet: Hardware-aware efficient convnet design via differentiable neural architecture search},
author={Wu, Bichen and Dai, Xiaoliang and Zhang, Peizhao and Wang, Yanghan and Sun, Fei and Wu, Yiming and Tian, Yuandong and Vajda, Peter and Jia, Yangqing and Keutzer, Kurt},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={10734--10742},
year={2019}
}
```
|
timm/inception_v3.gluon_in1k | timm | "2023-04-25T21:28:50Z" | 78,934 | 1 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:1512.00567",
"license:apache-2.0",
"region:us"
] | image-classification | "2023-04-25T21:28:32Z" | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for inception_v3.gluon_in1k
An Inception-v3 image classification model. Trained on ImageNet-1k by MxNet GLUON authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 23.8
- GMACs: 5.7
- Activations (M): 9.0
- Image size: 299 x 299
- **Papers:**
- Rethinking the Inception Architecture for Computer Vision: https://arxiv.org/abs/1512.00567
- **Original:** https://github.com/tensorflow/models
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('inception_v3.gluon_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'inception_v3.gluon_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 147, 147])
# torch.Size([1, 192, 71, 71])
# torch.Size([1, 288, 35, 35])
# torch.Size([1, 768, 17, 17])
# torch.Size([1, 2048, 8, 8])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'inception_v3.gluon_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2048, 8, 8) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{DBLP:journals/corr/SzegedyVISW15,
author = {Christian Szegedy and
Vincent Vanhoucke and
Sergey Ioffe and
Jonathon Shlens and
Zbigniew Wojna},
title = {Rethinking the Inception Architecture for Computer Vision},
journal = {CoRR},
volume = {abs/1512.00567},
year = {2015},
url = {http://arxiv.org/abs/1512.00567},
archivePrefix = {arXiv},
eprint = {1512.00567},
timestamp = {Mon, 13 Aug 2018 16:49:07 +0200},
biburl = {https://dblp.org/rec/journals/corr/SzegedyVISW15.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
timm/spnasnet_100.rmsp_in1k | timm | "2023-04-27T21:14:40Z" | 78,756 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:1904.02877",
"license:apache-2.0",
"region:us"
] | image-classification | "2022-12-13T00:01:11Z" | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for spnasnet_100.rmsp_in1k
An SPNasNet image classification model. Trained on ImageNet-1k in `timm` using the recipe template described below.
Recipe details:
* A simple RMSProp-based recipe without RandAugment, using RandomErasing, mixup, dropout and standard random-resize-crop augmentation.
* RMSProp (TF 1.0 behaviour) optimizer, EMA weight averaging
* Step (exponential decay w/ staircase) LR schedule with warmup
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 4.4
- GMACs: 0.3
- Activations (M): 6.0
- Image size: 224 x 224
- **Papers:**
- Single-Path NAS: Designing Hardware-Efficient ConvNets in less than 4 Hours: https://arxiv.org/abs/1904.02877
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/huggingface/pytorch-image-models
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('spnasnet_100.rmsp_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'spnasnet_100.rmsp_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 16, 112, 112])
# torch.Size([1, 24, 56, 56])
# torch.Size([1, 40, 28, 28])
# torch.Size([1, 96, 14, 14])
# torch.Size([1, 320, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'spnasnet_100.rmsp_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1280, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@inproceedings{stamoulis2020single,
title={Single-path nas: Designing hardware-efficient convnets in less than 4 hours},
author={Stamoulis, Dimitrios and Ding, Ruizhou and Wang, Di and Lymberopoulos, Dimitrios and Priyantha, Bodhi and Liu, Jie and Marculescu, Diana},
  booktitle={Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2019, W{\"u}rzburg, Germany, September 16--20, 2019, Proceedings, Part II},
pages={481--497},
year={2020},
organization={Springer}
}
```
|
Helsinki-NLP/opus-mt-ja-en | Helsinki-NLP | "2023-08-16T11:59:08Z" | 78,488 | 53 | transformers | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ja",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | "2022-03-02T23:29:04Z" | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-ja-en
* source languages: ja
* target languages: en
* OPUS readme: [ja-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ja-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/ja-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ja-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ja-en/opus-2019-12-18.eval.txt)
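The card does not include a usage snippet; a minimal sketch with the `transformers` Marian classes and this card's model id:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-ja-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate Japanese to English
batch = tokenizer(["今日は天気がいいですね。"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```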
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.ja.en | 41.7 | 0.589 |
|
timm/convit_base.fb_in1k | timm | "2023-04-24T04:14:31Z" | 78,248 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2103.10697",
"license:apache-2.0",
"region:us"
] | image-classification | "2023-04-24T04:13:12Z" | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for convit_base.fb_in1k
A ConViT image classification model. Trained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 86.5
- GMACs: 17.5
- Activations (M): 31.8
- Image size: 224 x 224
- **Papers:**
- ConViT: Improving Vision Transformers with Soft Convolutional Inductive Biases: https://arxiv.org/abs/2103.10697
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/facebookresearch/convit
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('convit_base.fb_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'convit_base.fb_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 197, 768) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{d2021convit,
title={ConViT: Improving Vision Transformers with Soft Convolutional Inductive Biases},
  author={d'Ascoli, St{\'e}phane and Touvron, Hugo and Leavitt, Matthew and Morcos, Ari and Biroli, Giulio and Sagun, Levent},
journal={arXiv preprint arXiv:2103.10697},
year={2021}
}
```
|
timm/ghostnet_100.in1k | timm | "2023-08-20T06:13:05Z" | 77,932 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:1911.11907",
"license:apache-2.0",
"region:us"
] | image-classification | "2023-08-19T23:28:44Z" | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for ghostnet_100.in1k
A GhostNet image classification model. Trained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 5.2
- GMACs: 0.1
- Activations (M): 3.5
- Image size: 224 x 224
- **Papers:**
- GhostNet: More Features from Cheap Operations: https://arxiv.org/abs/1911.11907
- **Original:** https://github.com/huawei-noah/Efficient-AI-Backbones
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('ghostnet_100.in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'ghostnet_100.in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 16, 112, 112])
# torch.Size([1, 24, 56, 56])
# torch.Size([1, 40, 28, 28])
# torch.Size([1, 80, 14, 14])
# torch.Size([1, 160, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'ghostnet_100.in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 960, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Citation
```bibtex
@InProceedings{Han_2020_CVPR,
author = {Han, Kai and Wang, Yunhe and Tian, Qi and Guo, Jianyuan and Xu, Chunjing and Xu, Chang},
title = {GhostNet: More Features From Cheap Operations},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2020}
}
```
|
timm/botnet26t_256.c1_in1k | timm | "2023-04-26T16:09:11Z" | 77,830 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2110.00476",
"arxiv:2101.11605",
"license:apache-2.0",
"region:us"
] | image-classification | "2023-04-26T16:08:51Z" | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for botnet26t_256.c1_in1k
A BotNet image classification model (based on ResNet architecture). Trained on ImageNet-1k in `timm` by Ross Wightman.
NOTE: this model did not adhere to any specific paper configuration; it was tuned for reasonable training times and a reduced frequency of self-attention blocks.
Recipe details:
* Based on [ResNet Strikes Back](https://arxiv.org/abs/2110.00476) `C` recipes
* SGD (w/ Nesterov) optimizer and AGC (adaptive gradient clipping).
* Cosine LR schedule with warmup
This model architecture is implemented using `timm`'s flexible [BYOBNet (Bring-Your-Own-Blocks Network)](https://github.com/huggingface/pytorch-image-models/blob/main/timm/models/byobnet.py).
BYOB (with BYOANet attention specific blocks) allows configuration of:
* block / stage layout
* block-type interleaving
* stem layout
* output stride (dilation)
* activation and norm layers
* channel and spatial / self-attention layers
...and also includes `timm` features common to many other architectures, including:
* stochastic depth
* gradient checkpointing
* layer-wise LR decay
* per-stage feature extraction
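A brief, hedged sketch of how a few of the options above can be combined at model creation time (assuming a recent `timm` release; the specific values are illustrative, not a tuned recipe):
```python
import timm

# Per-stage feature extraction with stochastic depth (illustrative values)
feature_model = timm.create_model(
    'botnet26t_256.c1_in1k',
    pretrained=True,
    features_only=True,     # return intermediate feature maps instead of logits
    out_indices=(2, 3, 4),  # keep only the last three stages
    drop_path_rate=0.1,     # stochastic depth
)

# Gradient checkpointing on the full classification model (e.g. when fine-tuning)
model = timm.create_model('botnet26t_256.c1_in1k', pretrained=True)
model.set_grad_checkpointing(True)
```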
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 12.5
- GMACs: 3.3
- Activations (M): 12.0
- Image size: 256 x 256
- **Papers:**
- Bottleneck Transformers for Visual Recognition: https://arxiv.org/abs/2101.11605
- ResNet strikes back: An improved training procedure in timm: https://arxiv.org/abs/2110.00476
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('botnet26t_256.c1_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'botnet26t_256.c1_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 128, 128])
# torch.Size([1, 256, 64, 64])
# torch.Size([1, 512, 32, 32])
# torch.Size([1, 1024, 16, 16])
# torch.Size([1, 2048, 8, 8])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'botnet26t_256.c1_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2048, 8, 8) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@article{Srinivas2021BottleneckTF,
title={Bottleneck Transformers for Visual Recognition},
author={A. Srinivas and Tsung-Yi Lin and Niki Parmar and Jonathon Shlens and P. Abbeel and Ashish Vaswani},
journal={2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2021},
pages={16514-16524}
}
```
```bibtex
@inproceedings{wightman2021resnet,
title={ResNet strikes back: An improved training procedure in timm},
author={Wightman, Ross and Touvron, Hugo and Jegou, Herve},
booktitle={NeurIPS 2021 Workshop on ImageNet: Past, Present, and Future}
}
```
|
facebook/wav2vec2-large-960h | facebook | "2022-04-05T16:40:42Z" | 77,759 | 25 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"speech",
"en",
"dataset:librispeech_asr",
"arxiv:2006.11477",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-03-02T23:29:05Z" | ---
language: en
datasets:
- librispeech_asr
tags:
- speech
license: apache-2.0
---
# Wav2Vec2-Large-960h
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/)
The large model pretrained and fine-tuned on 960 hours of Librispeech on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
[Paper](https://arxiv.org/abs/2006.11477)
Authors: Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli
**Abstract**
We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data.
The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.
# Usage
To transcribe audio files the model can be used as a standalone acoustic model as follows:
```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import torch
# load model and processor
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-large-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large-960h")
# load dummy dataset and read soundfiles
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
# tokenize
input_values = processor(ds[0]["audio"]["array"], sampling_rate=16000, return_tensors="pt", padding="longest").input_values  # Batch size 1
# retrieve logits
logits = model(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```
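If your own recordings are not already sampled at 16kHz, they should be resampled before being passed to the processor. A minimal sketch using `torchaudio` (the file path below is a placeholder for your own audio, not part of this repository):
```python
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-large-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large-960h")

# "my_recording.wav" is a placeholder path for your own file
waveform, sample_rate = torchaudio.load("my_recording.wav")
waveform = waveform.mean(dim=0)  # downmix to mono
if sample_rate != 16000:
    waveform = torchaudio.functional.resample(waveform, sample_rate, 16000)

input_values = processor(waveform.numpy(), sampling_rate=16000, return_tensors="pt").input_values
with torch.no_grad():
    logits = model(input_values).logits
transcription = processor.batch_decode(torch.argmax(logits, dim=-1))
print(transcription)
```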
## Evaluation
This code snippet shows how to evaluate **facebook/wav2vec2-large-960h** on LibriSpeech's "clean" and "other" test data.
```python
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import soundfile as sf
import torch
from jiwer import wer
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large-960h").to("cuda")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-large-960h")
def map_to_pred(batch):
    input_values = processor(batch["audio"][0]["array"], sampling_rate=16000, return_tensors="pt", padding="longest").input_values
with torch.no_grad():
logits = model(input_values.to("cuda")).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
batch["transcription"] = transcription
return batch
result = librispeech_eval.map(map_to_pred, batched=True, batch_size=1, remove_columns=["audio"])
print("WER:", wer(result["text"], result["transcription"]))
```
*Result (WER)*:
| "clean" | "other" |
|---|---|
| 2.8 | 6.3 |
|
timm/cait_m36_384.fb_dist_in1k | timm | "2024-02-10T23:43:00Z" | 77,542 | 1 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2103.17239",
"license:apache-2.0",
"region:us"
] | image-classification | "2023-04-13T01:37:00Z" | ---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
---
# Model card for cait_m36_384.fb_dist_in1k
A CaiT (Class-Attention in Image Transformers) image classification model. Pretrained on ImageNet-1k with distillation by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 271.2
- GMACs: 173.1
- Activations (M): 734.8
- Image size: 384 x 384
- **Papers:**
- Going deeper with Image Transformers: https://arxiv.org/abs/2103.17239
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/facebookresearch/deit
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('cait_m36_384.fb_dist_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'cait_m36_384.fb_dist_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 577, 768) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Citation
```bibtex
@InProceedings{Touvron_2021_ICCV,
author = {Touvron, Hugo and Cord, Matthieu and Sablayrolles, Alexandre and Synnaeve, Gabriel and J{\'e}gou, Herv{\'e}},
title = {Going Deeper With Image Transformers},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2021},
pages = {32-42}
}
```
|
asapp/sew-d-tiny-100k-ft-ls100h | asapp | "2023-06-15T19:07:05Z" | 77,436 | 2 | transformers | [
"transformers",
"pytorch",
"safetensors",
"sew-d",
"automatic-speech-recognition",
"audio",
"speech",
"hf-asr-leaderboard",
"en",
"dataset:librispeech_asr",
"arxiv:2109.06870",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-03-02T23:29:05Z" | ---
language: en
datasets:
- librispeech_asr
tags:
- audio
- speech
- automatic-speech-recognition
- hf-asr-leaderboard
license: apache-2.0
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
model-index:
- name: sew-d-tiny-100k-ft-ls100h
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (clean)
type: librispeech_asr
config: clean
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 10.47
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (other)
type: librispeech_asr
config: other
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 22.73
---
# SEW-D-tiny
[SEW-D by ASAPP Research](https://github.com/asappresearch/sew)
The base model pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc.
Paper: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870)
Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi
**Abstract**
This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.
The original model can be found under https://github.com/asappresearch/sew#model-checkpoints .
# Usage
To transcribe audio files the model can be used as a standalone acoustic model as follows:
```python
from transformers import Wav2Vec2Processor, SEWDForCTC
from datasets import load_dataset
import soundfile as sf
import torch
# load the model and preprocessor
processor = Wav2Vec2Processor.from_pretrained("asapp/sew-d-tiny-100k-ft-ls100h")
model = SEWDForCTC.from_pretrained("asapp/sew-d-tiny-100k-ft-ls100h")
# load the dummy dataset with speech samples
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
# preprocess
input_values = processor(ds[0]["audio"]["array"], return_tensors="pt").input_values # Batch size 1
# retrieve logits
logits = model(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```
## Evaluation
This code snippet shows how to evaluate **asapp/sew-d-tiny-100k-ft-ls100h** on LibriSpeech's "clean" and "other" test data.
```python
from datasets import load_dataset
from transformers import SEWDForCTC, Wav2Vec2Processor
import torch
from jiwer import wer
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")
model = SEWDForCTC.from_pretrained("asapp/sew-d-tiny-100k-ft-ls100h").to("cuda")
processor = Wav2Vec2Processor.from_pretrained("asapp/sew-d-tiny-100k-ft-ls100h")
def map_to_pred(batch):
input_values = processor(batch["audio"][0]["array"], sampling_rate=16000,
return_tensors="pt", padding="longest").input_values
with torch.no_grad():
logits = model(input_values.to("cuda")).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
batch["transcription"] = transcription
return batch
result = librispeech_eval.map(map_to_pred, batched=True, batch_size=1, remove_columns=["audio"])
print("WER:", wer(result["text"], result["transcription"]))
```
*Result (WER)*:
| "clean" | "other" |
| --- | --- |
| 10.47 | 22.73 |
|
timm/convmixer_768_32.in1k | timm | "2023-04-24T03:13:30Z" | 77,404 | 2 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2201.09792",
"license:mit",
"region:us"
] | image-classification | "2023-04-24T03:13:13Z" | ---
tags:
- image-classification
- timm
library_name: timm
license: mit
datasets:
- imagenet-1k
---
# Model card for convmixer_768_32.in1k
A ConvMixer image classification model. Trained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 21.1
- GMACs: 19.5
- Activations (M): 26.0
- Image size: 224 x 224
- **Papers:**
- Patches Are All You Need?: https://arxiv.org/abs/2201.09792
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/locuslab/convmixer
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('convmixer_768_32.in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'convmixer_768_32.in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 768, 32, 32) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{trockman2022patches,
title={Patches Are All You Need?},
author={Trockman, Asher and Kolter, J. Zico},
journal={arXiv preprint arXiv:2201.09792},
year={2022}
}
```
|
persiannlp/mt5-small-parsinlu-opus-translation_fa_en | persiannlp | "2021-09-23T16:20:36Z" | 77,395 | 1 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"machine-translation",
"persian",
"farsi",
"fa",
"multilingual",
"dataset:parsinlu",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2022-03-02T23:29:05Z" | ---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- machine-translation
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
metrics:
- sacrebleu
---
# Machine Translation (ترجمهی ماشینی)
This is an mT5-based model for machine translation (Persian -> English).
Here is an example of how you can run this model:
```python
from transformers import MT5ForConditionalGeneration, MT5Tokenizer
model_size = "small"
model_name = f"persiannlp/mt5-{model_size}-parsinlu-opus-translation_fa_en"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
def run_model(input_string, **generator_args):
input_ids = tokenizer.encode(input_string, return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model("ستایش خدای را که پروردگار جهانیان است.")
run_model("در هاید پارک کرنر بر گلدانی ایستاده موعظه میکند؛")
run_model("وی از تمامی بلاگرها، سازمانها و افرادی که از وی پشتیبانی کردهاند، تشکر کرد.")
run_model("مشابه سال ۲۰۰۱، تولید آمونیاک بی آب در ایالات متحده در سال ۲۰۰۰ تقریباً ۱۷،۴۰۰،۰۰۰ تن (معادل بدون آب) با مصرف ظاهری ۲۲،۰۰۰،۰۰۰ تن و حدود ۴۶۰۰۰۰۰ با واردات خالص مواجه شد. ")
run_model("می خواهم دکترای علوم کامپیوتر راجع به شبکه های اجتماعی را دنبال کنم، چالش حل نشده در شبکه های اجتماعی چیست؟")
```
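Since `run_model` forwards `**generator_args` to `model.generate`, standard decoding options can be passed through as well; for example (the beam size and length cap below are illustrative):
```python
run_model("ستایش خدای را که پروردگار جهانیان است.", num_beams=4, max_length=64)
```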
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
timm/mixer_b16_224.goog_in21k_ft_in1k | timm | "2024-02-10T23:36:20Z" | 77,297 | 1 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"dataset:imagenet-21k",
"arxiv:2105.01601",
"license:apache-2.0",
"region:us"
] | image-classification | "2023-03-27T23:02:23Z" | ---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
- imagenet-21k
---
# Model card for mixer_b16_224.goog_in21k_ft_in1k
An MLP-Mixer image classification model. Pretrained on ImageNet-21k and fine-tuned on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 59.9
- GMACs: 12.6
- Activations (M): 14.5
- Image size: 224 x 224
- **Papers:**
- MLP-Mixer: An all-MLP Architecture for Vision: https://arxiv.org/abs/2105.01601
- **Original:** https://github.com/google-research/vision_transformers
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:** ImageNet-21k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('mixer_b16_224.goog_in21k_ft_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'mixer_b16_224.goog_in21k_ft_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 196, 768) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{tolstikhin2021mixer,
title={MLP-Mixer: An all-MLP Architecture for Vision},
author={Tolstikhin, Ilya and Houlsby, Neil and Kolesnikov, Alexander and Beyer, Lucas and Zhai, Xiaohua and Unterthiner, Thomas and Yung, Jessica and Steiner, Andreas and Keysers, Daniel and Uszkoreit, Jakob and Lucic, Mario and Dosovitskiy, Alexey},
journal={arXiv preprint arXiv:2105.01601},
year={2021}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
timm/coat_lite_mini.in1k | timm | "2023-04-24T03:43:16Z" | 77,241 | 0 | timm | [
"timm",
"pytorch",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2104.06399",
"license:apache-2.0",
"region:us"
] | image-classification | "2023-04-24T03:43:09Z" | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for coat_lite_mini.in1k
A CoaT (Co-Scale Conv-Attentional Transformer) image classification model. Trained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 11.0
- GMACs: 2.0
- Activations (M): 12.2
- Image size: 224 x 224
- **Papers:**
- Co-Scale Conv-Attentional Image Transformers: https://arxiv.org/abs/2104.06399
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/mlpc-ucsd/CoaT
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('coat_lite_mini.in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'coat_lite_mini.in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 50, 512) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@InProceedings{Xu_2021_ICCV,
author = {Xu, Weijian and Xu, Yifan and Chang, Tyler and Tu, Zhuowen},
title = {Co-Scale Conv-Attentional Image Transformers},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2021},
pages = {9981-9990}
}
```
|
nvidia/NV-Embed-v2 | nvidia | "2024-11-07T01:04:58Z" | 77,188 | 206 | transformers | [
"transformers",
"safetensors",
"nvembed",
"feature-extraction",
"mteb",
"sentence-transformers",
"custom_code",
"en",
"arxiv:2405.17428",
"arxiv:2407.15831",
"license:cc-by-nc-4.0",
"model-index",
"region:us"
] | feature-extraction | "2024-08-29T13:00:32Z" | ---
tags:
- mteb
- sentence-transformers
model-index:
- name: NV-Embed-v2
results:
- dataset:
config: en
name: MTEB AmazonCounterfactualClassification (en)
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
split: test
type: mteb/amazon_counterfactual
metrics:
- type: accuracy
value: 94.28358208955224
- type: accuracy_stderr
value: 0.40076780842082305
- type: ap
value: 76.49097318319616
- type: ap_stderr
value: 1.2418692675183929
- type: f1
value: 91.41982003001168
- type: f1_stderr
value: 0.5043921413093579
- type: main_score
value: 94.28358208955224
task:
type: Classification
- dataset:
config: default
name: MTEB AmazonPolarityClassification
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
split: test
type: mteb/amazon_polarity
metrics:
- type: accuracy
value: 97.74185000000001
- type: accuracy_stderr
value: 0.07420471683120942
- type: ap
value: 96.4737144875525
- type: ap_stderr
value: 0.2977518241541558
- type: f1
value: 97.7417581594921
- type: f1_stderr
value: 0.07428763617010377
- type: main_score
value: 97.74185000000001
task:
type: Classification
- dataset:
config: en
name: MTEB AmazonReviewsClassification (en)
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
split: test
type: mteb/amazon_reviews_multi
metrics:
- type: accuracy
value: 63.96000000000001
- type: accuracy_stderr
value: 1.815555011559825
- type: f1
value: 62.49361841640459
- type: f1_stderr
value: 2.829339314126457
- type: main_score
value: 63.96000000000001
task:
type: Classification
- dataset:
config: default
name: MTEB ArguAna
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
split: test
type: mteb/arguana
metrics:
- type: map_at_1
value: 46.515
- type: map_at_10
value: 62.392
- type: map_at_100
value: 62.732
- type: map_at_1000
value: 62.733000000000004
- type: map_at_3
value: 58.701
- type: map_at_5
value: 61.027
- type: mrr_at_1
value: 0.0
- type: mrr_at_10
value: 0.0
- type: mrr_at_100
value: 0.0
- type: mrr_at_1000
value: 0.0
- type: mrr_at_3
value: 0.0
- type: mrr_at_5
value: 0.0
- type: ndcg_at_1
value: 46.515
- type: ndcg_at_10
value: 70.074
- type: ndcg_at_100
value: 71.395
- type: ndcg_at_1000
value: 71.405
- type: ndcg_at_3
value: 62.643
- type: ndcg_at_5
value: 66.803
- type: precision_at_1
value: 46.515
- type: precision_at_10
value: 9.41
- type: precision_at_100
value: 0.996
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 24.68
- type: precision_at_5
value: 16.814
- type: recall_at_1
value: 46.515
- type: recall_at_10
value: 94.097
- type: recall_at_100
value: 99.57300000000001
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 74.03999999999999
- type: recall_at_5
value: 84.068
- type: main_score
value: 70.074
task:
type: Retrieval
- dataset:
config: default
name: MTEB ArxivClusteringP2P
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
split: test
type: mteb/arxiv-clustering-p2p
metrics:
- type: main_score
value: 55.79933795955242
- type: v_measure
value: 55.79933795955242
- type: v_measure_std
value: 14.575108141916148
task:
type: Clustering
- dataset:
config: default
name: MTEB ArxivClusteringS2S
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
split: test
type: mteb/arxiv-clustering-s2s
metrics:
- type: main_score
value: 51.262845995850334
- type: v_measure
value: 51.262845995850334
- type: v_measure_std
value: 14.727824473104173
task:
type: Clustering
- dataset:
config: default
name: MTEB AskUbuntuDupQuestions
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
split: test
type: mteb/askubuntudupquestions-reranking
metrics:
- type: map
value: 67.46477327480808
- type: mrr
value: 79.50160488941653
- type: main_score
value: 67.46477327480808
task:
type: Reranking
- dataset:
config: default
name: MTEB BIOSSES
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
split: test
type: mteb/biosses-sts
metrics:
- type: cosine_pearson
value: 89.74311007980987
- type: cosine_spearman
value: 87.41644967443246
- type: manhattan_pearson
value: 88.57457108347744
- type: manhattan_spearman
value: 87.59295972042997
- type: euclidean_pearson
value: 88.27108977118459
- type: euclidean_spearman
value: 87.41644967443246
- type: main_score
value: 87.41644967443246
task:
type: STS
- dataset:
config: default
name: MTEB Banking77Classification
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
split: test
type: mteb/banking77
metrics:
- type: accuracy
value: 92.41558441558443
- type: accuracy_stderr
value: 0.37701502251934443
- type: f1
value: 92.38130170447671
- type: f1_stderr
value: 0.39115151225617767
- type: main_score
value: 92.41558441558443
task:
type: Classification
- dataset:
config: default
name: MTEB BiorxivClusteringP2P
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
split: test
type: mteb/biorxiv-clustering-p2p
metrics:
- type: main_score
value: 54.08649516394218
- type: v_measure
value: 54.08649516394218
- type: v_measure_std
value: 0.5303233693045373
task:
type: Clustering
- dataset:
config: default
name: MTEB BiorxivClusteringS2S
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
split: test
type: mteb/biorxiv-clustering-s2s
metrics:
- type: main_score
value: 49.60352214167779
- type: v_measure
value: 49.60352214167779
- type: v_measure_std
value: 0.7176198612516721
task:
type: Clustering
- dataset:
config: default
name: MTEB CQADupstackRetrieval
revision: 46989137a86843e03a6195de44b09deda022eec7
split: test
type: CQADupstackRetrieval_is_a_combined_dataset
metrics:
- type: map_at_1
value: 31.913249999999998
- type: map_at_10
value: 43.87733333333334
- type: map_at_100
value: 45.249916666666664
- type: map_at_1000
value: 45.350583333333326
- type: map_at_3
value: 40.316833333333335
- type: map_at_5
value: 42.317083333333336
- type: mrr_at_1
value: 0.0
- type: mrr_at_10
value: 0.0
- type: mrr_at_100
value: 0.0
- type: mrr_at_1000
value: 0.0
- type: mrr_at_3
value: 0.0
- type: mrr_at_5
value: 0.0
- type: ndcg_at_1
value: 38.30616666666667
- type: ndcg_at_10
value: 50.24175000000001
- type: ndcg_at_100
value: 55.345333333333336
- type: ndcg_at_1000
value: 56.91225000000001
- type: ndcg_at_3
value: 44.67558333333333
- type: ndcg_at_5
value: 47.32333333333334
- type: precision_at_1
value: 38.30616666666667
- type: precision_at_10
value: 9.007416666666666
- type: precision_at_100
value: 1.3633333333333333
- type: precision_at_1000
value: 0.16691666666666666
- type: precision_at_3
value: 20.895666666666667
- type: precision_at_5
value: 14.871666666666666
- type: recall_at_1
value: 31.913249999999998
- type: recall_at_10
value: 64.11891666666666
- type: recall_at_100
value: 85.91133333333333
- type: recall_at_1000
value: 96.28225
- type: recall_at_3
value: 48.54749999999999
- type: recall_at_5
value: 55.44283333333334
- type: main_score
value: 50.24175000000001
task:
type: Retrieval
- dataset:
config: default
name: MTEB ClimateFEVER
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
split: test
type: mteb/climate-fever
metrics:
- type: map_at_1
value: 19.556
- type: map_at_10
value: 34.623
- type: map_at_100
value: 36.97
- type: map_at_1000
value: 37.123
- type: map_at_3
value: 28.904999999999998
- type: map_at_5
value: 31.955
- type: mrr_at_1
value: 0.0
- type: mrr_at_10
value: 0.0
- type: mrr_at_100
value: 0.0
- type: mrr_at_1000
value: 0.0
- type: mrr_at_3
value: 0.0
- type: mrr_at_5
value: 0.0
- type: ndcg_at_1
value: 44.104
- type: ndcg_at_10
value: 45.388
- type: ndcg_at_100
value: 52.793
- type: ndcg_at_1000
value: 55.108999999999995
- type: ndcg_at_3
value: 38.604
- type: ndcg_at_5
value: 40.806
- type: precision_at_1
value: 44.104
- type: precision_at_10
value: 14.143
- type: precision_at_100
value: 2.2190000000000003
- type: precision_at_1000
value: 0.266
- type: precision_at_3
value: 29.316
- type: precision_at_5
value: 21.98
- type: recall_at_1
value: 19.556
- type: recall_at_10
value: 52.120999999999995
- type: recall_at_100
value: 76.509
- type: recall_at_1000
value: 89.029
- type: recall_at_3
value: 34.919
- type: recall_at_5
value: 42.18
- type: main_score
value: 45.388
task:
type: Retrieval
- dataset:
config: default
name: MTEB DBPedia
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
split: test
type: mteb/dbpedia
metrics:
- type: map_at_1
value: 10.714
- type: map_at_10
value: 25.814999999999998
- type: map_at_100
value: 37.845
- type: map_at_1000
value: 39.974
- type: map_at_3
value: 17.201
- type: map_at_5
value: 21.062
- type: mrr_at_1
value: 0.0
- type: mrr_at_10
value: 0.0
- type: mrr_at_100
value: 0.0
- type: mrr_at_1000
value: 0.0
- type: mrr_at_3
value: 0.0
- type: mrr_at_5
value: 0.0
- type: ndcg_at_1
value: 66.0
- type: ndcg_at_10
value: 53.496
- type: ndcg_at_100
value: 58.053
- type: ndcg_at_1000
value: 64.886
- type: ndcg_at_3
value: 57.656
- type: ndcg_at_5
value: 55.900000000000006
- type: precision_at_1
value: 77.25
- type: precision_at_10
value: 43.65
- type: precision_at_100
value: 13.76
- type: precision_at_1000
value: 2.5940000000000003
- type: precision_at_3
value: 61.0
- type: precision_at_5
value: 54.65
- type: recall_at_1
value: 10.714
- type: recall_at_10
value: 31.173000000000002
- type: recall_at_100
value: 63.404
- type: recall_at_1000
value: 85.874
- type: recall_at_3
value: 18.249000000000002
- type: recall_at_5
value: 23.69
- type: main_score
value: 53.496
task:
type: Retrieval
- dataset:
config: default
name: MTEB EmotionClassification
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
split: test
type: mteb/emotion
metrics:
- type: accuracy
value: 93.38499999999999
- type: accuracy_stderr
value: 0.13793114224133846
- type: f1
value: 90.12141028353496
- type: f1_stderr
value: 0.174640257706043
- type: main_score
value: 93.38499999999999
task:
type: Classification
- dataset:
config: default
name: MTEB FEVER
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
split: test
type: mteb/fever
metrics:
- type: map_at_1
value: 84.66900000000001
- type: map_at_10
value: 91.52799999999999
- type: map_at_100
value: 91.721
- type: map_at_1000
value: 91.73
- type: map_at_3
value: 90.752
- type: map_at_5
value: 91.262
- type: mrr_at_1
value: 0.0
- type: mrr_at_10
value: 0.0
- type: mrr_at_100
value: 0.0
- type: mrr_at_1000
value: 0.0
- type: mrr_at_3
value: 0.0
- type: mrr_at_5
value: 0.0
- type: ndcg_at_1
value: 91.20899999999999
- type: ndcg_at_10
value: 93.74900000000001
- type: ndcg_at_100
value: 94.279
- type: ndcg_at_1000
value: 94.408
- type: ndcg_at_3
value: 92.923
- type: ndcg_at_5
value: 93.376
- type: precision_at_1
value: 91.20899999999999
- type: precision_at_10
value: 11.059
- type: precision_at_100
value: 1.1560000000000001
- type: precision_at_1000
value: 0.11800000000000001
- type: precision_at_3
value: 35.129
- type: precision_at_5
value: 21.617
- type: recall_at_1
value: 84.66900000000001
- type: recall_at_10
value: 97.03399999999999
- type: recall_at_100
value: 98.931
- type: recall_at_1000
value: 99.65899999999999
- type: recall_at_3
value: 94.76299999999999
- type: recall_at_5
value: 95.968
- type: main_score
value: 93.74900000000001
task:
type: Retrieval
- dataset:
config: default
name: MTEB FiQA2018
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
split: test
type: mteb/fiqa
metrics:
- type: map_at_1
value: 34.866
- type: map_at_10
value: 58.06099999999999
- type: map_at_100
value: 60.028999999999996
- type: map_at_1000
value: 60.119
- type: map_at_3
value: 51.304
- type: map_at_5
value: 55.054
- type: mrr_at_1
value: 0.0
- type: mrr_at_10
value: 0.0
- type: mrr_at_100
value: 0.0
- type: mrr_at_1000
value: 0.0
- type: mrr_at_3
value: 0.0
- type: mrr_at_5
value: 0.0
- type: ndcg_at_1
value: 64.815
- type: ndcg_at_10
value: 65.729
- type: ndcg_at_100
value: 71.14
- type: ndcg_at_1000
value: 72.336
- type: ndcg_at_3
value: 61.973
- type: ndcg_at_5
value: 62.858000000000004
- type: precision_at_1
value: 64.815
- type: precision_at_10
value: 17.87
- type: precision_at_100
value: 2.373
- type: precision_at_1000
value: 0.258
- type: precision_at_3
value: 41.152
- type: precision_at_5
value: 29.568
- type: recall_at_1
value: 34.866
- type: recall_at_10
value: 72.239
- type: recall_at_100
value: 91.19
- type: recall_at_1000
value: 98.154
- type: recall_at_3
value: 56.472
- type: recall_at_5
value: 63.157
- type: main_score
value: 65.729
task:
type: Retrieval
- dataset:
config: default
name: MTEB HotpotQA
revision: ab518f4d6fcca38d87c25209f94beba119d02014
split: test
type: mteb/hotpotqa
metrics:
- type: map_at_1
value: 44.651999999999994
- type: map_at_10
value: 79.95100000000001
- type: map_at_100
value: 80.51700000000001
- type: map_at_1000
value: 80.542
- type: map_at_3
value: 77.008
- type: map_at_5
value: 78.935
- type: mrr_at_1
value: 0.0
- type: mrr_at_10
value: 0.0
- type: mrr_at_100
value: 0.0
- type: mrr_at_1000
value: 0.0
- type: mrr_at_3
value: 0.0
- type: mrr_at_5
value: 0.0
- type: ndcg_at_1
value: 89.305
- type: ndcg_at_10
value: 85.479
- type: ndcg_at_100
value: 87.235
- type: ndcg_at_1000
value: 87.669
- type: ndcg_at_3
value: 81.648
- type: ndcg_at_5
value: 83.88600000000001
- type: precision_at_1
value: 89.305
- type: precision_at_10
value: 17.807000000000002
- type: precision_at_100
value: 1.9140000000000001
- type: precision_at_1000
value: 0.197
- type: precision_at_3
value: 53.756
- type: precision_at_5
value: 34.018
- type: recall_at_1
value: 44.651999999999994
- type: recall_at_10
value: 89.034
- type: recall_at_100
value: 95.719
- type: recall_at_1000
value: 98.535
- type: recall_at_3
value: 80.635
- type: recall_at_5
value: 85.044
- type: main_score
value: 85.479
task:
type: Retrieval
- dataset:
config: default
name: MTEB ImdbClassification
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
split: test
type: mteb/imdb
metrics:
- type: accuracy
value: 97.1376
- type: accuracy_stderr
value: 0.04571914259913447
- type: ap
value: 95.92783808558808
- type: ap_stderr
value: 0.05063782483358255
- type: f1
value: 97.13755519177172
- type: f1_stderr
value: 0.04575943074086138
- type: main_score
value: 97.1376
task:
type: Classification
- dataset:
config: default
name: MTEB MSMARCO
revision: c5a29a104738b98a9e76336939199e264163d4a0
split: dev
type: mteb/msmarco
metrics:
- type: map_at_1
value: 0.0
- type: map_at_10
value: 38.342
- type: map_at_100
value: 0.0
- type: map_at_1000
value: 0.0
- type: map_at_3
value: 0.0
- type: map_at_5
value: 0.0
- type: mrr_at_1
value: 0.0
- type: mrr_at_10
value: 0.0
- type: mrr_at_100
value: 0.0
- type: mrr_at_1000
value: 0.0
- type: mrr_at_3
value: 0.0
- type: mrr_at_5
value: 0.0
- type: ndcg_at_1
value: 0.0
- type: ndcg_at_10
value: 45.629999999999995
- type: ndcg_at_100
value: 0.0
- type: ndcg_at_1000
value: 0.0
- type: ndcg_at_3
value: 0.0
- type: ndcg_at_5
value: 0.0
- type: precision_at_1
value: 0.0
- type: precision_at_10
value: 7.119000000000001
- type: precision_at_100
value: 0.0
- type: precision_at_1000
value: 0.0
- type: precision_at_3
value: 0.0
- type: precision_at_5
value: 0.0
- type: recall_at_1
value: 0.0
- type: recall_at_10
value: 67.972
- type: recall_at_100
value: 0.0
- type: recall_at_1000
value: 0.0
- type: recall_at_3
value: 0.0
- type: recall_at_5
value: 0.0
- type: main_score
value: 45.629999999999995
task:
type: Retrieval
- dataset:
config: en
name: MTEB MTOPDomainClassification (en)
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
split: test
type: mteb/mtop_domain
metrics:
- type: accuracy
value: 99.24988600091199
- type: accuracy_stderr
value: 0.04496826931900734
- type: f1
value: 99.15933275095276
- type: f1_stderr
value: 0.05565039139747446
- type: main_score
value: 99.24988600091199
task:
type: Classification
- dataset:
config: en
name: MTEB MTOPIntentClassification (en)
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
split: test
type: mteb/mtop_intent
metrics:
- type: accuracy
value: 94.3684450524396
- type: accuracy_stderr
value: 0.8436548701322188
- type: f1
value: 77.33022623133307
- type: f1_stderr
value: 0.9228425861187275
- type: main_score
value: 94.3684450524396
task:
type: Classification
- dataset:
config: en
name: MTEB MassiveIntentClassification (en)
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 86.09616677874916
- type: accuracy_stderr
value: 0.9943208055590853
- type: f1
value: 83.4902056490062
- type: f1_stderr
value: 0.7626189310074184
- type: main_score
value: 86.09616677874916
task:
type: Classification
- dataset:
config: en
name: MTEB MassiveScenarioClassification (en)
revision: 7d571f92784cd94a019292a1f45445077d0ef634
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 92.17215870880968
- type: accuracy_stderr
value: 0.25949941333658166
- type: f1
value: 91.36757392422702
- type: f1_stderr
value: 0.29139507298154815
- type: main_score
value: 92.17215870880968
task:
type: Classification
- dataset:
config: default
name: MTEB MedrxivClusteringP2P
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
split: test
type: mteb/medrxiv-clustering-p2p
metrics:
- type: main_score
value: 46.09497344077905
- type: v_measure
value: 46.09497344077905
- type: v_measure_std
value: 1.44871520869784
task:
type: Clustering
- dataset:
config: default
name: MTEB MedrxivClusteringS2S
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
split: test
type: mteb/medrxiv-clustering-s2s
metrics:
- type: main_score
value: 44.861049989560684
- type: v_measure
value: 44.861049989560684
- type: v_measure_std
value: 1.432199293162203
task:
type: Clustering
- dataset:
config: default
name: MTEB MindSmallReranking
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
split: test
type: mteb/mind_small
metrics:
- type: map
value: 31.75936162919999
- type: mrr
value: 32.966812736541236
- type: main_score
value: 31.75936162919999
task:
type: Reranking
- dataset:
config: default
name: MTEB NFCorpus
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
split: test
type: mteb/nfcorpus
metrics:
- type: map_at_1
value: 7.893999999999999
- type: map_at_10
value: 17.95
- type: map_at_100
value: 23.474
- type: map_at_1000
value: 25.412000000000003
- type: map_at_3
value: 12.884
- type: map_at_5
value: 15.171000000000001
- type: mrr_at_1
value: 0.0
- type: mrr_at_10
value: 0.0
- type: mrr_at_100
value: 0.0
- type: mrr_at_1000
value: 0.0
- type: mrr_at_3
value: 0.0
- type: mrr_at_5
value: 0.0
- type: ndcg_at_1
value: 55.728
- type: ndcg_at_10
value: 45.174
- type: ndcg_at_100
value: 42.18
- type: ndcg_at_1000
value: 50.793
- type: ndcg_at_3
value: 50.322
- type: ndcg_at_5
value: 48.244
- type: precision_at_1
value: 57.276
- type: precision_at_10
value: 33.437
- type: precision_at_100
value: 10.671999999999999
- type: precision_at_1000
value: 2.407
- type: precision_at_3
value: 46.646
- type: precision_at_5
value: 41.672
- type: recall_at_1
value: 7.893999999999999
- type: recall_at_10
value: 22.831000000000003
- type: recall_at_100
value: 43.818
- type: recall_at_1000
value: 75.009
- type: recall_at_3
value: 14.371
- type: recall_at_5
value: 17.752000000000002
- type: main_score
value: 45.174
task:
type: Retrieval
- dataset:
config: default
name: MTEB NQ
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
split: test
type: mteb/nq
metrics:
- type: map_at_1
value: 49.351
- type: map_at_10
value: 66.682
- type: map_at_100
value: 67.179
- type: map_at_1000
value: 67.18499999999999
- type: map_at_3
value: 62.958999999999996
- type: map_at_5
value: 65.364
- type: mrr_at_1
value: 0.0
- type: mrr_at_10
value: 0.0
- type: mrr_at_100
value: 0.0
- type: mrr_at_1000
value: 0.0
- type: mrr_at_3
value: 0.0
- type: mrr_at_5
value: 0.0
- type: ndcg_at_1
value: 55.417
- type: ndcg_at_10
value: 73.568
- type: ndcg_at_100
value: 75.35
- type: ndcg_at_1000
value: 75.478
- type: ndcg_at_3
value: 67.201
- type: ndcg_at_5
value: 70.896
- type: precision_at_1
value: 55.417
- type: precision_at_10
value: 11.036999999999999
- type: precision_at_100
value: 1.204
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 29.654000000000003
- type: precision_at_5
value: 20.006
- type: recall_at_1
value: 49.351
- type: recall_at_10
value: 91.667
- type: recall_at_100
value: 98.89
- type: recall_at_1000
value: 99.812
- type: recall_at_3
value: 75.715
- type: recall_at_5
value: 84.072
- type: main_score
value: 73.568
task:
type: Retrieval
- dataset:
config: default
name: MTEB QuoraRetrieval
revision: e4e08e0b7dbe3c8700f0daef558ff32256715259
split: test
type: mteb/quora
metrics:
- type: map_at_1
value: 71.358
- type: map_at_10
value: 85.474
- type: map_at_100
value: 86.101
- type: map_at_1000
value: 86.114
- type: map_at_3
value: 82.562
- type: map_at_5
value: 84.396
- type: mrr_at_1
value: 0.0
- type: mrr_at_10
value: 0.0
- type: mrr_at_100
value: 0.0
- type: mrr_at_1000
value: 0.0
- type: mrr_at_3
value: 0.0
- type: mrr_at_5
value: 0.0
- type: ndcg_at_1
value: 82.12
- type: ndcg_at_10
value: 89.035
- type: ndcg_at_100
value: 90.17399999999999
- type: ndcg_at_1000
value: 90.243
- type: ndcg_at_3
value: 86.32300000000001
- type: ndcg_at_5
value: 87.85
- type: precision_at_1
value: 82.12
- type: precision_at_10
value: 13.55
- type: precision_at_100
value: 1.54
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.89
- type: precision_at_5
value: 24.9
- type: recall_at_1
value: 71.358
- type: recall_at_10
value: 95.855
- type: recall_at_100
value: 99.711
- type: recall_at_1000
value: 99.994
- type: recall_at_3
value: 88.02
- type: recall_at_5
value: 92.378
- type: main_score
value: 89.035
task:
type: Retrieval
- dataset:
config: default
name: MTEB RedditClustering
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
split: test
type: mteb/reddit-clustering
metrics:
- type: main_score
value: 71.0984522742521
- type: v_measure
value: 71.0984522742521
- type: v_measure_std
value: 3.5668139917058044
task:
type: Clustering
- dataset:
config: default
name: MTEB RedditClusteringP2P
revision: 385e3cb46b4cfa89021f56c4380204149d0efe33
split: test
type: mteb/reddit-clustering-p2p
metrics:
- type: main_score
value: 74.94499641904133
- type: v_measure
value: 74.94499641904133
- type: v_measure_std
value: 11.419672879389248
task:
type: Clustering
- dataset:
config: default
name: MTEB SCIDOCS
revision: f8c2fcf00f625baaa80f62ec5bd9e1fff3b8ae88
split: test
type: mteb/scidocs
metrics:
- type: map_at_1
value: 5.343
- type: map_at_10
value: 13.044
- type: map_at_100
value: 15.290999999999999
- type: map_at_1000
value: 15.609
- type: map_at_3
value: 9.227
- type: map_at_5
value: 11.158
- type: mrr_at_1
value: 0.0
- type: mrr_at_10
value: 0.0
- type: mrr_at_100
value: 0.0
- type: mrr_at_1000
value: 0.0
- type: mrr_at_3
value: 0.0
- type: mrr_at_5
value: 0.0
- type: ndcg_at_1
value: 26.3
- type: ndcg_at_10
value: 21.901
- type: ndcg_at_100
value: 30.316
- type: ndcg_at_1000
value: 35.547000000000004
- type: ndcg_at_3
value: 20.560000000000002
- type: ndcg_at_5
value: 18.187
- type: precision_at_1
value: 26.3
- type: precision_at_10
value: 11.34
- type: precision_at_100
value: 2.344
- type: precision_at_1000
value: 0.359
- type: precision_at_3
value: 18.967
- type: precision_at_5
value: 15.920000000000002
- type: recall_at_1
value: 5.343
- type: recall_at_10
value: 22.997
- type: recall_at_100
value: 47.562
- type: recall_at_1000
value: 72.94500000000001
- type: recall_at_3
value: 11.533
- type: recall_at_5
value: 16.148
- type: main_score
value: 21.901
task:
type: Retrieval
- dataset:
config: default
name: MTEB SICK-R
revision: 20a6d6f312dd54037fe07a32d58e5e168867909d
split: test
type: mteb/sickr-sts
metrics:
- type: cosine_pearson
value: 87.3054603493591
- type: cosine_spearman
value: 82.14763206055602
- type: manhattan_pearson
value: 84.78737790237557
- type: manhattan_spearman
value: 81.88455356002758
- type: euclidean_pearson
value: 85.00668629311117
- type: euclidean_spearman
value: 82.14763037860851
- type: main_score
value: 82.14763206055602
task:
type: STS
- dataset:
config: default
name: MTEB STS12
revision: a0d554a64d88156834ff5ae9920b964011b16384
split: test
type: mteb/sts12-sts
metrics:
- type: cosine_pearson
value: 86.6911864687294
- type: cosine_spearman
value: 77.89286260403269
- type: manhattan_pearson
value: 82.87240347680857
- type: manhattan_spearman
value: 78.10055393740326
- type: euclidean_pearson
value: 82.72282535777123
- type: euclidean_spearman
value: 77.89256648406325
- type: main_score
value: 77.89286260403269
task:
type: STS
- dataset:
config: default
name: MTEB STS13
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
split: test
type: mteb/sts13-sts
metrics:
- type: cosine_pearson
value: 87.7220832598633
- type: cosine_spearman
value: 88.30238972017452
- type: manhattan_pearson
value: 87.88214789140248
- type: manhattan_spearman
value: 88.24770220032391
- type: euclidean_pearson
value: 87.98610386257103
- type: euclidean_spearman
value: 88.30238972017452
- type: main_score
value: 88.30238972017452
task:
type: STS
- dataset:
config: default
name: MTEB STS14
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
split: test
type: mteb/sts14-sts
metrics:
- type: cosine_pearson
value: 85.70614623247714
- type: cosine_spearman
value: 84.29920990970672
- type: manhattan_pearson
value: 84.9836190531721
- type: manhattan_spearman
value: 84.40933470597638
- type: euclidean_pearson
value: 84.96652336693347
- type: euclidean_spearman
value: 84.29920989531965
- type: main_score
value: 84.29920990970672
task:
type: STS
- dataset:
config: default
name: MTEB STS15
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
split: test
type: mteb/sts15-sts
metrics:
- type: cosine_pearson
value: 88.4169972425264
- type: cosine_spearman
value: 89.03555007807218
- type: manhattan_pearson
value: 88.83068699455478
- type: manhattan_spearman
value: 89.21877175674125
- type: euclidean_pearson
value: 88.7251052947544
- type: euclidean_spearman
value: 89.03557389893083
- type: main_score
value: 89.03555007807218
task:
type: STS
- dataset:
config: default
name: MTEB STS16
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
split: test
type: mteb/sts16-sts
metrics:
- type: cosine_pearson
value: 85.63830579034632
- type: cosine_spearman
value: 86.77353371581373
- type: manhattan_pearson
value: 86.24830492396637
- type: manhattan_spearman
value: 86.96754348626189
- type: euclidean_pearson
value: 86.09837038778359
- type: euclidean_spearman
value: 86.77353371581373
- type: main_score
value: 86.77353371581373
task:
type: STS
- dataset:
config: en-en
name: MTEB STS17 (en-en)
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
split: test
type: mteb/sts17-crosslingual-sts
metrics:
- type: cosine_pearson
value: 91.2204675588959
- type: cosine_spearman
value: 90.66976712249057
- type: manhattan_pearson
value: 91.11007808242346
- type: manhattan_spearman
value: 90.51739232964488
- type: euclidean_pearson
value: 91.19588941007903
- type: euclidean_spearman
value: 90.66976712249057
- type: main_score
value: 90.66976712249057
task:
type: STS
- dataset:
config: en
name: MTEB STS22 (en)
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
split: test
type: mteb/sts22-crosslingual-sts
metrics:
- type: cosine_pearson
value: 69.34416749707114
- type: cosine_spearman
value: 68.11632448161046
- type: manhattan_pearson
value: 68.99243488935281
- type: manhattan_spearman
value: 67.8398546438258
- type: euclidean_pearson
value: 69.06376010216088
- type: euclidean_spearman
value: 68.11632448161046
- type: main_score
value: 68.11632448161046
task:
type: STS
- dataset:
config: default
name: MTEB STSBenchmark
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
split: test
type: mteb/stsbenchmark-sts
metrics:
- type: cosine_pearson
value: 88.10309739429758
- type: cosine_spearman
value: 88.40520383147418
- type: manhattan_pearson
value: 88.50753383813232
- type: manhattan_spearman
value: 88.66382629460927
- type: euclidean_pearson
value: 88.35050664609376
- type: euclidean_spearman
value: 88.40520383147418
- type: main_score
value: 88.40520383147418
task:
type: STS
- dataset:
config: default
name: MTEB SciDocsRR
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
split: test
type: mteb/scidocs-reranking
metrics:
- type: map
value: 87.58627126942797
- type: mrr
value: 97.01098103058887
- type: main_score
value: 87.58627126942797
task:
type: Reranking
- dataset:
config: default
name: MTEB SciFact
revision: 0228b52cf27578f30900b9e5271d331663a030d7
split: test
type: mteb/scifact
metrics:
- type: map_at_1
value: 62.883
- type: map_at_10
value: 75.371
- type: map_at_100
value: 75.66000000000001
- type: map_at_1000
value: 75.667
- type: map_at_3
value: 72.741
- type: map_at_5
value: 74.74
- type: mrr_at_1
value: 0.0
- type: mrr_at_10
value: 0.0
- type: mrr_at_100
value: 0.0
- type: mrr_at_1000
value: 0.0
- type: mrr_at_3
value: 0.0
- type: mrr_at_5
value: 0.0
- type: ndcg_at_1
value: 66.0
- type: ndcg_at_10
value: 80.12700000000001
- type: ndcg_at_100
value: 81.291
- type: ndcg_at_1000
value: 81.464
- type: ndcg_at_3
value: 76.19
- type: ndcg_at_5
value: 78.827
- type: precision_at_1
value: 66.0
- type: precision_at_10
value: 10.567
- type: precision_at_100
value: 1.117
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 30.333
- type: precision_at_5
value: 20.133000000000003
- type: recall_at_1
value: 62.883
- type: recall_at_10
value: 93.556
- type: recall_at_100
value: 98.667
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 83.322
- type: recall_at_5
value: 89.756
- type: main_score
value: 80.12700000000001
task:
type: Retrieval
- dataset:
config: default
name: MTEB SprintDuplicateQuestions
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
split: test
type: mteb/sprintduplicatequestions-pairclassification
metrics:
- type: cos_sim_accuracy
value: 99.87524752475248
- type: cos_sim_accuracy_threshold
value: 74.86587762832642
- type: cos_sim_ap
value: 97.02222446606328
- type: cos_sim_f1
value: 93.66197183098592
- type: cos_sim_f1_threshold
value: 74.74223375320435
- type: cos_sim_precision
value: 94.23076923076923
- type: cos_sim_recall
value: 93.10000000000001
- type: dot_accuracy
value: 99.87524752475248
- type: dot_accuracy_threshold
value: 74.86587762832642
- type: dot_ap
value: 97.02222688043362
- type: dot_f1
value: 93.66197183098592
- type: dot_f1_threshold
value: 74.74223375320435
- type: dot_precision
value: 94.23076923076923
- type: dot_recall
value: 93.10000000000001
- type: euclidean_accuracy
value: 99.87524752475248
- type: euclidean_accuracy_threshold
value: 70.9000825881958
- type: euclidean_ap
value: 97.02222446606329
- type: euclidean_f1
value: 93.66197183098592
- type: euclidean_f1_threshold
value: 71.07426524162292
- type: euclidean_precision
value: 94.23076923076923
- type: euclidean_recall
value: 93.10000000000001
- type: manhattan_accuracy
value: 99.87623762376238
- type: manhattan_accuracy_threshold
value: 3588.5040283203125
- type: manhattan_ap
value: 97.09194643777883
- type: manhattan_f1
value: 93.7375745526839
- type: manhattan_f1_threshold
value: 3664.3760681152344
- type: manhattan_precision
value: 93.18181818181817
- type: manhattan_recall
value: 94.3
- type: max_accuracy
value: 99.87623762376238
- type: max_ap
value: 97.09194643777883
- type: max_f1
value: 93.7375745526839
task:
type: PairClassification
- dataset:
config: default
name: MTEB StackExchangeClustering
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
split: test
type: mteb/stackexchange-clustering
metrics:
- type: main_score
value: 82.10134099988541
- type: v_measure
value: 82.10134099988541
- type: v_measure_std
value: 2.7926349897769533
task:
type: Clustering
- dataset:
config: default
name: MTEB StackExchangeClusteringP2P
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
split: test
type: mteb/stackexchange-clustering-p2p
metrics:
- type: main_score
value: 48.357450742397404
- type: v_measure
value: 48.357450742397404
- type: v_measure_std
value: 1.520118876440547
task:
type: Clustering
- dataset:
config: default
name: MTEB StackOverflowDupQuestions
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
split: test
type: mteb/stackoverflowdupquestions-reranking
metrics:
- type: map
value: 55.79277200802986
- type: mrr
value: 56.742517082590616
- type: main_score
value: 55.79277200802986
task:
type: Reranking
- dataset:
config: default
name: MTEB SummEval
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
split: test
type: mteb/summeval
metrics:
- type: cosine_spearman
value: 30.701215774712693
- type: cosine_pearson
value: 31.26740037278488
- type: dot_spearman
value: 30.701215774712693
- type: dot_pearson
value: 31.267404144879997
- type: main_score
value: 30.701215774712693
task:
type: Summarization
- dataset:
config: default
name: MTEB TRECCOVID
revision: bb9466bac8153a0349341eb1b22e06409e78ef4e
split: test
type: mteb/trec-covid
metrics:
- type: map_at_1
value: 0.23800000000000002
- type: map_at_10
value: 2.31
- type: map_at_100
value: 15.495000000000001
- type: map_at_1000
value: 38.829
- type: map_at_3
value: 0.72
- type: map_at_5
value: 1.185
- type: mrr_at_1
value: 0.0
- type: mrr_at_10
value: 0.0
- type: mrr_at_100
value: 0.0
- type: mrr_at_1000
value: 0.0
- type: mrr_at_3
value: 0.0
- type: mrr_at_5
value: 0.0
- type: ndcg_at_1
value: 91.0
- type: ndcg_at_10
value: 88.442
- type: ndcg_at_100
value: 71.39
- type: ndcg_at_1000
value: 64.153
- type: ndcg_at_3
value: 89.877
- type: ndcg_at_5
value: 89.562
- type: precision_at_1
value: 92.0
- type: precision_at_10
value: 92.60000000000001
- type: precision_at_100
value: 73.74000000000001
- type: precision_at_1000
value: 28.222
- type: precision_at_3
value: 94.0
- type: precision_at_5
value: 93.60000000000001
- type: recall_at_1
value: 0.23800000000000002
- type: recall_at_10
value: 2.428
- type: recall_at_100
value: 18.099999999999998
- type: recall_at_1000
value: 60.79599999999999
- type: recall_at_3
value: 0.749
- type: recall_at_5
value: 1.238
- type: main_score
value: 88.442
task:
type: Retrieval
- dataset:
config: default
name: MTEB Touche2020
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
split: test
type: mteb/touche2020
metrics:
- type: map_at_1
value: 3.4939999999999998
- type: map_at_10
value: 12.531999999999998
- type: map_at_100
value: 19.147
- type: map_at_1000
value: 20.861
- type: map_at_3
value: 7.558
- type: map_at_5
value: 9.49
- type: mrr_at_1
value: 0.0
- type: mrr_at_10
value: 0.0
- type: mrr_at_100
value: 0.0
- type: mrr_at_1000
value: 0.0
- type: mrr_at_3
value: 0.0
- type: mrr_at_5
value: 0.0
- type: ndcg_at_1
value: 47.959
- type: ndcg_at_10
value: 31.781
- type: ndcg_at_100
value: 42.131
- type: ndcg_at_1000
value: 53.493
- type: ndcg_at_3
value: 39.204
- type: ndcg_at_5
value: 34.635
- type: precision_at_1
value: 48.980000000000004
- type: precision_at_10
value: 27.143
- type: precision_at_100
value: 8.224
- type: precision_at_1000
value: 1.584
- type: precision_at_3
value: 38.775999999999996
- type: precision_at_5
value: 33.061
- type: recall_at_1
value: 3.4939999999999998
- type: recall_at_10
value: 18.895
- type: recall_at_100
value: 50.192
- type: recall_at_1000
value: 85.167
- type: recall_at_3
value: 8.703
- type: recall_at_5
value: 11.824
- type: main_score
value: 31.781
task:
type: Retrieval
- dataset:
config: default
name: MTEB ToxicConversationsClassification
revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de
split: test
type: mteb/toxic_conversations_50k
metrics:
- type: accuracy
value: 92.7402
- type: accuracy_stderr
value: 1.020764595781027
- type: ap
value: 44.38594756333084
- type: ap_stderr
value: 1.817150701258273
- type: f1
value: 79.95699280019547
- type: f1_stderr
value: 1.334582498702029
- type: main_score
value: 92.7402
task:
type: Classification
- dataset:
config: default
name: MTEB TweetSentimentExtractionClassification
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
split: test
type: mteb/tweet_sentiment_extraction
metrics:
- type: accuracy
value: 80.86870401810978
- type: accuracy_stderr
value: 0.22688467782004712
- type: f1
value: 81.1829040745744
- type: f1_stderr
value: 0.19774920574849694
- type: main_score
value: 80.86870401810978
task:
type: Classification
- dataset:
config: default
name: MTEB TwentyNewsgroupsClustering
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
split: test
type: mteb/twentynewsgroups-clustering
metrics:
- type: main_score
value: 64.82048869927482
- type: v_measure
value: 64.82048869927482
- type: v_measure_std
value: 0.9170394252450564
task:
type: Clustering
- dataset:
config: default
name: MTEB TwitterSemEval2015
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
split: test
type: mteb/twittersemeval2015-pairclassification
metrics:
- type: cos_sim_accuracy
value: 88.44251057996067
- type: cos_sim_accuracy_threshold
value: 70.2150285243988
- type: cos_sim_ap
value: 81.11422351199913
- type: cos_sim_f1
value: 73.71062868615887
- type: cos_sim_f1_threshold
value: 66.507488489151
- type: cos_sim_precision
value: 70.2799712849964
- type: cos_sim_recall
value: 77.4934036939314
- type: dot_accuracy
value: 88.44251057996067
- type: dot_accuracy_threshold
value: 70.2150285243988
- type: dot_ap
value: 81.11420529068658
- type: dot_f1
value: 73.71062868615887
- type: dot_f1_threshold
value: 66.50749444961548
- type: dot_precision
value: 70.2799712849964
- type: dot_recall
value: 77.4934036939314
- type: euclidean_accuracy
value: 88.44251057996067
- type: euclidean_accuracy_threshold
value: 77.18156576156616
- type: euclidean_ap
value: 81.11422421732487
- type: euclidean_f1
value: 73.71062868615887
- type: euclidean_f1_threshold
value: 81.84436559677124
- type: euclidean_precision
value: 70.2799712849964
- type: euclidean_recall
value: 77.4934036939314
- type: manhattan_accuracy
value: 88.26369434344639
- type: manhattan_accuracy_threshold
value: 3837.067413330078
- type: manhattan_ap
value: 80.81442360477725
- type: manhattan_f1
value: 73.39883099117024
- type: manhattan_f1_threshold
value: 4098.833847045898
- type: manhattan_precision
value: 69.41896024464832
- type: manhattan_recall
value: 77.86279683377309
- type: max_accuracy
value: 88.44251057996067
- type: max_ap
value: 81.11422421732487
- type: max_f1
value: 73.71062868615887
task:
type: PairClassification
- dataset:
config: default
name: MTEB TwitterURLCorpus
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
split: test
type: mteb/twitterurlcorpus-pairclassification
metrics:
- type: cos_sim_accuracy
value: 90.03182365040556
- type: cos_sim_accuracy_threshold
value: 64.46443796157837
- type: cos_sim_ap
value: 87.86649113691112
- type: cos_sim_f1
value: 80.45644844577821
- type: cos_sim_f1_threshold
value: 61.40774488449097
- type: cos_sim_precision
value: 77.54052702992216
- type: cos_sim_recall
value: 83.60024638127503
- type: dot_accuracy
value: 90.03182365040556
- type: dot_accuracy_threshold
value: 64.46444988250732
- type: dot_ap
value: 87.86649011954319
- type: dot_f1
value: 80.45644844577821
- type: dot_f1_threshold
value: 61.407750844955444
- type: dot_precision
value: 77.54052702992216
- type: dot_recall
value: 83.60024638127503
- type: euclidean_accuracy
value: 90.03182365040556
- type: euclidean_accuracy_threshold
value: 84.30368900299072
- type: euclidean_ap
value: 87.86649114275045
- type: euclidean_f1
value: 80.45644844577821
- type: euclidean_f1_threshold
value: 87.8547191619873
- type: euclidean_precision
value: 77.54052702992216
- type: euclidean_recall
value: 83.60024638127503
- type: manhattan_accuracy
value: 89.99883572010712
- type: manhattan_accuracy_threshold
value: 4206.838607788086
- type: manhattan_ap
value: 87.8600826607838
- type: manhattan_f1
value: 80.44054508120217
- type: manhattan_f1_threshold
value: 4372.755432128906
- type: manhattan_precision
value: 78.08219178082192
- type: manhattan_recall
value: 82.94579611949491
- type: max_accuracy
value: 90.03182365040556
- type: max_ap
value: 87.86649114275045
- type: max_f1
value: 80.45644844577821
task:
type: PairClassification
language:
- en
license: cc-by-nc-4.0
library_name: transformers
---
## Introduction
We present NV-Embed-v2, a generalist embedding model that ranks No. 1 on the Massive Text Embedding Benchmark ([MTEB benchmark](https://huggingface.co/spaces/mteb/leaderboard)) (as of Aug 30, 2024) with a score of 72.31 across 56 text embedding tasks. It also holds the No. 1 spot in the retrieval sub-category of the leaderboard (a score of 62.65 across 15 tasks), a category that is essential to the development of RAG technology.
NV-Embed-v2 introduces several new designs, including having the LLM attend to latent vectors for better pooled embedding output, and a two-stage instruction tuning method that improves accuracy on both retrieval and non-retrieval tasks. Additionally, NV-Embed-v2 incorporates a novel hard-negative mining method that takes the positive relevance score into account to better remove false negatives.
For more technical details, refer to our paper: [NV-Embed: Improved Techniques for Training LLMs as Generalist Embedding Models](https://arxiv.org/pdf/2405.17428).
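To illustrate the latent-attention pooling idea mentioned above, here is a minimal, self-contained PyTorch sketch. The latent count, head count, and omission of the follow-up MLP are assumptions for illustration only; refer to the paper for the exact design used in the released model.
```python
import torch
import torch.nn as nn

class LatentAttentionPooling(nn.Module):
    """Minimal sketch: token hidden states attend to a trainable latent array, then get pooled."""
    def __init__(self, hidden_dim=4096, num_latents=512, num_heads=8):
        super().__init__()
        # Trainable latent vectors used as keys/values of the cross-attention (sizes are assumptions)
        self.latents = nn.Parameter(torch.randn(num_latents, hidden_dim) * 0.02)
        self.cross_attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)

    def forward(self, hidden_states, attention_mask=None):
        # hidden_states: (batch, seq_len, hidden_dim) from the decoder-only LLM
        batch_size = hidden_states.size(0)
        latents = self.latents.unsqueeze(0).expand(batch_size, -1, -1)
        # Each token queries the latent array; the output keeps the sequence length
        attended, _ = self.cross_attn(query=hidden_states, key=latents, value=latents)
        if attention_mask is None:
            return attended.mean(dim=1)  # (batch, hidden_dim) pooled embedding
        mask = attention_mask.unsqueeze(-1).to(attended.dtype)
        return (attended * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1.0)

# Example: pooled = LatentAttentionPooling()(torch.randn(2, 16, 4096))  # -> (2, 4096)
```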
## Model Details
- Base Decoder-only LLM: [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
- Pooling Type: Latent-Attention
- Embedding Dimension: 4096
## How to use
Here is an example of how to encode queries and passages using HuggingFace Transformers and Sentence-Transformers. Please find the required package versions [here](https://huggingface.co/nvidia/NV-Embed-v2#2-required-packages).
### Usage (HuggingFace Transformers)
```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel
# Each query needs to be accompanied by a corresponding instruction describing the task.
task_name_to_instruct = {"example": "Given a question, retrieve passages that answer the question",}
query_prefix = "Instruct: "+task_name_to_instruct["example"]+"\nQuery: "
queries = [
'are judo throws allowed in wrestling?',
'how to become a radiology technician in michigan?'
]
# No instruction needed for retrieval passages
passage_prefix = ""
passages = [
"Since you're reading this, you are probably someone from a judo background or someone who is just wondering how judo techniques can be applied under wrestling rules. So without further ado, let's get to the question. Are Judo throws allowed in wrestling? Yes, judo throws are allowed in freestyle and folkstyle wrestling. You only need to be careful to follow the slam rules when executing judo throws. In wrestling, a slam is lifting and returning an opponent to the mat with unnecessary force.",
"Below are the basic steps to becoming a radiologic technologist in Michigan:Earn a high school diploma. As with most careers in health care, a high school education is the first step to finding entry-level employment. Taking classes in math and science, such as anatomy, biology, chemistry, physiology, and physics, can help prepare students for their college studies and future careers.Earn an associate degree. Entry-level radiologic positions typically require at least an Associate of Applied Science. Before enrolling in one of these degree programs, students should make sure it has been properly accredited by the Joint Review Committee on Education in Radiologic Technology (JRCERT).Get licensed or certified in the state of Michigan."
]
# load model with tokenizer
model = AutoModel.from_pretrained('nvidia/NV-Embed-v2', trust_remote_code=True)
# get the embeddings
max_length = 32768
query_embeddings = model.encode(queries, instruction=query_prefix, max_length=max_length)
passage_embeddings = model.encode(passages, instruction=passage_prefix, max_length=max_length)
# normalize embeddings
query_embeddings = F.normalize(query_embeddings, p=2, dim=1)
passage_embeddings = F.normalize(passage_embeddings, p=2, dim=1)
# get the embeddings with DataLoader (splitting the datasets into multiple mini-batches)
# batch_size=2
# query_embeddings = model._do_encode(queries, batch_size=batch_size, instruction=query_prefix, max_length=max_length, num_workers=32, return_numpy=True)
# passage_embeddings = model._do_encode(passages, batch_size=batch_size, instruction=passage_prefix, max_length=max_length, num_workers=32, return_numpy=True)
scores = (query_embeddings @ passage_embeddings.T) * 100
print(scores.tolist())
# [[87.42693328857422, 0.46283677220344543], [0.965264618396759, 86.03721618652344]]
```
### Usage (Sentence-Transformers)
```python
import torch
from sentence_transformers import SentenceTransformer
# Each query needs to be accompanied by a corresponding instruction describing the task.
task_name_to_instruct = {"example": "Given a question, retrieve passages that answer the question",}
query_prefix = "Instruct: "+task_name_to_instruct["example"]+"\nQuery: "
queries = [
'are judo throws allowed in wrestling?',
'how to become a radiology technician in michigan?'
]
# No instruction needed for retrieval passages
passages = [
"Since you're reading this, you are probably someone from a judo background or someone who is just wondering how judo techniques can be applied under wrestling rules. So without further ado, let's get to the question. Are Judo throws allowed in wrestling? Yes, judo throws are allowed in freestyle and folkstyle wrestling. You only need to be careful to follow the slam rules when executing judo throws. In wrestling, a slam is lifting and returning an opponent to the mat with unnecessary force.",
"Below are the basic steps to becoming a radiologic technologist in Michigan:Earn a high school diploma. As with most careers in health care, a high school education is the first step to finding entry-level employment. Taking classes in math and science, such as anatomy, biology, chemistry, physiology, and physics, can help prepare students for their college studies and future careers.Earn an associate degree. Entry-level radiologic positions typically require at least an Associate of Applied Science. Before enrolling in one of these degree programs, students should make sure it has been properly accredited by the Joint Review Committee on Education in Radiologic Technology (JRCERT).Get licensed or certified in the state of Michigan."
]
# load model with tokenizer
model = SentenceTransformer('nvidia/NV-Embed-v2', trust_remote_code=True)
model.max_seq_length = 32768
model.tokenizer.padding_side="right"
def add_eos(input_examples):
input_examples = [input_example + model.tokenizer.eos_token for input_example in input_examples]
return input_examples
# get the embeddings
batch_size = 2
query_embeddings = model.encode(add_eos(queries), batch_size=batch_size, prompt=query_prefix, normalize_embeddings=True)
passage_embeddings = model.encode(add_eos(passages), batch_size=batch_size, normalize_embeddings=True)
scores = (query_embeddings @ passage_embeddings.T) * 100
print(scores.tolist())
```
## License
This model should not be used for any commercial purpose. Refer to the [license](https://spdx.org/licenses/CC-BY-NC-4.0) for the detailed terms.
For commercial use, we recommend the models offered through [NeMo Retriever Microservices (NIMs)](https://build.nvidia.com/explore/retrieval).
## Correspondence to
Chankyu Lee (chankyul@nvidia.com), Rajarshi Roy (rajarshir@nvidia.com), Wei Ping (wping@nvidia.com)
## Citation
If you find this code useful in your research, please consider citing:
```bibtex
@article{lee2024nv,
title={NV-Embed: Improved Techniques for Training LLMs as Generalist Embedding Models},
author={Lee, Chankyu and Roy, Rajarshi and Xu, Mengyao and Raiman, Jonathan and Shoeybi, Mohammad and Catanzaro, Bryan and Ping, Wei},
journal={arXiv preprint arXiv:2405.17428},
year={2024}
}
```
```bibtex
@article{moreira2024nv,
title={NV-Retriever: Improving text embedding models with effective hard-negative mining},
author={Moreira, Gabriel de Souza P and Osmulski, Radek and Xu, Mengyao and Ak, Ronay and Schifferer, Benedikt and Oldridge, Even},
journal={arXiv preprint arXiv:2407.15831},
year={2024}
}
```
## Troubleshooting
#### 1. Instruction template for MTEB benchmarks
For the MTEB retrieval, STS, and summarization sub-tasks, please use the instruction prefix templates in [instructions.json](https://huggingface.co/nvidia/NV-Embed-v2/blob/main/instructions.json). For classification, clustering, and reranking, please use the instructions provided in Table 7 of the [NV-Embed paper](https://arxiv.org/pdf/2405.17428).
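As a minimal sketch of how such a prefix can be assembled (the exact schema of instructions.json is not reproduced here, so the lookup key below is a placeholder; check the file for its actual structure), the pattern follows the `Instruct: ...\nQuery: ` template used in the usage examples above:
```python
import json
# Illustrative sketch only: assumes instructions.json maps each MTEB task name to an instruction string.
with open("instructions.json") as f:
    task_to_instruction = json.load(f)
instruction = task_to_instruction["ExampleTask"]  # hypothetical key, replace with a real entry
# Same prefix pattern as the usage examples above; passages keep an empty prefix.
query_prefix = "Instruct: " + instruction + "\nQuery: "
passage_prefix = ""
```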
#### 2. Required Packages
If you run into trouble, try installing the pinned versions of the Python packages below:
```bash
pip uninstall -y transformer-engine
pip install torch==2.2.0
pip install transformers==4.42.4
pip install flash-attn==2.2.0
pip install sentence-transformers==2.7.0
```
#### 3. How to enable multi-GPU (note: this applies to the HuggingFace Transformers usage)
```python
from transformers import AutoModel
from torch.nn import DataParallel
# trust_remote_code=True is needed here as well, since NV-Embed-v2 ships custom modeling code
embedding_model = AutoModel.from_pretrained("nvidia/NV-Embed-v2", trust_remote_code=True)
# Wrap each top-level submodule in DataParallel so forward passes are split across the visible GPUs
for module_key, module in embedding_model._modules.items():
    embedding_model._modules[module_key] = DataParallel(module)
```
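Assuming the custom `encode` method is available as in the earlier examples, encoding then proceeds unchanged; `DataParallel` splits each forward pass across the visible GPUs. A short, hypothetical usage sketch:
```python
passages = ["first passage to embed", "second passage to embed"]
# Same encode() entry point as in the single-GPU example above
passage_embeddings = embedding_model.encode(passages, instruction="", max_length=32768)
```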
#### 4. Fixing "nvidia/NV-Embed-v2 is not the path to a directory containing a file named config.json"
Download the model to a local directory, then open its config.json and replace the value of **"_name_or_path"** with that local model path.
#### 5. Access to model nvidia/NV-Embed-v2 is restricted. You must be authenticated to access it
Use your Hugging Face access [token](https://huggingface.co/settings/tokens) to run *"huggingface-cli login"*.
#### 6. How to resolve a slight mismatch in Sentence Transformers results
The slight mismatch in the Sentence Transformers results is caused by a discrepancy in how the instruction prefix length is calculated within the Sentence Transformers package.
To fix this issue, build the Sentence Transformers package from source and modify this [line](https://github.com/UKPLab/sentence-transformers/blob/v2.7-release/sentence_transformers/SentenceTransformer.py#L353) as shown below.
```bash
git clone https://github.com/UKPLab/sentence-transformers.git
cd sentence-transformers
git checkout v2.7-release
# Modify L353 in SentenceTransformer.py to **'extra_features["prompt_length"] = tokenized_prompt["input_ids"].shape[-1]'**.
pip install -e .
```
|
timm/visformer_small.in1k | timm | "2023-04-26T16:47:32Z" | 77,172 | 1 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2104.12533",
"license:apache-2.0",
"region:us"
] | image-classification | "2023-04-26T16:47:02Z" | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for visformer_small.in1k
A Visformer image classification model. Trained on ImageNet-1k by https://github.com/hzhang57 and https://github.com/developer0hye.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 40.2
- GMACs: 4.9
- Activations (M): 11.4
- Image size: 224 x 224
- **Papers:**
- Visformer: The Vision-friendly Transformer: https://arxiv.org/abs/2104.12533
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/danczs/Visformer
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('visformer_small.in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
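To map the top-5 class indices to human-readable labels, one option is to look them up in an ImageNet-1k class list. The sketch below continues from the snippet above and pulls the label file published in the PyTorch hub repository (the URL's availability is an assumption, used purely for illustration):
```python
# Fetch a plain-text list of the 1000 ImageNet-1k class names (one per line)
class_names = urlopen(
    'https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt'
).read().decode('utf-8').splitlines()

for prob, idx in zip(top5_probabilities[0], top5_class_indices[0]):
    print(f'{class_names[idx.item()]}: {prob.item():.2f}%')
```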
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'visformer_small.in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 768, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
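A common follow-up is to compare such embeddings with cosine similarity; the short sketch below continues from the snippet above and assumes a hypothetical second image `img2` loaded the same way as `img`:
```python
import torch.nn.functional as F

# emb1/emb2: (1, num_features) pooled embeddings; img2 is a hypothetical second image
emb1 = model.forward_head(model.forward_features(transforms(img).unsqueeze(0)), pre_logits=True)
emb2 = model.forward_head(model.forward_features(transforms(img2).unsqueeze(0)), pre_logits=True)
similarity = F.cosine_similarity(emb1, emb2)  # tensor of shape (1,)
```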
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@inproceedings{chen2021visformer,
title={Visformer: The vision-friendly transformer},
author={Chen, Zhengsu and Xie, Lingxi and Niu, Jianwei and Liu, Xuefeng and Wei, Longhui and Tian, Qi},
booktitle={Proceedings of the IEEE/CVF international conference on computer vision},
pages={589--598},
year={2021}
}
```
|
Alibaba-NLP/gte-Qwen2-1.5B-instruct | Alibaba-NLP | "2024-11-12T08:49:56Z" | 77,157 | 120 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"qwen2",
"text-generation",
"mteb",
"transformers",
"Qwen2",
"sentence-similarity",
"custom_code",
"arxiv:2308.03281",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2024-06-29T08:02:40Z" | ---
tags:
- mteb
- sentence-transformers
- transformers
- Qwen2
- sentence-similarity
license: apache-2.0
model-index:
- name: gte-qwen2-7B-instruct
results:
- dataset:
config: en
name: MTEB AmazonCounterfactualClassification (en)
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
split: test
type: mteb/amazon_counterfactual
metrics:
- type: accuracy
value: 83.98507462686567
- type: ap
value: 50.93015252587014
- type: f1
value: 78.50416599051215
task:
type: Classification
- dataset:
config: default
name: MTEB AmazonPolarityClassification
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
split: test
type: mteb/amazon_polarity
metrics:
- type: accuracy
value: 96.61065
- type: ap
value: 94.89174052954196
- type: f1
value: 96.60942596940565
task:
type: Classification
- dataset:
config: en
name: MTEB AmazonReviewsClassification (en)
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
split: test
type: mteb/amazon_reviews_multi
metrics:
- type: accuracy
value: 55.614000000000004
- type: f1
value: 54.90553480294904
task:
type: Classification
- dataset:
config: default
name: MTEB ArguAna
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
split: test
type: mteb/arguana
metrics:
- type: map_at_1
value: 45.164
- type: map_at_10
value: 61.519
- type: map_at_100
value: 61.769
- type: map_at_1000
value: 61.769
- type: map_at_3
value: 57.443999999999996
- type: map_at_5
value: 60.058
- type: mrr_at_1
value: 46.088
- type: mrr_at_10
value: 61.861
- type: mrr_at_100
value: 62.117999999999995
- type: mrr_at_1000
value: 62.117999999999995
- type: mrr_at_3
value: 57.729
- type: mrr_at_5
value: 60.392
- type: ndcg_at_1
value: 45.164
- type: ndcg_at_10
value: 69.72
- type: ndcg_at_100
value: 70.719
- type: ndcg_at_1000
value: 70.719
- type: ndcg_at_3
value: 61.517999999999994
- type: ndcg_at_5
value: 66.247
- type: precision_at_1
value: 45.164
- type: precision_at_10
value: 9.545
- type: precision_at_100
value: 0.996
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 24.443
- type: precision_at_5
value: 16.97
- type: recall_at_1
value: 45.164
- type: recall_at_10
value: 95.448
- type: recall_at_100
value: 99.644
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 73.329
- type: recall_at_5
value: 84.851
task:
type: Retrieval
- dataset:
config: default
name: MTEB ArxivClusteringP2P
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
split: test
type: mteb/arxiv-clustering-p2p
metrics:
- type: v_measure
value: 50.511868162026175
task:
type: Clustering
- dataset:
config: default
name: MTEB ArxivClusteringS2S
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
split: test
type: mteb/arxiv-clustering-s2s
metrics:
- type: v_measure
value: 45.007803189284004
task:
type: Clustering
- dataset:
config: default
name: MTEB AskUbuntuDupQuestions
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
split: test
type: mteb/askubuntudupquestions-reranking
metrics:
- type: map
value: 64.55292107723382
- type: mrr
value: 77.66158818097877
task:
type: Reranking
- dataset:
config: default
name: MTEB BIOSSES
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
split: test
type: mteb/biosses-sts
metrics:
- type: cos_sim_pearson
value: 85.65459047085452
- type: cos_sim_spearman
value: 82.10729255710761
- type: euclidean_pearson
value: 82.78079159312476
- type: euclidean_spearman
value: 80.50002701880933
- type: manhattan_pearson
value: 82.41372641383016
- type: manhattan_spearman
value: 80.57412509272639
task:
type: STS
- dataset:
config: default
name: MTEB Banking77Classification
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
split: test
type: mteb/banking77
metrics:
- type: accuracy
value: 87.30844155844156
- type: f1
value: 87.25307322443255
task:
type: Classification
- dataset:
config: default
name: MTEB BiorxivClusteringP2P
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
split: test
type: mteb/biorxiv-clustering-p2p
metrics:
- type: v_measure
value: 43.20754608934859
task:
type: Clustering
- dataset:
config: default
name: MTEB BiorxivClusteringS2S
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
split: test
type: mteb/biorxiv-clustering-s2s
metrics:
- type: v_measure
value: 38.818037697335505
task:
type: Clustering
- dataset:
config: default
name: MTEB CQADupstackAndroidRetrieval
revision: f46a197baaae43b4f621051089b82a364682dfeb
split: test
type: BeIR/cqadupstack
metrics:
- type: map_at_1
value: 35.423
- type: map_at_10
value: 47.198
- type: map_at_100
value: 48.899
- type: map_at_1000
value: 49.004
- type: map_at_3
value: 43.114999999999995
- type: map_at_5
value: 45.491
- type: mrr_at_1
value: 42.918
- type: mrr_at_10
value: 53.299
- type: mrr_at_100
value: 54.032000000000004
- type: mrr_at_1000
value: 54.055
- type: mrr_at_3
value: 50.453
- type: mrr_at_5
value: 52.205999999999996
- type: ndcg_at_1
value: 42.918
- type: ndcg_at_10
value: 53.98
- type: ndcg_at_100
value: 59.57
- type: ndcg_at_1000
value: 60.879000000000005
- type: ndcg_at_3
value: 48.224000000000004
- type: ndcg_at_5
value: 50.998
- type: precision_at_1
value: 42.918
- type: precision_at_10
value: 10.299999999999999
- type: precision_at_100
value: 1.687
- type: precision_at_1000
value: 0.211
- type: precision_at_3
value: 22.842000000000002
- type: precision_at_5
value: 16.681
- type: recall_at_1
value: 35.423
- type: recall_at_10
value: 66.824
- type: recall_at_100
value: 89.564
- type: recall_at_1000
value: 97.501
- type: recall_at_3
value: 50.365
- type: recall_at_5
value: 57.921
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackEnglishRetrieval
revision: ad9991cb51e31e31e430383c75ffb2885547b5f0
split: test
type: BeIR/cqadupstack
metrics:
- type: map_at_1
value: 33.205
- type: map_at_10
value: 44.859
- type: map_at_100
value: 46.135
- type: map_at_1000
value: 46.259
- type: map_at_3
value: 41.839
- type: map_at_5
value: 43.662
- type: mrr_at_1
value: 41.146
- type: mrr_at_10
value: 50.621
- type: mrr_at_100
value: 51.207
- type: mrr_at_1000
value: 51.246
- type: mrr_at_3
value: 48.535000000000004
- type: mrr_at_5
value: 49.818
- type: ndcg_at_1
value: 41.146
- type: ndcg_at_10
value: 50.683
- type: ndcg_at_100
value: 54.82
- type: ndcg_at_1000
value: 56.69
- type: ndcg_at_3
value: 46.611000000000004
- type: ndcg_at_5
value: 48.66
- type: precision_at_1
value: 41.146
- type: precision_at_10
value: 9.439
- type: precision_at_100
value: 1.465
- type: precision_at_1000
value: 0.194
- type: precision_at_3
value: 22.59
- type: precision_at_5
value: 15.86
- type: recall_at_1
value: 33.205
- type: recall_at_10
value: 61.028999999999996
- type: recall_at_100
value: 78.152
- type: recall_at_1000
value: 89.59700000000001
- type: recall_at_3
value: 49.05
- type: recall_at_5
value: 54.836
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackGamingRetrieval
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
split: test
type: BeIR/cqadupstack
metrics:
- type: map_at_1
value: 41.637
- type: map_at_10
value: 55.162
- type: map_at_100
value: 56.142
- type: map_at_1000
value: 56.188
- type: map_at_3
value: 51.564
- type: map_at_5
value: 53.696
- type: mrr_at_1
value: 47.524
- type: mrr_at_10
value: 58.243
- type: mrr_at_100
value: 58.879999999999995
- type: mrr_at_1000
value: 58.9
- type: mrr_at_3
value: 55.69499999999999
- type: mrr_at_5
value: 57.284
- type: ndcg_at_1
value: 47.524
- type: ndcg_at_10
value: 61.305
- type: ndcg_at_100
value: 65.077
- type: ndcg_at_1000
value: 65.941
- type: ndcg_at_3
value: 55.422000000000004
- type: ndcg_at_5
value: 58.516
- type: precision_at_1
value: 47.524
- type: precision_at_10
value: 9.918000000000001
- type: precision_at_100
value: 1.276
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 24.765
- type: precision_at_5
value: 17.204
- type: recall_at_1
value: 41.637
- type: recall_at_10
value: 76.185
- type: recall_at_100
value: 92.149
- type: recall_at_1000
value: 98.199
- type: recall_at_3
value: 60.856
- type: recall_at_5
value: 68.25099999999999
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackGisRetrieval
revision: 5003b3064772da1887988e05400cf3806fe491f2
split: test
type: BeIR/cqadupstack
metrics:
- type: map_at_1
value: 26.27
- type: map_at_10
value: 37.463
- type: map_at_100
value: 38.434000000000005
- type: map_at_1000
value: 38.509
- type: map_at_3
value: 34.226
- type: map_at_5
value: 36.161
- type: mrr_at_1
value: 28.588
- type: mrr_at_10
value: 39.383
- type: mrr_at_100
value: 40.23
- type: mrr_at_1000
value: 40.281
- type: mrr_at_3
value: 36.422
- type: mrr_at_5
value: 38.252
- type: ndcg_at_1
value: 28.588
- type: ndcg_at_10
value: 43.511
- type: ndcg_at_100
value: 48.274
- type: ndcg_at_1000
value: 49.975
- type: ndcg_at_3
value: 37.319
- type: ndcg_at_5
value: 40.568
- type: precision_at_1
value: 28.588
- type: precision_at_10
value: 6.893000000000001
- type: precision_at_100
value: 0.9900000000000001
- type: precision_at_1000
value: 0.117
- type: precision_at_3
value: 16.347
- type: precision_at_5
value: 11.661000000000001
- type: recall_at_1
value: 26.27
- type: recall_at_10
value: 60.284000000000006
- type: recall_at_100
value: 81.902
- type: recall_at_1000
value: 94.43
- type: recall_at_3
value: 43.537
- type: recall_at_5
value: 51.475
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackMathematicaRetrieval
revision: 90fceea13679c63fe563ded68f3b6f06e50061de
split: test
type: BeIR/cqadupstack
metrics:
- type: map_at_1
value: 18.168
- type: map_at_10
value: 28.410000000000004
- type: map_at_100
value: 29.78
- type: map_at_1000
value: 29.892999999999997
- type: map_at_3
value: 25.238
- type: map_at_5
value: 26.96
- type: mrr_at_1
value: 23.507
- type: mrr_at_10
value: 33.382
- type: mrr_at_100
value: 34.404
- type: mrr_at_1000
value: 34.467999999999996
- type: mrr_at_3
value: 30.637999999999998
- type: mrr_at_5
value: 32.199
- type: ndcg_at_1
value: 23.507
- type: ndcg_at_10
value: 34.571000000000005
- type: ndcg_at_100
value: 40.663
- type: ndcg_at_1000
value: 43.236000000000004
- type: ndcg_at_3
value: 29.053
- type: ndcg_at_5
value: 31.563999999999997
- type: precision_at_1
value: 23.507
- type: precision_at_10
value: 6.654
- type: precision_at_100
value: 1.113
- type: precision_at_1000
value: 0.146
- type: precision_at_3
value: 14.427999999999999
- type: precision_at_5
value: 10.498000000000001
- type: recall_at_1
value: 18.168
- type: recall_at_10
value: 48.443000000000005
- type: recall_at_100
value: 74.47
- type: recall_at_1000
value: 92.494
- type: recall_at_3
value: 33.379999999999995
- type: recall_at_5
value: 39.76
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackPhysicsRetrieval
revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4
split: test
type: BeIR/cqadupstack
metrics:
- type: map_at_1
value: 32.39
- type: map_at_10
value: 44.479
- type: map_at_100
value: 45.977000000000004
- type: map_at_1000
value: 46.087
- type: map_at_3
value: 40.976
- type: map_at_5
value: 43.038
- type: mrr_at_1
value: 40.135
- type: mrr_at_10
value: 50.160000000000004
- type: mrr_at_100
value: 51.052
- type: mrr_at_1000
value: 51.087
- type: mrr_at_3
value: 47.818
- type: mrr_at_5
value: 49.171
- type: ndcg_at_1
value: 40.135
- type: ndcg_at_10
value: 50.731
- type: ndcg_at_100
value: 56.452000000000005
- type: ndcg_at_1000
value: 58.123000000000005
- type: ndcg_at_3
value: 45.507
- type: ndcg_at_5
value: 48.11
- type: precision_at_1
value: 40.135
- type: precision_at_10
value: 9.192
- type: precision_at_100
value: 1.397
- type: precision_at_1000
value: 0.169
- type: precision_at_3
value: 21.816
- type: precision_at_5
value: 15.476
- type: recall_at_1
value: 32.39
- type: recall_at_10
value: 63.597
- type: recall_at_100
value: 86.737
- type: recall_at_1000
value: 97.039
- type: recall_at_3
value: 48.906
- type: recall_at_5
value: 55.659000000000006
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackProgrammersRetrieval
revision: 6184bc1440d2dbc7612be22b50686b8826d22b32
split: test
type: BeIR/cqadupstack
metrics:
- type: map_at_1
value: 28.397
- type: map_at_10
value: 39.871
- type: map_at_100
value: 41.309000000000005
- type: map_at_1000
value: 41.409
- type: map_at_3
value: 36.047000000000004
- type: map_at_5
value: 38.104
- type: mrr_at_1
value: 34.703
- type: mrr_at_10
value: 44.773
- type: mrr_at_100
value: 45.64
- type: mrr_at_1000
value: 45.678999999999995
- type: mrr_at_3
value: 41.705
- type: mrr_at_5
value: 43.406
- type: ndcg_at_1
value: 34.703
- type: ndcg_at_10
value: 46.271
- type: ndcg_at_100
value: 52.037
- type: ndcg_at_1000
value: 53.81700000000001
- type: ndcg_at_3
value: 39.966
- type: ndcg_at_5
value: 42.801
- type: precision_at_1
value: 34.703
- type: precision_at_10
value: 8.744
- type: precision_at_100
value: 1.348
- type: precision_at_1000
value: 0.167
- type: precision_at_3
value: 19.102
- type: precision_at_5
value: 13.836
- type: recall_at_1
value: 28.397
- type: recall_at_10
value: 60.299
- type: recall_at_100
value: 84.595
- type: recall_at_1000
value: 96.155
- type: recall_at_3
value: 43.065
- type: recall_at_5
value: 50.371
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackRetrieval
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
split: test
type: BeIR/cqadupstack
metrics:
- type: map_at_1
value: 28.044333333333338
- type: map_at_10
value: 38.78691666666666
- type: map_at_100
value: 40.113
- type: map_at_1000
value: 40.22125
- type: map_at_3
value: 35.52966666666667
- type: map_at_5
value: 37.372749999999996
- type: mrr_at_1
value: 33.159083333333335
- type: mrr_at_10
value: 42.913583333333335
- type: mrr_at_100
value: 43.7845
- type: mrr_at_1000
value: 43.830333333333336
- type: mrr_at_3
value: 40.29816666666667
- type: mrr_at_5
value: 41.81366666666667
- type: ndcg_at_1
value: 33.159083333333335
- type: ndcg_at_10
value: 44.75750000000001
- type: ndcg_at_100
value: 50.13658333333334
- type: ndcg_at_1000
value: 52.037
- type: ndcg_at_3
value: 39.34258333333334
- type: ndcg_at_5
value: 41.93708333333333
- type: precision_at_1
value: 33.159083333333335
- type: precision_at_10
value: 7.952416666666667
- type: precision_at_100
value: 1.2571666666666668
- type: precision_at_1000
value: 0.16099999999999998
- type: precision_at_3
value: 18.303833333333337
- type: precision_at_5
value: 13.057083333333333
- type: recall_at_1
value: 28.044333333333338
- type: recall_at_10
value: 58.237249999999996
- type: recall_at_100
value: 81.35391666666666
- type: recall_at_1000
value: 94.21283333333334
- type: recall_at_3
value: 43.32341666666667
- type: recall_at_5
value: 49.94908333333333
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackStatsRetrieval
revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a
split: test
type: BeIR/cqadupstack
metrics:
- type: map_at_1
value: 27.838
- type: map_at_10
value: 36.04
- type: map_at_100
value: 37.113
- type: map_at_1000
value: 37.204
- type: map_at_3
value: 33.585
- type: map_at_5
value: 34.845
- type: mrr_at_1
value: 30.982
- type: mrr_at_10
value: 39.105000000000004
- type: mrr_at_100
value: 39.98
- type: mrr_at_1000
value: 40.042
- type: mrr_at_3
value: 36.912
- type: mrr_at_5
value: 38.062000000000005
- type: ndcg_at_1
value: 30.982
- type: ndcg_at_10
value: 40.982
- type: ndcg_at_100
value: 46.092
- type: ndcg_at_1000
value: 48.25
- type: ndcg_at_3
value: 36.41
- type: ndcg_at_5
value: 38.379999999999995
- type: precision_at_1
value: 30.982
- type: precision_at_10
value: 6.534
- type: precision_at_100
value: 0.9820000000000001
- type: precision_at_1000
value: 0.124
- type: precision_at_3
value: 15.745999999999999
- type: precision_at_5
value: 10.828
- type: recall_at_1
value: 27.838
- type: recall_at_10
value: 52.971000000000004
- type: recall_at_100
value: 76.357
- type: recall_at_1000
value: 91.973
- type: recall_at_3
value: 40.157
- type: recall_at_5
value: 45.147999999999996
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackTexRetrieval
revision: 46989137a86843e03a6195de44b09deda022eec7
split: test
type: BeIR/cqadupstack
metrics:
- type: map_at_1
value: 19.059
- type: map_at_10
value: 27.454
- type: map_at_100
value: 28.736
- type: map_at_1000
value: 28.865000000000002
- type: map_at_3
value: 24.773999999999997
- type: map_at_5
value: 26.266000000000002
- type: mrr_at_1
value: 23.125
- type: mrr_at_10
value: 31.267
- type: mrr_at_100
value: 32.32
- type: mrr_at_1000
value: 32.394
- type: mrr_at_3
value: 28.894
- type: mrr_at_5
value: 30.281000000000002
- type: ndcg_at_1
value: 23.125
- type: ndcg_at_10
value: 32.588
- type: ndcg_at_100
value: 38.432
- type: ndcg_at_1000
value: 41.214
- type: ndcg_at_3
value: 27.938000000000002
- type: ndcg_at_5
value: 30.127
- type: precision_at_1
value: 23.125
- type: precision_at_10
value: 5.9639999999999995
- type: precision_at_100
value: 1.047
- type: precision_at_1000
value: 0.148
- type: precision_at_3
value: 13.294
- type: precision_at_5
value: 9.628
- type: recall_at_1
value: 19.059
- type: recall_at_10
value: 44.25
- type: recall_at_100
value: 69.948
- type: recall_at_1000
value: 89.35300000000001
- type: recall_at_3
value: 31.114000000000004
- type: recall_at_5
value: 36.846000000000004
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackUnixRetrieval
revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53
split: test
type: BeIR/cqadupstack
metrics:
- type: map_at_1
value: 28.355999999999998
- type: map_at_10
value: 39.055
- type: map_at_100
value: 40.486
- type: map_at_1000
value: 40.571
- type: map_at_3
value: 35.69
- type: map_at_5
value: 37.605
- type: mrr_at_1
value: 33.302
- type: mrr_at_10
value: 42.986000000000004
- type: mrr_at_100
value: 43.957
- type: mrr_at_1000
value: 43.996
- type: mrr_at_3
value: 40.111999999999995
- type: mrr_at_5
value: 41.735
- type: ndcg_at_1
value: 33.302
- type: ndcg_at_10
value: 44.962999999999994
- type: ndcg_at_100
value: 50.917
- type: ndcg_at_1000
value: 52.622
- type: ndcg_at_3
value: 39.182
- type: ndcg_at_5
value: 41.939
- type: precision_at_1
value: 33.302
- type: precision_at_10
value: 7.779999999999999
- type: precision_at_100
value: 1.203
- type: precision_at_1000
value: 0.145
- type: precision_at_3
value: 18.035
- type: precision_at_5
value: 12.873000000000001
- type: recall_at_1
value: 28.355999999999998
- type: recall_at_10
value: 58.782000000000004
- type: recall_at_100
value: 84.02199999999999
- type: recall_at_1000
value: 95.511
- type: recall_at_3
value: 43.126999999999995
- type: recall_at_5
value: 50.14999999999999
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackWebmastersRetrieval
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
split: test
type: BeIR/cqadupstack
metrics:
- type: map_at_1
value: 27.391
- type: map_at_10
value: 37.523
- type: map_at_100
value: 39.312000000000005
- type: map_at_1000
value: 39.54
- type: map_at_3
value: 34.231
- type: map_at_5
value: 36.062
- type: mrr_at_1
value: 32.016
- type: mrr_at_10
value: 41.747
- type: mrr_at_100
value: 42.812
- type: mrr_at_1000
value: 42.844
- type: mrr_at_3
value: 39.129999999999995
- type: mrr_at_5
value: 40.524
- type: ndcg_at_1
value: 32.016
- type: ndcg_at_10
value: 43.826
- type: ndcg_at_100
value: 50.373999999999995
- type: ndcg_at_1000
value: 52.318
- type: ndcg_at_3
value: 38.479
- type: ndcg_at_5
value: 40.944
- type: precision_at_1
value: 32.016
- type: precision_at_10
value: 8.280999999999999
- type: precision_at_100
value: 1.6760000000000002
- type: precision_at_1000
value: 0.25
- type: precision_at_3
value: 18.05
- type: precision_at_5
value: 13.083
- type: recall_at_1
value: 27.391
- type: recall_at_10
value: 56.928999999999995
- type: recall_at_100
value: 85.169
- type: recall_at_1000
value: 96.665
- type: recall_at_3
value: 42.264
- type: recall_at_5
value: 48.556
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackWordpressRetrieval
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
split: test
type: BeIR/cqadupstack
metrics:
- type: map_at_1
value: 18.398
- type: map_at_10
value: 27.929
- type: map_at_100
value: 29.032999999999998
- type: map_at_1000
value: 29.126
- type: map_at_3
value: 25.070999999999998
- type: map_at_5
value: 26.583000000000002
- type: mrr_at_1
value: 19.963
- type: mrr_at_10
value: 29.997
- type: mrr_at_100
value: 30.9
- type: mrr_at_1000
value: 30.972
- type: mrr_at_3
value: 27.264
- type: mrr_at_5
value: 28.826
- type: ndcg_at_1
value: 19.963
- type: ndcg_at_10
value: 33.678999999999995
- type: ndcg_at_100
value: 38.931
- type: ndcg_at_1000
value: 41.379
- type: ndcg_at_3
value: 28.000000000000004
- type: ndcg_at_5
value: 30.637999999999998
- type: precision_at_1
value: 19.963
- type: precision_at_10
value: 5.7299999999999995
- type: precision_at_100
value: 0.902
- type: precision_at_1000
value: 0.122
- type: precision_at_3
value: 12.631
- type: precision_at_5
value: 9.057
- type: recall_at_1
value: 18.398
- type: recall_at_10
value: 49.254
- type: recall_at_100
value: 73.182
- type: recall_at_1000
value: 91.637
- type: recall_at_3
value: 34.06
- type: recall_at_5
value: 40.416000000000004
task:
type: Retrieval
- dataset:
config: default
name: MTEB ClimateFEVER
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
split: test
type: mteb/climate-fever
metrics:
- type: map_at_1
value: 19.681
- type: map_at_10
value: 32.741
- type: map_at_100
value: 34.811
- type: map_at_1000
value: 35.003
- type: map_at_3
value: 27.697
- type: map_at_5
value: 30.372
- type: mrr_at_1
value: 44.951
- type: mrr_at_10
value: 56.34400000000001
- type: mrr_at_100
value: 56.961
- type: mrr_at_1000
value: 56.987
- type: mrr_at_3
value: 53.681
- type: mrr_at_5
value: 55.407
- type: ndcg_at_1
value: 44.951
- type: ndcg_at_10
value: 42.905
- type: ndcg_at_100
value: 49.95
- type: ndcg_at_1000
value: 52.917
- type: ndcg_at_3
value: 36.815
- type: ndcg_at_5
value: 38.817
- type: precision_at_1
value: 44.951
- type: precision_at_10
value: 12.989999999999998
- type: precision_at_100
value: 2.068
- type: precision_at_1000
value: 0.263
- type: precision_at_3
value: 27.275
- type: precision_at_5
value: 20.365
- type: recall_at_1
value: 19.681
- type: recall_at_10
value: 48.272999999999996
- type: recall_at_100
value: 71.87400000000001
- type: recall_at_1000
value: 87.929
- type: recall_at_3
value: 32.653999999999996
- type: recall_at_5
value: 39.364
task:
type: Retrieval
- dataset:
config: default
name: MTEB DBPedia
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
split: test
type: mteb/dbpedia
metrics:
- type: map_at_1
value: 10.231
- type: map_at_10
value: 22.338
- type: map_at_100
value: 31.927
- type: map_at_1000
value: 33.87
- type: map_at_3
value: 15.559999999999999
- type: map_at_5
value: 18.239
- type: mrr_at_1
value: 75.0
- type: mrr_at_10
value: 81.303
- type: mrr_at_100
value: 81.523
- type: mrr_at_1000
value: 81.53
- type: mrr_at_3
value: 80.083
- type: mrr_at_5
value: 80.758
- type: ndcg_at_1
value: 64.625
- type: ndcg_at_10
value: 48.687000000000005
- type: ndcg_at_100
value: 52.791
- type: ndcg_at_1000
value: 60.041999999999994
- type: ndcg_at_3
value: 53.757999999999996
- type: ndcg_at_5
value: 50.76500000000001
- type: precision_at_1
value: 75.0
- type: precision_at_10
value: 38.3
- type: precision_at_100
value: 12.025
- type: precision_at_1000
value: 2.3970000000000002
- type: precision_at_3
value: 55.417
- type: precision_at_5
value: 47.5
- type: recall_at_1
value: 10.231
- type: recall_at_10
value: 27.697
- type: recall_at_100
value: 57.409
- type: recall_at_1000
value: 80.547
- type: recall_at_3
value: 16.668
- type: recall_at_5
value: 20.552
task:
type: Retrieval
- dataset:
config: default
name: MTEB EmotionClassification
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
split: test
type: mteb/emotion
metrics:
- type: accuracy
value: 61.365
- type: f1
value: 56.7540827912991
task:
type: Classification
- dataset:
config: default
name: MTEB FEVER
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
split: test
type: mteb/fever
metrics:
- type: map_at_1
value: 83.479
- type: map_at_10
value: 88.898
- type: map_at_100
value: 89.11
- type: map_at_1000
value: 89.12400000000001
- type: map_at_3
value: 88.103
- type: map_at_5
value: 88.629
- type: mrr_at_1
value: 89.934
- type: mrr_at_10
value: 93.91000000000001
- type: mrr_at_100
value: 93.937
- type: mrr_at_1000
value: 93.938
- type: mrr_at_3
value: 93.62700000000001
- type: mrr_at_5
value: 93.84599999999999
- type: ndcg_at_1
value: 89.934
- type: ndcg_at_10
value: 91.574
- type: ndcg_at_100
value: 92.238
- type: ndcg_at_1000
value: 92.45
- type: ndcg_at_3
value: 90.586
- type: ndcg_at_5
value: 91.16300000000001
- type: precision_at_1
value: 89.934
- type: precision_at_10
value: 10.555
- type: precision_at_100
value: 1.1159999999999999
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_3
value: 33.588
- type: precision_at_5
value: 20.642
- type: recall_at_1
value: 83.479
- type: recall_at_10
value: 94.971
- type: recall_at_100
value: 97.397
- type: recall_at_1000
value: 98.666
- type: recall_at_3
value: 92.24799999999999
- type: recall_at_5
value: 93.797
task:
type: Retrieval
- dataset:
config: default
name: MTEB FiQA2018
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
split: test
type: mteb/fiqa
metrics:
- type: map_at_1
value: 27.16
- type: map_at_10
value: 45.593
- type: map_at_100
value: 47.762
- type: map_at_1000
value: 47.899
- type: map_at_3
value: 39.237
- type: map_at_5
value: 42.970000000000006
- type: mrr_at_1
value: 52.623
- type: mrr_at_10
value: 62.637
- type: mrr_at_100
value: 63.169
- type: mrr_at_1000
value: 63.185
- type: mrr_at_3
value: 59.928000000000004
- type: mrr_at_5
value: 61.702999999999996
- type: ndcg_at_1
value: 52.623
- type: ndcg_at_10
value: 54.701
- type: ndcg_at_100
value: 61.263
- type: ndcg_at_1000
value: 63.134
- type: ndcg_at_3
value: 49.265
- type: ndcg_at_5
value: 51.665000000000006
- type: precision_at_1
value: 52.623
- type: precision_at_10
value: 15.185
- type: precision_at_100
value: 2.202
- type: precision_at_1000
value: 0.254
- type: precision_at_3
value: 32.767
- type: precision_at_5
value: 24.722
- type: recall_at_1
value: 27.16
- type: recall_at_10
value: 63.309000000000005
- type: recall_at_100
value: 86.722
- type: recall_at_1000
value: 97.505
- type: recall_at_3
value: 45.045
- type: recall_at_5
value: 54.02400000000001
task:
type: Retrieval
- dataset:
config: default
name: MTEB HotpotQA
revision: ab518f4d6fcca38d87c25209f94beba119d02014
split: test
type: mteb/hotpotqa
metrics:
- type: map_at_1
value: 42.573
- type: map_at_10
value: 59.373
- type: map_at_100
value: 60.292
- type: map_at_1000
value: 60.358999999999995
- type: map_at_3
value: 56.159000000000006
- type: map_at_5
value: 58.123999999999995
- type: mrr_at_1
value: 85.14500000000001
- type: mrr_at_10
value: 89.25999999999999
- type: mrr_at_100
value: 89.373
- type: mrr_at_1000
value: 89.377
- type: mrr_at_3
value: 88.618
- type: mrr_at_5
value: 89.036
- type: ndcg_at_1
value: 85.14500000000001
- type: ndcg_at_10
value: 68.95
- type: ndcg_at_100
value: 71.95
- type: ndcg_at_1000
value: 73.232
- type: ndcg_at_3
value: 64.546
- type: ndcg_at_5
value: 66.945
- type: precision_at_1
value: 85.14500000000001
- type: precision_at_10
value: 13.865
- type: precision_at_100
value: 1.619
- type: precision_at_1000
value: 0.179
- type: precision_at_3
value: 39.703
- type: precision_at_5
value: 25.718000000000004
- type: recall_at_1
value: 42.573
- type: recall_at_10
value: 69.325
- type: recall_at_100
value: 80.932
- type: recall_at_1000
value: 89.446
- type: recall_at_3
value: 59.553999999999995
- type: recall_at_5
value: 64.294
task:
type: Retrieval
- dataset:
config: default
name: MTEB ImdbClassification
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
split: test
type: mteb/imdb
metrics:
- type: accuracy
value: 95.8336
- type: ap
value: 93.78862962194073
- type: f1
value: 95.83192650728371
task:
type: Classification
- dataset:
config: default
name: MTEB MSMARCO
revision: c5a29a104738b98a9e76336939199e264163d4a0
split: dev
type: mteb/msmarco
metrics:
- type: map_at_1
value: 23.075000000000003
- type: map_at_10
value: 36.102000000000004
- type: map_at_100
value: 37.257
- type: map_at_1000
value: 37.3
- type: map_at_3
value: 32.144
- type: map_at_5
value: 34.359
- type: mrr_at_1
value: 23.711
- type: mrr_at_10
value: 36.671
- type: mrr_at_100
value: 37.763999999999996
- type: mrr_at_1000
value: 37.801
- type: mrr_at_3
value: 32.775
- type: mrr_at_5
value: 34.977000000000004
- type: ndcg_at_1
value: 23.711
- type: ndcg_at_10
value: 43.361
- type: ndcg_at_100
value: 48.839
- type: ndcg_at_1000
value: 49.88
- type: ndcg_at_3
value: 35.269
- type: ndcg_at_5
value: 39.224
- type: precision_at_1
value: 23.711
- type: precision_at_10
value: 6.866999999999999
- type: precision_at_100
value: 0.96
- type: precision_at_1000
value: 0.105
- type: precision_at_3
value: 15.096000000000002
- type: precision_at_5
value: 11.083
- type: recall_at_1
value: 23.075000000000003
- type: recall_at_10
value: 65.756
- type: recall_at_100
value: 90.88199999999999
- type: recall_at_1000
value: 98.739
- type: recall_at_3
value: 43.691
- type: recall_at_5
value: 53.15800000000001
task:
type: Retrieval
- dataset:
config: en
name: MTEB MTOPDomainClassification (en)
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
split: test
type: mteb/mtop_domain
metrics:
- type: accuracy
value: 97.69493844049248
- type: f1
value: 97.55048089616261
task:
type: Classification
- dataset:
config: en
name: MTEB MTOPIntentClassification (en)
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
split: test
type: mteb/mtop_intent
metrics:
- type: accuracy
value: 88.75968992248062
- type: f1
value: 72.26321223399123
task:
type: Classification
- dataset:
config: en
name: MTEB MassiveIntentClassification (en)
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 82.40080699394754
- type: f1
value: 79.62590029057968
task:
type: Classification
- dataset:
config: en
name: MTEB MassiveScenarioClassification (en)
revision: 7d571f92784cd94a019292a1f45445077d0ef634
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 84.49562878278414
- type: f1
value: 84.0040193313333
task:
type: Classification
- dataset:
config: default
name: MTEB MedrxivClusteringP2P
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
split: test
type: mteb/medrxiv-clustering-p2p
metrics:
- type: v_measure
value: 39.386760057101945
task:
type: Clustering
- dataset:
config: default
name: MTEB MedrxivClusteringS2S
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
split: test
type: mteb/medrxiv-clustering-s2s
metrics:
- type: v_measure
value: 37.89687154075537
task:
type: Clustering
- dataset:
config: default
name: MTEB MindSmallReranking
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
split: test
type: mteb/mind_small
metrics:
- type: map
value: 33.94151656057482
- type: mrr
value: 35.32684700746953
task:
type: Reranking
- dataset:
config: default
name: MTEB NFCorpus
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
split: test
type: mteb/nfcorpus
metrics:
- type: map_at_1
value: 6.239999999999999
- type: map_at_10
value: 14.862
- type: map_at_100
value: 18.955
- type: map_at_1000
value: 20.694000000000003
- type: map_at_3
value: 10.683
- type: map_at_5
value: 12.674
- type: mrr_at_1
value: 50.15500000000001
- type: mrr_at_10
value: 59.697
- type: mrr_at_100
value: 60.095
- type: mrr_at_1000
value: 60.129999999999995
- type: mrr_at_3
value: 58.35900000000001
- type: mrr_at_5
value: 58.839
- type: ndcg_at_1
value: 48.452
- type: ndcg_at_10
value: 39.341
- type: ndcg_at_100
value: 35.866
- type: ndcg_at_1000
value: 45.111000000000004
- type: ndcg_at_3
value: 44.527
- type: ndcg_at_5
value: 42.946
- type: precision_at_1
value: 50.15500000000001
- type: precision_at_10
value: 29.536
- type: precision_at_100
value: 9.142
- type: precision_at_1000
value: 2.2849999999999997
- type: precision_at_3
value: 41.899
- type: precision_at_5
value: 37.647000000000006
- type: recall_at_1
value: 6.239999999999999
- type: recall_at_10
value: 19.278000000000002
- type: recall_at_100
value: 36.074
- type: recall_at_1000
value: 70.017
- type: recall_at_3
value: 12.066
- type: recall_at_5
value: 15.254000000000001
task:
type: Retrieval
- dataset:
config: default
name: MTEB NQ
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
split: test
type: mteb/nq
metrics:
- type: map_at_1
value: 39.75
- type: map_at_10
value: 56.443
- type: map_at_100
value: 57.233999999999995
- type: map_at_1000
value: 57.249
- type: map_at_3
value: 52.032999999999994
- type: map_at_5
value: 54.937999999999995
- type: mrr_at_1
value: 44.728
- type: mrr_at_10
value: 58.939
- type: mrr_at_100
value: 59.489000000000004
- type: mrr_at_1000
value: 59.499
- type: mrr_at_3
value: 55.711999999999996
- type: mrr_at_5
value: 57.89
- type: ndcg_at_1
value: 44.728
- type: ndcg_at_10
value: 63.998999999999995
- type: ndcg_at_100
value: 67.077
- type: ndcg_at_1000
value: 67.40899999999999
- type: ndcg_at_3
value: 56.266000000000005
- type: ndcg_at_5
value: 60.88
- type: precision_at_1
value: 44.728
- type: precision_at_10
value: 10.09
- type: precision_at_100
value: 1.1809999999999998
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 25.145
- type: precision_at_5
value: 17.822
- type: recall_at_1
value: 39.75
- type: recall_at_10
value: 84.234
- type: recall_at_100
value: 97.055
- type: recall_at_1000
value: 99.517
- type: recall_at_3
value: 64.851
- type: recall_at_5
value: 75.343
task:
type: Retrieval
- dataset:
config: default
name: MTEB QuoraRetrieval
revision: None
split: test
type: mteb/quora
metrics:
- type: map_at_1
value: 72.085
- type: map_at_10
value: 86.107
- type: map_at_100
value: 86.727
- type: map_at_1000
value: 86.74
- type: map_at_3
value: 83.21
- type: map_at_5
value: 85.06
- type: mrr_at_1
value: 82.94
- type: mrr_at_10
value: 88.845
- type: mrr_at_100
value: 88.926
- type: mrr_at_1000
value: 88.927
- type: mrr_at_3
value: 87.993
- type: mrr_at_5
value: 88.62299999999999
- type: ndcg_at_1
value: 82.97
- type: ndcg_at_10
value: 89.645
- type: ndcg_at_100
value: 90.717
- type: ndcg_at_1000
value: 90.78
- type: ndcg_at_3
value: 86.99900000000001
- type: ndcg_at_5
value: 88.52600000000001
- type: precision_at_1
value: 82.97
- type: precision_at_10
value: 13.569
- type: precision_at_100
value: 1.539
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 38.043
- type: precision_at_5
value: 24.992
- type: recall_at_1
value: 72.085
- type: recall_at_10
value: 96.262
- type: recall_at_100
value: 99.77000000000001
- type: recall_at_1000
value: 99.997
- type: recall_at_3
value: 88.652
- type: recall_at_5
value: 93.01899999999999
task:
type: Retrieval
- dataset:
config: default
name: MTEB RedditClustering
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
split: test
type: mteb/reddit-clustering
metrics:
- type: v_measure
value: 55.82153952668092
task:
type: Clustering
- dataset:
config: default
name: MTEB RedditClusteringP2P
revision: 282350215ef01743dc01b456c7f5241fa8937f16
split: test
type: mteb/reddit-clustering-p2p
metrics:
- type: v_measure
value: 62.094465801879295
task:
type: Clustering
- dataset:
config: default
name: MTEB SCIDOCS
revision: None
split: test
type: mteb/scidocs
metrics:
- type: map_at_1
value: 5.688
- type: map_at_10
value: 15.201999999999998
- type: map_at_100
value: 18.096
- type: map_at_1000
value: 18.481
- type: map_at_3
value: 10.734
- type: map_at_5
value: 12.94
- type: mrr_at_1
value: 28.000000000000004
- type: mrr_at_10
value: 41.101
- type: mrr_at_100
value: 42.202
- type: mrr_at_1000
value: 42.228
- type: mrr_at_3
value: 37.683
- type: mrr_at_5
value: 39.708
- type: ndcg_at_1
value: 28.000000000000004
- type: ndcg_at_10
value: 24.976000000000003
- type: ndcg_at_100
value: 35.129
- type: ndcg_at_1000
value: 40.77
- type: ndcg_at_3
value: 23.787
- type: ndcg_at_5
value: 20.816000000000003
- type: precision_at_1
value: 28.000000000000004
- type: precision_at_10
value: 13.04
- type: precision_at_100
value: 2.761
- type: precision_at_1000
value: 0.41000000000000003
- type: precision_at_3
value: 22.6
- type: precision_at_5
value: 18.52
- type: recall_at_1
value: 5.688
- type: recall_at_10
value: 26.43
- type: recall_at_100
value: 56.02
- type: recall_at_1000
value: 83.21
- type: recall_at_3
value: 13.752
- type: recall_at_5
value: 18.777
task:
type: Retrieval
- dataset:
config: default
name: MTEB SICK-R
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
split: test
type: mteb/sickr-sts
metrics:
- type: cos_sim_pearson
value: 85.15084859283178
- type: cos_sim_spearman
value: 80.49030614009419
- type: euclidean_pearson
value: 81.84574978672468
- type: euclidean_spearman
value: 79.89787150656818
- type: manhattan_pearson
value: 81.63076538567131
- type: manhattan_spearman
value: 79.69867352121841
task:
type: STS
- dataset:
config: default
name: MTEB STS12
revision: a0d554a64d88156834ff5ae9920b964011b16384
split: test
type: mteb/sts12-sts
metrics:
- type: cos_sim_pearson
value: 84.64097921490992
- type: cos_sim_spearman
value: 77.25370084896514
- type: euclidean_pearson
value: 82.71210826468788
- type: euclidean_spearman
value: 78.50445584994826
- type: manhattan_pearson
value: 82.92580164330298
- type: manhattan_spearman
value: 78.69686891301019
task:
type: STS
- dataset:
config: default
name: MTEB STS13
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
split: test
type: mteb/sts13-sts
metrics:
- type: cos_sim_pearson
value: 87.24596417308994
- type: cos_sim_spearman
value: 87.79454220555091
- type: euclidean_pearson
value: 87.40242561671164
- type: euclidean_spearman
value: 88.25955597373556
- type: manhattan_pearson
value: 87.25160240485849
- type: manhattan_spearman
value: 88.155794979818
task:
type: STS
- dataset:
config: default
name: MTEB STS14
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
split: test
type: mteb/sts14-sts
metrics:
- type: cos_sim_pearson
value: 84.44914233422564
- type: cos_sim_spearman
value: 82.91015471820322
- type: euclidean_pearson
value: 84.7206656630327
- type: euclidean_spearman
value: 83.86408872059216
- type: manhattan_pearson
value: 84.72816725158454
- type: manhattan_spearman
value: 84.01603388572788
task:
type: STS
- dataset:
config: default
name: MTEB STS15
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
split: test
type: mteb/sts15-sts
metrics:
- type: cos_sim_pearson
value: 87.6168026237477
- type: cos_sim_spearman
value: 88.45414278092397
- type: euclidean_pearson
value: 88.57023240882022
- type: euclidean_spearman
value: 89.04102190922094
- type: manhattan_pearson
value: 88.66695535796354
- type: manhattan_spearman
value: 89.19898476680969
task:
type: STS
- dataset:
config: default
name: MTEB STS16
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
split: test
type: mteb/sts16-sts
metrics:
- type: cos_sim_pearson
value: 84.27925826089424
- type: cos_sim_spearman
value: 85.45291099550461
- type: euclidean_pearson
value: 83.63853036580834
- type: euclidean_spearman
value: 84.33468035821484
- type: manhattan_pearson
value: 83.72778773251596
- type: manhattan_spearman
value: 84.51583132445376
task:
type: STS
- dataset:
config: en-en
name: MTEB STS17 (en-en)
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
split: test
type: mteb/sts17-crosslingual-sts
metrics:
- type: cos_sim_pearson
value: 89.67375185692552
- type: cos_sim_spearman
value: 90.32542469203855
- type: euclidean_pearson
value: 89.63513717951847
- type: euclidean_spearman
value: 89.87760271003745
- type: manhattan_pearson
value: 89.28381452982924
- type: manhattan_spearman
value: 89.53568197785721
task:
type: STS
- dataset:
config: en
name: MTEB STS22 (en)
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
split: test
type: mteb/sts22-crosslingual-sts
metrics:
- type: cos_sim_pearson
value: 66.24644693819846
- type: cos_sim_spearman
value: 66.09889420525377
- type: euclidean_pearson
value: 63.72551583520747
- type: euclidean_spearman
value: 63.01385470780679
- type: manhattan_pearson
value: 64.09258157214097
- type: manhattan_spearman
value: 63.080517752822594
task:
type: STS
- dataset:
config: default
name: MTEB STSBenchmark
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
split: test
type: mteb/stsbenchmark-sts
metrics:
- type: cos_sim_pearson
value: 86.27321463839989
- type: cos_sim_spearman
value: 86.37572865993327
- type: euclidean_pearson
value: 86.36268020198149
- type: euclidean_spearman
value: 86.31089339478922
- type: manhattan_pearson
value: 86.4260445761947
- type: manhattan_spearman
value: 86.45885895320457
task:
type: STS
- dataset:
config: default
name: MTEB SciDocsRR
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
split: test
type: mteb/scidocs-reranking
metrics:
- type: map
value: 86.52456702387798
- type: mrr
value: 96.34556529164372
task:
type: Reranking
- dataset:
config: default
name: MTEB SciFact
revision: 0228b52cf27578f30900b9e5271d331663a030d7
split: test
type: mteb/scifact
metrics:
- type: map_at_1
value: 61.99400000000001
- type: map_at_10
value: 73.38799999999999
- type: map_at_100
value: 73.747
- type: map_at_1000
value: 73.75
- type: map_at_3
value: 70.04599999999999
- type: map_at_5
value: 72.095
- type: mrr_at_1
value: 65.0
- type: mrr_at_10
value: 74.42800000000001
- type: mrr_at_100
value: 74.722
- type: mrr_at_1000
value: 74.725
- type: mrr_at_3
value: 72.056
- type: mrr_at_5
value: 73.60600000000001
- type: ndcg_at_1
value: 65.0
- type: ndcg_at_10
value: 78.435
- type: ndcg_at_100
value: 79.922
- type: ndcg_at_1000
value: 80.00500000000001
- type: ndcg_at_3
value: 73.05199999999999
- type: ndcg_at_5
value: 75.98
- type: precision_at_1
value: 65.0
- type: precision_at_10
value: 10.5
- type: precision_at_100
value: 1.123
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 28.555999999999997
- type: precision_at_5
value: 19.0
- type: recall_at_1
value: 61.99400000000001
- type: recall_at_10
value: 92.72200000000001
- type: recall_at_100
value: 99.333
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 78.739
- type: recall_at_5
value: 85.828
task:
type: Retrieval
- dataset:
config: default
name: MTEB SprintDuplicateQuestions
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
split: test
type: mteb/sprintduplicatequestions-pairclassification
metrics:
- type: cos_sim_accuracy
value: 99.79009900990098
- type: cos_sim_ap
value: 95.3203137438653
- type: cos_sim_f1
value: 89.12386706948641
- type: cos_sim_precision
value: 89.75659229208925
- type: cos_sim_recall
value: 88.5
- type: dot_accuracy
value: 99.67821782178218
- type: dot_ap
value: 89.94069840000675
- type: dot_f1
value: 83.45902463549521
- type: dot_precision
value: 83.9231547017189
- type: dot_recall
value: 83.0
- type: euclidean_accuracy
value: 99.78613861386138
- type: euclidean_ap
value: 95.10648259135526
- type: euclidean_f1
value: 88.77338877338877
- type: euclidean_precision
value: 92.42424242424242
- type: euclidean_recall
value: 85.39999999999999
- type: manhattan_accuracy
value: 99.7950495049505
- type: manhattan_ap
value: 95.29987661320946
- type: manhattan_f1
value: 89.21313183949972
- type: manhattan_precision
value: 93.14472252448314
- type: manhattan_recall
value: 85.6
- type: max_accuracy
value: 99.7950495049505
- type: max_ap
value: 95.3203137438653
- type: max_f1
value: 89.21313183949972
task:
type: PairClassification
- dataset:
config: default
name: MTEB StackExchangeClustering
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
split: test
type: mteb/stackexchange-clustering
metrics:
- type: v_measure
value: 67.65446577183913
task:
type: Clustering
- dataset:
config: default
name: MTEB StackExchangeClusteringP2P
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
split: test
type: mteb/stackexchange-clustering-p2p
metrics:
- type: v_measure
value: 46.30749237193961
task:
type: Clustering
- dataset:
config: default
name: MTEB StackOverflowDupQuestions
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
split: test
type: mteb/stackoverflowdupquestions-reranking
metrics:
- type: map
value: 54.91481849959949
- type: mrr
value: 55.853506175197346
task:
type: Reranking
- dataset:
config: default
name: MTEB SummEval
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
split: test
type: mteb/summeval
metrics:
- type: cos_sim_pearson
value: 30.08196549170419
- type: cos_sim_spearman
value: 31.16661390597077
- type: dot_pearson
value: 29.892258410943466
- type: dot_spearman
value: 30.51328811965085
task:
type: Summarization
- dataset:
config: default
name: MTEB TRECCOVID
revision: None
split: test
type: mteb/trec-covid
metrics:
- type: map_at_1
value: 0.23900000000000002
- type: map_at_10
value: 2.173
- type: map_at_100
value: 14.24
- type: map_at_1000
value: 35.309000000000005
- type: map_at_3
value: 0.7100000000000001
- type: map_at_5
value: 1.163
- type: mrr_at_1
value: 92.0
- type: mrr_at_10
value: 96.0
- type: mrr_at_100
value: 96.0
- type: mrr_at_1000
value: 96.0
- type: mrr_at_3
value: 96.0
- type: mrr_at_5
value: 96.0
- type: ndcg_at_1
value: 90.0
- type: ndcg_at_10
value: 85.382
- type: ndcg_at_100
value: 68.03
- type: ndcg_at_1000
value: 61.021
- type: ndcg_at_3
value: 89.765
- type: ndcg_at_5
value: 88.444
- type: precision_at_1
value: 92.0
- type: precision_at_10
value: 88.0
- type: precision_at_100
value: 70.02000000000001
- type: precision_at_1000
value: 26.984
- type: precision_at_3
value: 94.0
- type: precision_at_5
value: 92.80000000000001
- type: recall_at_1
value: 0.23900000000000002
- type: recall_at_10
value: 2.313
- type: recall_at_100
value: 17.049
- type: recall_at_1000
value: 57.489999999999995
- type: recall_at_3
value: 0.737
- type: recall_at_5
value: 1.221
task:
type: Retrieval
- dataset:
config: default
name: MTEB Touche2020
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
split: test
type: mteb/touche2020
metrics:
- type: map_at_1
value: 2.75
- type: map_at_10
value: 11.29
- type: map_at_100
value: 18.032999999999998
- type: map_at_1000
value: 19.746
- type: map_at_3
value: 6.555
- type: map_at_5
value: 8.706999999999999
- type: mrr_at_1
value: 34.694
- type: mrr_at_10
value: 50.55
- type: mrr_at_100
value: 51.659
- type: mrr_at_1000
value: 51.659
- type: mrr_at_3
value: 47.278999999999996
- type: mrr_at_5
value: 49.728
- type: ndcg_at_1
value: 32.653
- type: ndcg_at_10
value: 27.894000000000002
- type: ndcg_at_100
value: 39.769
- type: ndcg_at_1000
value: 51.495999999999995
- type: ndcg_at_3
value: 32.954
- type: ndcg_at_5
value: 31.502999999999997
- type: precision_at_1
value: 34.694
- type: precision_at_10
value: 23.265
- type: precision_at_100
value: 7.898
- type: precision_at_1000
value: 1.58
- type: precision_at_3
value: 34.694
- type: precision_at_5
value: 31.429000000000002
- type: recall_at_1
value: 2.75
- type: recall_at_10
value: 16.953
- type: recall_at_100
value: 48.68
- type: recall_at_1000
value: 85.18599999999999
- type: recall_at_3
value: 7.710999999999999
- type: recall_at_5
value: 11.484
task:
type: Retrieval
- dataset:
config: default
name: MTEB ToxicConversationsClassification
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
split: test
type: mteb/toxic_conversations_50k
metrics:
- type: accuracy
value: 82.66099999999999
- type: ap
value: 25.555698090238337
- type: f1
value: 66.48402012461622
task:
type: Classification
- dataset:
config: default
name: MTEB TweetSentimentExtractionClassification
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
split: test
type: mteb/tweet_sentiment_extraction
metrics:
- type: accuracy
value: 72.94567062818335
- type: f1
value: 73.28139189595674
task:
type: Classification
- dataset:
config: default
name: MTEB TwentyNewsgroupsClustering
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
split: test
type: mteb/twentynewsgroups-clustering
metrics:
- type: v_measure
value: 49.581627240203474
task:
type: Clustering
- dataset:
config: default
name: MTEB TwitterSemEval2015
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
split: test
type: mteb/twittersemeval2015-pairclassification
metrics:
- type: cos_sim_accuracy
value: 87.78089050485785
- type: cos_sim_ap
value: 79.64487116574168
- type: cos_sim_f1
value: 72.46563021970964
- type: cos_sim_precision
value: 70.62359128474831
- type: cos_sim_recall
value: 74.40633245382587
- type: dot_accuracy
value: 86.2609524944865
- type: dot_ap
value: 75.513046857613
- type: dot_f1
value: 68.58213616489695
- type: dot_precision
value: 65.12455516014235
- type: dot_recall
value: 72.42744063324538
- type: euclidean_accuracy
value: 87.6080348095607
- type: euclidean_ap
value: 79.00204933649795
- type: euclidean_f1
value: 72.14495342605589
- type: euclidean_precision
value: 69.85421299728193
- type: euclidean_recall
value: 74.5910290237467
- type: manhattan_accuracy
value: 87.59611372712642
- type: manhattan_ap
value: 78.78523756706264
- type: manhattan_f1
value: 71.86499137718648
- type: manhattan_precision
value: 67.39833641404806
- type: manhattan_recall
value: 76.96569920844327
- type: max_accuracy
value: 87.78089050485785
- type: max_ap
value: 79.64487116574168
- type: max_f1
value: 72.46563021970964
task:
type: PairClassification
- dataset:
config: default
name: MTEB TwitterURLCorpus
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
split: test
type: mteb/twitterurlcorpus-pairclassification
metrics:
- type: cos_sim_accuracy
value: 89.98719292117825
- type: cos_sim_ap
value: 87.58146137353202
- type: cos_sim_f1
value: 80.28543232369239
- type: cos_sim_precision
value: 79.1735289714029
- type: cos_sim_recall
value: 81.42901139513397
- type: dot_accuracy
value: 88.9199363526992
- type: dot_ap
value: 84.98499998630417
- type: dot_f1
value: 78.21951400757969
- type: dot_precision
value: 75.58523624874336
- type: dot_recall
value: 81.04404065291038
- type: euclidean_accuracy
value: 89.77374160748244
- type: euclidean_ap
value: 87.35151562835209
- type: euclidean_f1
value: 79.92160922940393
- type: euclidean_precision
value: 76.88531587933979
- type: euclidean_recall
value: 83.20757622420696
- type: manhattan_accuracy
value: 89.72717041176699
- type: manhattan_ap
value: 87.34065592142515
- type: manhattan_f1
value: 79.85603419187943
- type: manhattan_precision
value: 77.82243332115455
- type: manhattan_recall
value: 81.99876809362489
- type: max_accuracy
value: 89.98719292117825
- type: max_ap
value: 87.58146137353202
- type: max_f1
value: 80.28543232369239
task:
type: PairClassification
- dataset:
config: default
name: MTEB AFQMC
revision: b44c3b011063adb25877c13823db83bb193913c4
split: validation
type: C-MTEB/AFQMC
metrics:
- type: cos_sim_pearson
value: 53.45954203592337
- type: cos_sim_spearman
value: 58.42154680418638
- type: euclidean_pearson
value: 56.41543791722753
- type: euclidean_spearman
value: 58.39328016640146
- type: manhattan_pearson
value: 56.318510356833876
- type: manhattan_spearman
value: 58.28423447818184
task:
type: STS
- dataset:
config: default
name: MTEB ATEC
revision: 0f319b1142f28d00e055a6770f3f726ae9b7d865
split: test
type: C-MTEB/ATEC
metrics:
- type: cos_sim_pearson
value: 50.78356460675945
- type: cos_sim_spearman
value: 55.6530411663269
- type: euclidean_pearson
value: 56.50763660417816
- type: euclidean_spearman
value: 55.733823335669065
- type: manhattan_pearson
value: 56.45323093512866
- type: manhattan_spearman
value: 55.63248619032702
task:
type: STS
- dataset:
config: zh
name: MTEB AmazonReviewsClassification (zh)
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
split: test
type: mteb/amazon_reviews_multi
metrics:
- type: accuracy
value: 47.209999999999994
- type: f1
value: 46.08892432018655
task:
type: Classification
- dataset:
config: default
name: MTEB BQ
revision: e3dda5e115e487b39ec7e618c0c6a29137052a55
split: test
type: C-MTEB/BQ
metrics:
- type: cos_sim_pearson
value: 70.25573992001478
- type: cos_sim_spearman
value: 73.85247134951433
- type: euclidean_pearson
value: 72.60033082168442
- type: euclidean_spearman
value: 73.72445893756499
- type: manhattan_pearson
value: 72.59932284620231
- type: manhattan_spearman
value: 73.68002490614583
task:
type: STS
- dataset:
config: default
name: MTEB CLSClusteringP2P
revision: 4b6227591c6c1a73bc76b1055f3b7f3588e72476
split: test
type: C-MTEB/CLSClusteringP2P
metrics:
- type: v_measure
value: 45.21317724305628
task:
type: Clustering
- dataset:
config: default
name: MTEB CLSClusteringS2S
revision: e458b3f5414b62b7f9f83499ac1f5497ae2e869f
split: test
type: C-MTEB/CLSClusteringS2S
metrics:
- type: v_measure
value: 42.49825170976724
task:
type: Clustering
- dataset:
config: default
name: MTEB CMedQAv1
revision: 8d7f1e942507dac42dc58017c1a001c3717da7df
split: test
type: C-MTEB/CMedQAv1-reranking
metrics:
- type: map
value: 88.15661686810597
- type: mrr
value: 90.11222222222223
task:
type: Reranking
- dataset:
config: default
name: MTEB CMedQAv2
revision: 23d186750531a14a0357ca22cd92d712fd512ea0
split: test
type: C-MTEB/CMedQAv2-reranking
metrics:
- type: map
value: 88.1204726064383
- type: mrr
value: 90.20142857142858
task:
type: Reranking
- dataset:
config: default
name: MTEB CmedqaRetrieval
revision: cd540c506dae1cf9e9a59c3e06f42030d54e7301
split: dev
type: C-MTEB/CmedqaRetrieval
metrics:
- type: map_at_1
value: 27.224999999999998
- type: map_at_10
value: 40.169
- type: map_at_100
value: 42.0
- type: map_at_1000
value: 42.109
- type: map_at_3
value: 35.76
- type: map_at_5
value: 38.221
- type: mrr_at_1
value: 40.56
- type: mrr_at_10
value: 49.118
- type: mrr_at_100
value: 50.092999999999996
- type: mrr_at_1000
value: 50.133
- type: mrr_at_3
value: 46.507
- type: mrr_at_5
value: 47.973
- type: ndcg_at_1
value: 40.56
- type: ndcg_at_10
value: 46.972
- type: ndcg_at_100
value: 54.04
- type: ndcg_at_1000
value: 55.862
- type: ndcg_at_3
value: 41.36
- type: ndcg_at_5
value: 43.704
- type: precision_at_1
value: 40.56
- type: precision_at_10
value: 10.302999999999999
- type: precision_at_100
value: 1.606
- type: precision_at_1000
value: 0.184
- type: precision_at_3
value: 23.064
- type: precision_at_5
value: 16.764000000000003
- type: recall_at_1
value: 27.224999999999998
- type: recall_at_10
value: 58.05200000000001
- type: recall_at_100
value: 87.092
- type: recall_at_1000
value: 99.099
- type: recall_at_3
value: 41.373
- type: recall_at_5
value: 48.453
task:
type: Retrieval
- dataset:
config: default
name: MTEB Cmnli
revision: 41bc36f332156f7adc9e38f53777c959b2ae9766
split: validation
type: C-MTEB/CMNLI
metrics:
- type: cos_sim_accuracy
value: 77.40228502705953
- type: cos_sim_ap
value: 86.22359172956327
- type: cos_sim_f1
value: 78.96328293736501
- type: cos_sim_precision
value: 73.36945615091311
- type: cos_sim_recall
value: 85.48047696983868
- type: dot_accuracy
value: 75.53818400481059
- type: dot_ap
value: 83.70164011305312
- type: dot_f1
value: 77.67298719348754
- type: dot_precision
value: 67.49482401656314
- type: dot_recall
value: 91.46598082768296
- type: euclidean_accuracy
value: 77.94347564642213
- type: euclidean_ap
value: 86.4652108728609
- type: euclidean_f1
value: 79.15555555555555
- type: euclidean_precision
value: 75.41816641964853
- type: euclidean_recall
value: 83.28267477203647
- type: manhattan_accuracy
value: 77.45039085989175
- type: manhattan_ap
value: 86.09986583900665
- type: manhattan_f1
value: 78.93669264438988
- type: manhattan_precision
value: 72.63261296660117
- type: manhattan_recall
value: 86.43909282207154
- type: max_accuracy
value: 77.94347564642213
- type: max_ap
value: 86.4652108728609
- type: max_f1
value: 79.15555555555555
task:
type: PairClassification
- dataset:
config: default
name: MTEB CovidRetrieval
revision: 1271c7809071a13532e05f25fb53511ffce77117
split: dev
type: C-MTEB/CovidRetrieval
metrics:
- type: map_at_1
value: 69.336
- type: map_at_10
value: 77.16
- type: map_at_100
value: 77.47500000000001
- type: map_at_1000
value: 77.482
- type: map_at_3
value: 75.42999999999999
- type: map_at_5
value: 76.468
- type: mrr_at_1
value: 69.44200000000001
- type: mrr_at_10
value: 77.132
- type: mrr_at_100
value: 77.43299999999999
- type: mrr_at_1000
value: 77.44
- type: mrr_at_3
value: 75.395
- type: mrr_at_5
value: 76.459
- type: ndcg_at_1
value: 69.547
- type: ndcg_at_10
value: 80.794
- type: ndcg_at_100
value: 82.245
- type: ndcg_at_1000
value: 82.40899999999999
- type: ndcg_at_3
value: 77.303
- type: ndcg_at_5
value: 79.168
- type: precision_at_1
value: 69.547
- type: precision_at_10
value: 9.305
- type: precision_at_100
value: 0.9979999999999999
- type: precision_at_1000
value: 0.101
- type: precision_at_3
value: 27.749000000000002
- type: precision_at_5
value: 17.576
- type: recall_at_1
value: 69.336
- type: recall_at_10
value: 92.097
- type: recall_at_100
value: 98.736
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 82.64
- type: recall_at_5
value: 87.144
task:
type: Retrieval
- dataset:
config: default
name: MTEB DuRetrieval
revision: a1a333e290fe30b10f3f56498e3a0d911a693ced
split: dev
type: C-MTEB/DuRetrieval
metrics:
- type: map_at_1
value: 26.817999999999998
- type: map_at_10
value: 82.67
- type: map_at_100
value: 85.304
- type: map_at_1000
value: 85.334
- type: map_at_3
value: 57.336
- type: map_at_5
value: 72.474
- type: mrr_at_1
value: 91.45
- type: mrr_at_10
value: 94.272
- type: mrr_at_100
value: 94.318
- type: mrr_at_1000
value: 94.32000000000001
- type: mrr_at_3
value: 94.0
- type: mrr_at_5
value: 94.17699999999999
- type: ndcg_at_1
value: 91.45
- type: ndcg_at_10
value: 89.404
- type: ndcg_at_100
value: 91.724
- type: ndcg_at_1000
value: 91.973
- type: ndcg_at_3
value: 88.104
- type: ndcg_at_5
value: 87.25699999999999
- type: precision_at_1
value: 91.45
- type: precision_at_10
value: 42.585
- type: precision_at_100
value: 4.838
- type: precision_at_1000
value: 0.49
- type: precision_at_3
value: 78.8
- type: precision_at_5
value: 66.66
- type: recall_at_1
value: 26.817999999999998
- type: recall_at_10
value: 90.67
- type: recall_at_100
value: 98.36200000000001
- type: recall_at_1000
value: 99.583
- type: recall_at_3
value: 59.614999999999995
- type: recall_at_5
value: 77.05199999999999
task:
type: Retrieval
- dataset:
config: default
name: MTEB EcomRetrieval
revision: 687de13dc7294d6fd9be10c6945f9e8fec8166b9
split: dev
type: C-MTEB/EcomRetrieval
metrics:
- type: map_at_1
value: 47.699999999999996
- type: map_at_10
value: 57.589999999999996
- type: map_at_100
value: 58.226
- type: map_at_1000
value: 58.251
- type: map_at_3
value: 55.233
- type: map_at_5
value: 56.633
- type: mrr_at_1
value: 47.699999999999996
- type: mrr_at_10
value: 57.589999999999996
- type: mrr_at_100
value: 58.226
- type: mrr_at_1000
value: 58.251
- type: mrr_at_3
value: 55.233
- type: mrr_at_5
value: 56.633
- type: ndcg_at_1
value: 47.699999999999996
- type: ndcg_at_10
value: 62.505
- type: ndcg_at_100
value: 65.517
- type: ndcg_at_1000
value: 66.19800000000001
- type: ndcg_at_3
value: 57.643
- type: ndcg_at_5
value: 60.181
- type: precision_at_1
value: 47.699999999999996
- type: precision_at_10
value: 7.8
- type: precision_at_100
value: 0.919
- type: precision_at_1000
value: 0.097
- type: precision_at_3
value: 21.532999999999998
- type: precision_at_5
value: 14.16
- type: recall_at_1
value: 47.699999999999996
- type: recall_at_10
value: 78.0
- type: recall_at_100
value: 91.9
- type: recall_at_1000
value: 97.3
- type: recall_at_3
value: 64.60000000000001
- type: recall_at_5
value: 70.8
task:
type: Retrieval
- dataset:
config: default
name: MTEB IFlyTek
revision: 421605374b29664c5fc098418fe20ada9bd55f8a
split: validation
type: C-MTEB/IFlyTek-classification
metrics:
- type: accuracy
value: 44.84801846864178
- type: f1
value: 37.47347897956339
task:
type: Classification
- dataset:
config: default
name: MTEB JDReview
revision: b7c64bd89eb87f8ded463478346f76731f07bf8b
split: test
type: C-MTEB/JDReview-classification
metrics:
- type: accuracy
value: 85.81613508442777
- type: ap
value: 52.68244615477374
- type: f1
value: 80.0445640948843
task:
type: Classification
- dataset:
config: default
name: MTEB LCQMC
revision: 17f9b096f80380fce5ed12a9be8be7784b337daf
split: test
type: C-MTEB/LCQMC
metrics:
- type: cos_sim_pearson
value: 69.57786502217138
- type: cos_sim_spearman
value: 75.39106054489906
- type: euclidean_pearson
value: 73.72082954602402
- type: euclidean_spearman
value: 75.14421475913619
- type: manhattan_pearson
value: 73.62463076633642
- type: manhattan_spearman
value: 75.01301565104112
task:
type: STS
- dataset:
config: default
name: MTEB MMarcoReranking
revision: None
split: dev
type: C-MTEB/Mmarco-reranking
metrics:
- type: map
value: 29.143797057999134
- type: mrr
value: 28.08174603174603
task:
type: Reranking
- dataset:
config: default
name: MTEB MMarcoRetrieval
revision: 539bbde593d947e2a124ba72651aafc09eb33fc2
split: dev
type: C-MTEB/MMarcoRetrieval
metrics:
- type: map_at_1
value: 70.492
- type: map_at_10
value: 79.501
- type: map_at_100
value: 79.728
- type: map_at_1000
value: 79.735
- type: map_at_3
value: 77.77
- type: map_at_5
value: 78.851
- type: mrr_at_1
value: 72.822
- type: mrr_at_10
value: 80.001
- type: mrr_at_100
value: 80.19
- type: mrr_at_1000
value: 80.197
- type: mrr_at_3
value: 78.484
- type: mrr_at_5
value: 79.42099999999999
- type: ndcg_at_1
value: 72.822
- type: ndcg_at_10
value: 83.013
- type: ndcg_at_100
value: 84.013
- type: ndcg_at_1000
value: 84.20400000000001
- type: ndcg_at_3
value: 79.728
- type: ndcg_at_5
value: 81.542
- type: precision_at_1
value: 72.822
- type: precision_at_10
value: 9.917
- type: precision_at_100
value: 1.042
- type: precision_at_1000
value: 0.106
- type: precision_at_3
value: 29.847
- type: precision_at_5
value: 18.871
- type: recall_at_1
value: 70.492
- type: recall_at_10
value: 93.325
- type: recall_at_100
value: 97.822
- type: recall_at_1000
value: 99.319
- type: recall_at_3
value: 84.636
- type: recall_at_5
value: 88.93100000000001
task:
type: Retrieval
- dataset:
config: zh-CN
name: MTEB MassiveIntentClassification (zh-CN)
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 76.88298587760592
- type: f1
value: 73.89001762017176
task:
type: Classification
- dataset:
config: zh-CN
name: MTEB MassiveScenarioClassification (zh-CN)
revision: 7d571f92784cd94a019292a1f45445077d0ef634
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 80.76328177538669
- type: f1
value: 80.24718532423358
task:
type: Classification
- dataset:
config: default
name: MTEB MedicalRetrieval
revision: 2039188fb5800a9803ba5048df7b76e6fb151fc6
split: dev
type: C-MTEB/MedicalRetrieval
metrics:
- type: map_at_1
value: 49.6
- type: map_at_10
value: 55.620999999999995
- type: map_at_100
value: 56.204
- type: map_at_1000
value: 56.251
- type: map_at_3
value: 54.132999999999996
- type: map_at_5
value: 54.933
- type: mrr_at_1
value: 49.7
- type: mrr_at_10
value: 55.67100000000001
- type: mrr_at_100
value: 56.254000000000005
- type: mrr_at_1000
value: 56.301
- type: mrr_at_3
value: 54.18300000000001
- type: mrr_at_5
value: 54.983000000000004
- type: ndcg_at_1
value: 49.6
- type: ndcg_at_10
value: 58.645
- type: ndcg_at_100
value: 61.789
- type: ndcg_at_1000
value: 63.219
- type: ndcg_at_3
value: 55.567
- type: ndcg_at_5
value: 57.008
- type: precision_at_1
value: 49.6
- type: precision_at_10
value: 6.819999999999999
- type: precision_at_100
value: 0.836
- type: precision_at_1000
value: 0.095
- type: precision_at_3
value: 19.900000000000002
- type: precision_at_5
value: 12.64
- type: recall_at_1
value: 49.6
- type: recall_at_10
value: 68.2
- type: recall_at_100
value: 83.6
- type: recall_at_1000
value: 95.3
- type: recall_at_3
value: 59.699999999999996
- type: recall_at_5
value: 63.2
task:
type: Retrieval
- dataset:
config: default
name: MTEB MultilingualSentiment
revision: 46958b007a63fdbf239b7672c25d0bea67b5ea1a
split: validation
type: C-MTEB/MultilingualSentiment-classification
metrics:
- type: accuracy
value: 74.45666666666666
- type: f1
value: 74.32582402190089
task:
type: Classification
- dataset:
config: default
name: MTEB Ocnli
revision: 66e76a618a34d6d565d5538088562851e6daa7ec
split: validation
type: C-MTEB/OCNLI
metrics:
- type: cos_sim_accuracy
value: 80.67135896047645
- type: cos_sim_ap
value: 87.60421240712051
- type: cos_sim_f1
value: 82.1304131408661
- type: cos_sim_precision
value: 77.68361581920904
- type: cos_sim_recall
value: 87.11721224920802
- type: dot_accuracy
value: 79.04710341093666
- type: dot_ap
value: 85.6370059719336
- type: dot_f1
value: 80.763723150358
- type: dot_precision
value: 73.69337979094077
- type: dot_recall
value: 89.33474128827878
- type: euclidean_accuracy
value: 81.05035192203573
- type: euclidean_ap
value: 87.7880240053663
- type: euclidean_f1
value: 82.50244379276637
- type: euclidean_precision
value: 76.7970882620564
- type: euclidean_recall
value: 89.1235480464625
- type: manhattan_accuracy
value: 80.61721710882512
- type: manhattan_ap
value: 87.43568120591175
- type: manhattan_f1
value: 81.89526184538653
- type: manhattan_precision
value: 77.5992438563327
- type: manhattan_recall
value: 86.6948257655755
- type: max_accuracy
value: 81.05035192203573
- type: max_ap
value: 87.7880240053663
- type: max_f1
value: 82.50244379276637
task:
type: PairClassification
- dataset:
config: default
name: MTEB OnlineShopping
revision: e610f2ebd179a8fda30ae534c3878750a96db120
split: test
type: C-MTEB/OnlineShopping-classification
metrics:
- type: accuracy
value: 93.5
- type: ap
value: 91.31357903446782
- type: f1
value: 93.48088994006616
task:
type: Classification
- dataset:
config: default
name: MTEB PAWSX
revision: 9c6a90e430ac22b5779fb019a23e820b11a8b5e1
split: test
type: C-MTEB/PAWSX
metrics:
- type: cos_sim_pearson
value: 36.93293453538077
- type: cos_sim_spearman
value: 42.45972506308574
- type: euclidean_pearson
value: 42.34945133152159
- type: euclidean_spearman
value: 42.331610303674644
- type: manhattan_pearson
value: 42.31455070249498
- type: manhattan_spearman
value: 42.19887982891834
task:
type: STS
- dataset:
config: default
name: MTEB QBQTC
revision: 790b0510dc52b1553e8c49f3d2afb48c0e5c48b7
split: test
type: C-MTEB/QBQTC
metrics:
- type: cos_sim_pearson
value: 33.683290790043785
- type: cos_sim_spearman
value: 35.149171171202994
- type: euclidean_pearson
value: 32.33806561267862
- type: euclidean_spearman
value: 34.483576387347966
- type: manhattan_pearson
value: 32.47629754599608
- type: manhattan_spearman
value: 34.66434471867615
task:
type: STS
- dataset:
config: zh
name: MTEB STS22 (zh)
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
split: test
type: mteb/sts22-crosslingual-sts
metrics:
- type: cos_sim_pearson
value: 66.46322760516104
- type: cos_sim_spearman
value: 67.398478319726
- type: euclidean_pearson
value: 64.7223480293625
- type: euclidean_spearman
value: 66.83118568812951
- type: manhattan_pearson
value: 64.88440039828305
- type: manhattan_spearman
value: 66.80429458952257
task:
type: STS
- dataset:
config: default
name: MTEB STSB
revision: 0cde68302b3541bb8b3c340dc0644b0b745b3dc0
split: test
type: C-MTEB/STSB
metrics:
- type: cos_sim_pearson
value: 79.08991383232105
- type: cos_sim_spearman
value: 79.39715677296854
- type: euclidean_pearson
value: 78.63201279320496
- type: euclidean_spearman
value: 79.40262660785731
- type: manhattan_pearson
value: 78.98138363146906
- type: manhattan_spearman
value: 79.79968413014194
task:
type: STS
- dataset:
config: default
name: MTEB T2Reranking
revision: 76631901a18387f85eaa53e5450019b87ad58ef9
split: dev
type: C-MTEB/T2Reranking
metrics:
- type: map
value: 67.43289278789972
- type: mrr
value: 77.53012460908535
task:
type: Reranking
- dataset:
config: default
name: MTEB T2Retrieval
revision: 8731a845f1bf500a4f111cf1070785c793d10e64
split: dev
type: C-MTEB/T2Retrieval
metrics:
- type: map_at_1
value: 27.733999999999998
- type: map_at_10
value: 78.24799999999999
- type: map_at_100
value: 81.765
- type: map_at_1000
value: 81.824
- type: map_at_3
value: 54.92
- type: map_at_5
value: 67.61399999999999
- type: mrr_at_1
value: 90.527
- type: mrr_at_10
value: 92.843
- type: mrr_at_100
value: 92.927
- type: mrr_at_1000
value: 92.93
- type: mrr_at_3
value: 92.45100000000001
- type: mrr_at_5
value: 92.693
- type: ndcg_at_1
value: 90.527
- type: ndcg_at_10
value: 85.466
- type: ndcg_at_100
value: 88.846
- type: ndcg_at_1000
value: 89.415
- type: ndcg_at_3
value: 86.768
- type: ndcg_at_5
value: 85.46000000000001
- type: precision_at_1
value: 90.527
- type: precision_at_10
value: 42.488
- type: precision_at_100
value: 5.024
- type: precision_at_1000
value: 0.516
- type: precision_at_3
value: 75.907
- type: precision_at_5
value: 63.727000000000004
- type: recall_at_1
value: 27.733999999999998
- type: recall_at_10
value: 84.346
- type: recall_at_100
value: 95.536
- type: recall_at_1000
value: 98.42999999999999
- type: recall_at_3
value: 56.455
- type: recall_at_5
value: 70.755
task:
type: Retrieval
- dataset:
config: default
name: MTEB TNews
revision: 317f262bf1e6126357bbe89e875451e4b0938fe4
split: validation
type: C-MTEB/TNews-classification
metrics:
- type: accuracy
value: 49.952000000000005
- type: f1
value: 48.264617195258054
task:
type: Classification
- dataset:
config: default
name: MTEB ThuNewsClusteringP2P
revision: 5798586b105c0434e4f0fe5e767abe619442cf93
split: test
type: C-MTEB/ThuNewsClusteringP2P
metrics:
- type: v_measure
value: 68.23769904483508
task:
type: Clustering
- dataset:
config: default
name: MTEB ThuNewsClusteringS2S
revision: 8a8b2caeda43f39e13c4bc5bea0f8a667896e10d
split: test
type: C-MTEB/ThuNewsClusteringS2S
metrics:
- type: v_measure
value: 62.50294403136556
task:
type: Clustering
- dataset:
config: default
name: MTEB VideoRetrieval
revision: 58c2597a5943a2ba48f4668c3b90d796283c5639
split: dev
type: C-MTEB/VideoRetrieval
metrics:
- type: map_at_1
value: 54.0
- type: map_at_10
value: 63.668
- type: map_at_100
value: 64.217
- type: map_at_1000
value: 64.23100000000001
- type: map_at_3
value: 61.7
- type: map_at_5
value: 62.870000000000005
- type: mrr_at_1
value: 54.0
- type: mrr_at_10
value: 63.668
- type: mrr_at_100
value: 64.217
- type: mrr_at_1000
value: 64.23100000000001
- type: mrr_at_3
value: 61.7
- type: mrr_at_5
value: 62.870000000000005
- type: ndcg_at_1
value: 54.0
- type: ndcg_at_10
value: 68.11399999999999
- type: ndcg_at_100
value: 70.723
- type: ndcg_at_1000
value: 71.123
- type: ndcg_at_3
value: 64.074
- type: ndcg_at_5
value: 66.178
- type: precision_at_1
value: 54.0
- type: precision_at_10
value: 8.200000000000001
- type: precision_at_100
value: 0.941
- type: precision_at_1000
value: 0.097
- type: precision_at_3
value: 23.633000000000003
- type: precision_at_5
value: 15.2
- type: recall_at_1
value: 54.0
- type: recall_at_10
value: 82.0
- type: recall_at_100
value: 94.1
- type: recall_at_1000
value: 97.3
- type: recall_at_3
value: 70.89999999999999
- type: recall_at_5
value: 76.0
task:
type: Retrieval
- dataset:
config: default
name: MTEB Waimai
revision: 339287def212450dcaa9df8c22bf93e9980c7023
split: test
type: C-MTEB/waimai-classification
metrics:
- type: accuracy
value: 86.63000000000001
- type: ap
value: 69.99457882599567
- type: f1
value: 85.07735617998541
task:
type: Classification
- dataset:
config: default
name: MTEB 8TagsClustering
revision: None
split: test
type: PL-MTEB/8tags-clustering
metrics:
- type: v_measure
value: 44.594104491193555
task:
type: Clustering
- dataset:
config: default
name: MTEB AllegroReviews
revision: None
split: test
type: PL-MTEB/allegro-reviews
metrics:
- type: accuracy
value: 63.97614314115309
- type: f1
value: 52.15634261679283
task:
type: Classification
- dataset:
config: default
name: MTEB ArguAna-PL
revision: 63fc86750af76253e8c760fc9e534bbf24d260a2
split: test
type: clarin-knext/arguana-pl
metrics:
- type: map_at_1
value: 32.646
- type: map_at_10
value: 47.963
- type: map_at_100
value: 48.789
- type: map_at_1000
value: 48.797000000000004
- type: map_at_3
value: 43.196
- type: map_at_5
value: 46.016
- type: mrr_at_1
value: 33.073
- type: mrr_at_10
value: 48.126000000000005
- type: mrr_at_100
value: 48.946
- type: mrr_at_1000
value: 48.953
- type: mrr_at_3
value: 43.374
- type: mrr_at_5
value: 46.147
- type: ndcg_at_1
value: 32.646
- type: ndcg_at_10
value: 56.481
- type: ndcg_at_100
value: 59.922
- type: ndcg_at_1000
value: 60.07
- type: ndcg_at_3
value: 46.675
- type: ndcg_at_5
value: 51.76500000000001
- type: precision_at_1
value: 32.646
- type: precision_at_10
value: 8.371
- type: precision_at_100
value: 0.9860000000000001
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 18.919
- type: precision_at_5
value: 13.825999999999999
- type: recall_at_1
value: 32.646
- type: recall_at_10
value: 83.71300000000001
- type: recall_at_100
value: 98.578
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 56.757000000000005
- type: recall_at_5
value: 69.132
task:
type: Retrieval
- dataset:
config: default
name: MTEB CBD
revision: None
split: test
type: PL-MTEB/cbd
metrics:
- type: accuracy
value: 68.56
- type: ap
value: 23.310493680488513
- type: f1
value: 58.85369533105693
task:
type: Classification
- dataset:
config: default
name: MTEB CDSC-E
revision: None
split: test
type: PL-MTEB/cdsce-pairclassification
metrics:
- type: cos_sim_accuracy
value: 88.5
- type: cos_sim_ap
value: 72.42140924378361
- type: cos_sim_f1
value: 66.0919540229885
- type: cos_sim_precision
value: 72.78481012658227
- type: cos_sim_recall
value: 60.526315789473685
- type: dot_accuracy
value: 88.5
- type: dot_ap
value: 72.42140924378361
- type: dot_f1
value: 66.0919540229885
- type: dot_precision
value: 72.78481012658227
- type: dot_recall
value: 60.526315789473685
- type: euclidean_accuracy
value: 88.5
- type: euclidean_ap
value: 72.42140924378361
- type: euclidean_f1
value: 66.0919540229885
- type: euclidean_precision
value: 72.78481012658227
- type: euclidean_recall
value: 60.526315789473685
- type: manhattan_accuracy
value: 88.5
- type: manhattan_ap
value: 72.49745515311696
- type: manhattan_f1
value: 66.0968660968661
- type: manhattan_precision
value: 72.04968944099379
- type: manhattan_recall
value: 61.05263157894737
- type: max_accuracy
value: 88.5
- type: max_ap
value: 72.49745515311696
- type: max_f1
value: 66.0968660968661
task:
type: PairClassification
- dataset:
config: default
name: MTEB CDSC-R
revision: None
split: test
type: PL-MTEB/cdscr-sts
metrics:
- type: cos_sim_pearson
value: 90.32269765590145
- type: cos_sim_spearman
value: 89.73666311491672
- type: euclidean_pearson
value: 88.2933868516544
- type: euclidean_spearman
value: 89.73666311491672
- type: manhattan_pearson
value: 88.33474590219448
- type: manhattan_spearman
value: 89.8548364866583
task:
type: STS
- dataset:
config: default
name: MTEB DBPedia-PL
revision: 76afe41d9af165cc40999fcaa92312b8b012064a
split: test
type: clarin-knext/dbpedia-pl
metrics:
- type: map_at_1
value: 7.632999999999999
- type: map_at_10
value: 16.426
- type: map_at_100
value: 22.651
- type: map_at_1000
value: 24.372
- type: map_at_3
value: 11.706
- type: map_at_5
value: 13.529
- type: mrr_at_1
value: 60.75000000000001
- type: mrr_at_10
value: 68.613
- type: mrr_at_100
value: 69.001
- type: mrr_at_1000
value: 69.021
- type: mrr_at_3
value: 67.0
- type: mrr_at_5
value: 67.925
- type: ndcg_at_1
value: 49.875
- type: ndcg_at_10
value: 36.978
- type: ndcg_at_100
value: 40.031
- type: ndcg_at_1000
value: 47.566
- type: ndcg_at_3
value: 41.148
- type: ndcg_at_5
value: 38.702
- type: precision_at_1
value: 60.75000000000001
- type: precision_at_10
value: 29.7
- type: precision_at_100
value: 9.278
- type: precision_at_1000
value: 2.099
- type: precision_at_3
value: 44.0
- type: precision_at_5
value: 37.6
- type: recall_at_1
value: 7.632999999999999
- type: recall_at_10
value: 22.040000000000003
- type: recall_at_100
value: 44.024
- type: recall_at_1000
value: 67.848
- type: recall_at_3
value: 13.093
- type: recall_at_5
value: 15.973
task:
type: Retrieval
- dataset:
config: default
name: MTEB FiQA-PL
revision: 2e535829717f8bf9dc829b7f911cc5bbd4e6608e
split: test
type: clarin-knext/fiqa-pl
metrics:
- type: map_at_1
value: 15.473
- type: map_at_10
value: 24.579
- type: map_at_100
value: 26.387
- type: map_at_1000
value: 26.57
- type: map_at_3
value: 21.278
- type: map_at_5
value: 23.179
- type: mrr_at_1
value: 30.709999999999997
- type: mrr_at_10
value: 38.994
- type: mrr_at_100
value: 39.993
- type: mrr_at_1000
value: 40.044999999999995
- type: mrr_at_3
value: 36.342999999999996
- type: mrr_at_5
value: 37.846999999999994
- type: ndcg_at_1
value: 30.709999999999997
- type: ndcg_at_10
value: 31.608999999999998
- type: ndcg_at_100
value: 38.807
- type: ndcg_at_1000
value: 42.208
- type: ndcg_at_3
value: 28.086
- type: ndcg_at_5
value: 29.323
- type: precision_at_1
value: 30.709999999999997
- type: precision_at_10
value: 8.688
- type: precision_at_100
value: 1.608
- type: precision_at_1000
value: 0.22100000000000003
- type: precision_at_3
value: 18.724
- type: precision_at_5
value: 13.950999999999999
- type: recall_at_1
value: 15.473
- type: recall_at_10
value: 38.361000000000004
- type: recall_at_100
value: 65.2
- type: recall_at_1000
value: 85.789
- type: recall_at_3
value: 25.401
- type: recall_at_5
value: 30.875999999999998
task:
type: Retrieval
- dataset:
config: default
name: MTEB HotpotQA-PL
revision: a0bd479ac97b4ccb5bd6ce320c415d0bb4beb907
split: test
type: clarin-knext/hotpotqa-pl
metrics:
- type: map_at_1
value: 38.096000000000004
- type: map_at_10
value: 51.44499999999999
- type: map_at_100
value: 52.325
- type: map_at_1000
value: 52.397000000000006
- type: map_at_3
value: 48.626999999999995
- type: map_at_5
value: 50.342
- type: mrr_at_1
value: 76.19200000000001
- type: mrr_at_10
value: 81.191
- type: mrr_at_100
value: 81.431
- type: mrr_at_1000
value: 81.443
- type: mrr_at_3
value: 80.30199999999999
- type: mrr_at_5
value: 80.85900000000001
- type: ndcg_at_1
value: 76.19200000000001
- type: ndcg_at_10
value: 60.9
- type: ndcg_at_100
value: 64.14699999999999
- type: ndcg_at_1000
value: 65.647
- type: ndcg_at_3
value: 56.818000000000005
- type: ndcg_at_5
value: 59.019999999999996
- type: precision_at_1
value: 76.19200000000001
- type: precision_at_10
value: 12.203
- type: precision_at_100
value: 1.478
- type: precision_at_1000
value: 0.168
- type: precision_at_3
value: 34.616
- type: precision_at_5
value: 22.515
- type: recall_at_1
value: 38.096000000000004
- type: recall_at_10
value: 61.013
- type: recall_at_100
value: 73.90299999999999
- type: recall_at_1000
value: 83.91
- type: recall_at_3
value: 51.92400000000001
- type: recall_at_5
value: 56.286
task:
type: Retrieval
- dataset:
config: default
name: MTEB MSMARCO-PL
revision: 8634c07806d5cce3a6138e260e59b81760a0a640
split: test
type: clarin-knext/msmarco-pl
metrics:
- type: map_at_1
value: 1.548
- type: map_at_10
value: 11.049000000000001
- type: map_at_100
value: 28.874
- type: map_at_1000
value: 34.931
- type: map_at_3
value: 4.162
- type: map_at_5
value: 6.396
- type: mrr_at_1
value: 90.69800000000001
- type: mrr_at_10
value: 92.093
- type: mrr_at_100
value: 92.345
- type: mrr_at_1000
value: 92.345
- type: mrr_at_3
value: 91.86
- type: mrr_at_5
value: 91.86
- type: ndcg_at_1
value: 74.031
- type: ndcg_at_10
value: 63.978
- type: ndcg_at_100
value: 53.101
- type: ndcg_at_1000
value: 60.675999999999995
- type: ndcg_at_3
value: 71.421
- type: ndcg_at_5
value: 68.098
- type: precision_at_1
value: 90.69800000000001
- type: precision_at_10
value: 71.86
- type: precision_at_100
value: 31.395
- type: precision_at_1000
value: 5.981
- type: precision_at_3
value: 84.49600000000001
- type: precision_at_5
value: 79.07
- type: recall_at_1
value: 1.548
- type: recall_at_10
value: 12.149000000000001
- type: recall_at_100
value: 40.794999999999995
- type: recall_at_1000
value: 67.974
- type: recall_at_3
value: 4.244
- type: recall_at_5
value: 6.608
task:
type: Retrieval
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 73.55413584398119
- type: f1
value: 69.65610882318181
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: 7d571f92784cd94a019292a1f45445077d0ef634
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 76.37188971082716
- type: f1
value: 75.64847309941361
task:
type: Classification
- dataset:
config: default
name: MTEB NFCorpus-PL
revision: 9a6f9567fda928260afed2de480d79c98bf0bec0
split: test
type: clarin-knext/nfcorpus-pl
metrics:
- type: map_at_1
value: 4.919
- type: map_at_10
value: 10.834000000000001
- type: map_at_100
value: 13.38
- type: map_at_1000
value: 14.581
- type: map_at_3
value: 8.198
- type: map_at_5
value: 9.428
- type: mrr_at_1
value: 41.176
- type: mrr_at_10
value: 50.083
- type: mrr_at_100
value: 50.559
- type: mrr_at_1000
value: 50.604000000000006
- type: mrr_at_3
value: 47.936
- type: mrr_at_5
value: 49.407000000000004
- type: ndcg_at_1
value: 39.628
- type: ndcg_at_10
value: 30.098000000000003
- type: ndcg_at_100
value: 27.061
- type: ndcg_at_1000
value: 35.94
- type: ndcg_at_3
value: 35.135
- type: ndcg_at_5
value: 33.335
- type: precision_at_1
value: 41.176
- type: precision_at_10
value: 22.259999999999998
- type: precision_at_100
value: 6.712
- type: precision_at_1000
value: 1.9060000000000001
- type: precision_at_3
value: 33.23
- type: precision_at_5
value: 29.04
- type: recall_at_1
value: 4.919
- type: recall_at_10
value: 14.196
- type: recall_at_100
value: 26.948
- type: recall_at_1000
value: 59.211000000000006
- type: recall_at_3
value: 9.44
- type: recall_at_5
value: 11.569
task:
type: Retrieval
- dataset:
config: default
name: MTEB NQ-PL
revision: f171245712cf85dd4700b06bef18001578d0ca8d
split: test
type: clarin-knext/nq-pl
metrics:
- type: map_at_1
value: 25.35
- type: map_at_10
value: 37.884
- type: map_at_100
value: 38.955
- type: map_at_1000
value: 39.007999999999996
- type: map_at_3
value: 34.239999999999995
- type: map_at_5
value: 36.398
- type: mrr_at_1
value: 28.737000000000002
- type: mrr_at_10
value: 39.973
- type: mrr_at_100
value: 40.844
- type: mrr_at_1000
value: 40.885
- type: mrr_at_3
value: 36.901
- type: mrr_at_5
value: 38.721
- type: ndcg_at_1
value: 28.708
- type: ndcg_at_10
value: 44.204
- type: ndcg_at_100
value: 48.978
- type: ndcg_at_1000
value: 50.33
- type: ndcg_at_3
value: 37.36
- type: ndcg_at_5
value: 40.912
- type: precision_at_1
value: 28.708
- type: precision_at_10
value: 7.367
- type: precision_at_100
value: 1.0030000000000001
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 17.034
- type: precision_at_5
value: 12.293999999999999
- type: recall_at_1
value: 25.35
- type: recall_at_10
value: 61.411
- type: recall_at_100
value: 82.599
- type: recall_at_1000
value: 92.903
- type: recall_at_3
value: 43.728
- type: recall_at_5
value: 51.854
task:
type: Retrieval
- dataset:
config: default
name: MTEB PAC
revision: None
split: test
type: laugustyniak/abusive-clauses-pl
metrics:
- type: accuracy
value: 69.04141326382856
- type: ap
value: 77.49422763833996
- type: f1
value: 66.73472657783407
task:
type: Classification
- dataset:
config: default
name: MTEB PPC
revision: None
split: test
type: PL-MTEB/ppc-pairclassification
metrics:
- type: cos_sim_accuracy
value: 81.0
- type: cos_sim_ap
value: 91.47194213011349
- type: cos_sim_f1
value: 84.73767885532592
- type: cos_sim_precision
value: 81.49847094801224
- type: cos_sim_recall
value: 88.24503311258279
- type: dot_accuracy
value: 81.0
- type: dot_ap
value: 91.47194213011349
- type: dot_f1
value: 84.73767885532592
- type: dot_precision
value: 81.49847094801224
- type: dot_recall
value: 88.24503311258279
- type: euclidean_accuracy
value: 81.0
- type: euclidean_ap
value: 91.47194213011349
- type: euclidean_f1
value: 84.73767885532592
- type: euclidean_precision
value: 81.49847094801224
- type: euclidean_recall
value: 88.24503311258279
- type: manhattan_accuracy
value: 81.0
- type: manhattan_ap
value: 91.46464475050571
- type: manhattan_f1
value: 84.48687350835321
- type: manhattan_precision
value: 81.31699846860643
- type: manhattan_recall
value: 87.91390728476821
- type: max_accuracy
value: 81.0
- type: max_ap
value: 91.47194213011349
- type: max_f1
value: 84.73767885532592
task:
type: PairClassification
- dataset:
config: default
name: MTEB PSC
revision: None
split: test
type: PL-MTEB/psc-pairclassification
metrics:
- type: cos_sim_accuracy
value: 97.6808905380334
- type: cos_sim_ap
value: 99.27948611836348
- type: cos_sim_f1
value: 96.15975422427034
- type: cos_sim_precision
value: 96.90402476780186
- type: cos_sim_recall
value: 95.42682926829268
- type: dot_accuracy
value: 97.6808905380334
- type: dot_ap
value: 99.2794861183635
- type: dot_f1
value: 96.15975422427034
- type: dot_precision
value: 96.90402476780186
- type: dot_recall
value: 95.42682926829268
- type: euclidean_accuracy
value: 97.6808905380334
- type: euclidean_ap
value: 99.2794861183635
- type: euclidean_f1
value: 96.15975422427034
- type: euclidean_precision
value: 96.90402476780186
- type: euclidean_recall
value: 95.42682926829268
- type: manhattan_accuracy
value: 97.6808905380334
- type: manhattan_ap
value: 99.28715055268721
- type: manhattan_f1
value: 96.14791987673343
- type: manhattan_precision
value: 97.19626168224299
- type: manhattan_recall
value: 95.1219512195122
- type: max_accuracy
value: 97.6808905380334
- type: max_ap
value: 99.28715055268721
- type: max_f1
value: 96.15975422427034
task:
type: PairClassification
- dataset:
config: default
name: MTEB PolEmo2.0-IN
revision: None
split: test
type: PL-MTEB/polemo2_in
metrics:
- type: accuracy
value: 86.16343490304708
- type: f1
value: 83.3442579486744
task:
type: Classification
- dataset:
config: default
name: MTEB PolEmo2.0-OUT
revision: None
split: test
type: PL-MTEB/polemo2_out
metrics:
- type: accuracy
value: 68.40080971659918
- type: f1
value: 53.13720751142237
task:
type: Classification
- dataset:
config: default
name: MTEB Quora-PL
revision: 0be27e93455051e531182b85e85e425aba12e9d4
split: test
type: clarin-knext/quora-pl
metrics:
- type: map_at_1
value: 63.322
- type: map_at_10
value: 76.847
- type: map_at_100
value: 77.616
- type: map_at_1000
value: 77.644
- type: map_at_3
value: 73.624
- type: map_at_5
value: 75.603
- type: mrr_at_1
value: 72.88
- type: mrr_at_10
value: 80.376
- type: mrr_at_100
value: 80.604
- type: mrr_at_1000
value: 80.61
- type: mrr_at_3
value: 78.92
- type: mrr_at_5
value: 79.869
- type: ndcg_at_1
value: 72.89999999999999
- type: ndcg_at_10
value: 81.43
- type: ndcg_at_100
value: 83.394
- type: ndcg_at_1000
value: 83.685
- type: ndcg_at_3
value: 77.62599999999999
- type: ndcg_at_5
value: 79.656
- type: precision_at_1
value: 72.89999999999999
- type: precision_at_10
value: 12.548
- type: precision_at_100
value: 1.4869999999999999
- type: precision_at_1000
value: 0.155
- type: precision_at_3
value: 34.027
- type: precision_at_5
value: 22.654
- type: recall_at_1
value: 63.322
- type: recall_at_10
value: 90.664
- type: recall_at_100
value: 97.974
- type: recall_at_1000
value: 99.636
- type: recall_at_3
value: 80.067
- type: recall_at_5
value: 85.526
task:
type: Retrieval
- dataset:
config: default
name: MTEB SCIDOCS-PL
revision: 45452b03f05560207ef19149545f168e596c9337
split: test
type: clarin-knext/scidocs-pl
metrics:
- type: map_at_1
value: 3.95
- type: map_at_10
value: 9.658999999999999
- type: map_at_100
value: 11.384
- type: map_at_1000
value: 11.677
- type: map_at_3
value: 7.055
- type: map_at_5
value: 8.244
- type: mrr_at_1
value: 19.5
- type: mrr_at_10
value: 28.777
- type: mrr_at_100
value: 29.936
- type: mrr_at_1000
value: 30.009999999999998
- type: mrr_at_3
value: 25.55
- type: mrr_at_5
value: 27.284999999999997
- type: ndcg_at_1
value: 19.5
- type: ndcg_at_10
value: 16.589000000000002
- type: ndcg_at_100
value: 23.879
- type: ndcg_at_1000
value: 29.279
- type: ndcg_at_3
value: 15.719
- type: ndcg_at_5
value: 13.572000000000001
- type: precision_at_1
value: 19.5
- type: precision_at_10
value: 8.62
- type: precision_at_100
value: 1.924
- type: precision_at_1000
value: 0.322
- type: precision_at_3
value: 14.6
- type: precision_at_5
value: 11.78
- type: recall_at_1
value: 3.95
- type: recall_at_10
value: 17.477999999999998
- type: recall_at_100
value: 38.99
- type: recall_at_1000
value: 65.417
- type: recall_at_3
value: 8.883000000000001
- type: recall_at_5
value: 11.933
task:
type: Retrieval
- dataset:
config: default
name: MTEB SICK-E-PL
revision: None
split: test
type: PL-MTEB/sicke-pl-pairclassification
metrics:
- type: cos_sim_accuracy
value: 83.48960456583775
- type: cos_sim_ap
value: 76.31522115825375
- type: cos_sim_f1
value: 70.35573122529645
- type: cos_sim_precision
value: 70.9934735315446
- type: cos_sim_recall
value: 69.72934472934473
- type: dot_accuracy
value: 83.48960456583775
- type: dot_ap
value: 76.31522115825373
- type: dot_f1
value: 70.35573122529645
- type: dot_precision
value: 70.9934735315446
- type: dot_recall
value: 69.72934472934473
- type: euclidean_accuracy
value: 83.48960456583775
- type: euclidean_ap
value: 76.31522115825373
- type: euclidean_f1
value: 70.35573122529645
- type: euclidean_precision
value: 70.9934735315446
- type: euclidean_recall
value: 69.72934472934473
- type: manhattan_accuracy
value: 83.46922136159804
- type: manhattan_ap
value: 76.18474601388084
- type: manhattan_f1
value: 70.34779490856937
- type: manhattan_precision
value: 70.83032490974729
- type: manhattan_recall
value: 69.87179487179486
- type: max_accuracy
value: 83.48960456583775
- type: max_ap
value: 76.31522115825375
- type: max_f1
value: 70.35573122529645
task:
type: PairClassification
- dataset:
config: default
name: MTEB SICK-R-PL
revision: None
split: test
type: PL-MTEB/sickr-pl-sts
metrics:
- type: cos_sim_pearson
value: 77.95374883876302
- type: cos_sim_spearman
value: 73.77630219171942
- type: euclidean_pearson
value: 75.81927069594934
- type: euclidean_spearman
value: 73.7763211303831
- type: manhattan_pearson
value: 76.03126859057528
- type: manhattan_spearman
value: 73.96528138013369
task:
type: STS
- dataset:
config: pl
name: MTEB STS22 (pl)
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
split: test
type: mteb/sts22-crosslingual-sts
metrics:
- type: cos_sim_pearson
value: 37.388282764841826
- type: cos_sim_spearman
value: 40.83477184710897
- type: euclidean_pearson
value: 26.754737044177805
- type: euclidean_spearman
value: 40.83477184710897
- type: manhattan_pearson
value: 26.760453110872458
- type: manhattan_spearman
value: 41.034477441383856
task:
type: STS
- dataset:
config: default
name: MTEB SciFact-PL
revision: 47932a35f045ef8ed01ba82bf9ff67f6e109207e
split: test
type: clarin-knext/scifact-pl
metrics:
- type: map_at_1
value: 49.15
- type: map_at_10
value: 61.690999999999995
- type: map_at_100
value: 62.348000000000006
- type: map_at_1000
value: 62.38
- type: map_at_3
value: 58.824
- type: map_at_5
value: 60.662000000000006
- type: mrr_at_1
value: 51.333
- type: mrr_at_10
value: 62.731
- type: mrr_at_100
value: 63.245
- type: mrr_at_1000
value: 63.275000000000006
- type: mrr_at_3
value: 60.667
- type: mrr_at_5
value: 61.93300000000001
- type: ndcg_at_1
value: 51.333
- type: ndcg_at_10
value: 67.168
- type: ndcg_at_100
value: 69.833
- type: ndcg_at_1000
value: 70.56700000000001
- type: ndcg_at_3
value: 62.40599999999999
- type: ndcg_at_5
value: 65.029
- type: precision_at_1
value: 51.333
- type: precision_at_10
value: 9.333
- type: precision_at_100
value: 1.0699999999999998
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 25.333
- type: precision_at_5
value: 17.067
- type: recall_at_1
value: 49.15
- type: recall_at_10
value: 82.533
- type: recall_at_100
value: 94.167
- type: recall_at_1000
value: 99.667
- type: recall_at_3
value: 69.917
- type: recall_at_5
value: 76.356
task:
type: Retrieval
- dataset:
config: default
name: MTEB TRECCOVID-PL
revision: 81bcb408f33366c2a20ac54adafad1ae7e877fdd
split: test
type: clarin-knext/trec-covid-pl
metrics:
- type: map_at_1
value: 0.261
- type: map_at_10
value: 2.1260000000000003
- type: map_at_100
value: 12.171999999999999
- type: map_at_1000
value: 26.884999999999998
- type: map_at_3
value: 0.695
- type: map_at_5
value: 1.134
- type: mrr_at_1
value: 96.0
- type: mrr_at_10
value: 96.952
- type: mrr_at_100
value: 96.952
- type: mrr_at_1000
value: 96.952
- type: mrr_at_3
value: 96.667
- type: mrr_at_5
value: 96.667
- type: ndcg_at_1
value: 92.0
- type: ndcg_at_10
value: 81.193
- type: ndcg_at_100
value: 61.129
- type: ndcg_at_1000
value: 51.157
- type: ndcg_at_3
value: 85.693
- type: ndcg_at_5
value: 84.129
- type: precision_at_1
value: 96.0
- type: precision_at_10
value: 85.39999999999999
- type: precision_at_100
value: 62.03999999999999
- type: precision_at_1000
value: 22.224
- type: precision_at_3
value: 88.0
- type: precision_at_5
value: 88.0
- type: recall_at_1
value: 0.261
- type: recall_at_10
value: 2.262
- type: recall_at_100
value: 14.981
- type: recall_at_1000
value: 46.837
- type: recall_at_3
value: 0.703
- type: recall_at_5
value: 1.172
task:
type: Retrieval
- dataset:
config: default
name: MTEB AlloProfClusteringP2P
revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b
split: test
type: lyon-nlp/alloprof
metrics:
- type: v_measure
value: 70.55290063940157
task:
type: Clustering
- dataset:
config: default
name: MTEB AlloProfClusteringS2S
revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b
split: test
type: lyon-nlp/alloprof
metrics:
- type: v_measure
value: 55.41500719337263
task:
type: Clustering
- dataset:
config: default
name: MTEB AlloprofReranking
revision: 666fdacebe0291776e86f29345663dfaf80a0db9
split: test
type: lyon-nlp/mteb-fr-reranking-alloprof-s2p
metrics:
- type: map
value: 73.48697375332002
- type: mrr
value: 75.01836585523822
task:
type: Reranking
- dataset:
config: default
name: MTEB AlloprofRetrieval
revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b
split: test
type: lyon-nlp/alloprof
metrics:
- type: map_at_1
value: 38.454
- type: map_at_10
value: 51.605000000000004
- type: map_at_100
value: 52.653000000000006
- type: map_at_1000
value: 52.697
- type: map_at_3
value: 48.304
- type: map_at_5
value: 50.073
- type: mrr_at_1
value: 43.307
- type: mrr_at_10
value: 54.400000000000006
- type: mrr_at_100
value: 55.147999999999996
- type: mrr_at_1000
value: 55.174
- type: mrr_at_3
value: 51.77
- type: mrr_at_5
value: 53.166999999999994
- type: ndcg_at_1
value: 43.307
- type: ndcg_at_10
value: 57.891000000000005
- type: ndcg_at_100
value: 62.161
- type: ndcg_at_1000
value: 63.083
- type: ndcg_at_3
value: 51.851
- type: ndcg_at_5
value: 54.605000000000004
- type: precision_at_1
value: 43.307
- type: precision_at_10
value: 9.033
- type: precision_at_100
value: 1.172
- type: precision_at_1000
value: 0.127
- type: precision_at_3
value: 22.798
- type: precision_at_5
value: 15.492
- type: recall_at_1
value: 38.454
- type: recall_at_10
value: 74.166
- type: recall_at_100
value: 92.43599999999999
- type: recall_at_1000
value: 99.071
- type: recall_at_3
value: 58.087
- type: recall_at_5
value: 64.568
task:
type: Retrieval
- dataset:
config: fr
name: MTEB AmazonReviewsClassification (fr)
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
split: test
type: mteb/amazon_reviews_multi
metrics:
- type: accuracy
value: 53.474
- type: f1
value: 50.38275392350236
task:
type: Classification
- dataset:
config: default
name: MTEB BSARDRetrieval
revision: 5effa1b9b5fa3b0f9e12523e6e43e5f86a6e6d59
split: test
type: maastrichtlawtech/bsard
metrics:
- type: map_at_1
value: 2.252
- type: map_at_10
value: 4.661
- type: map_at_100
value: 5.271
- type: map_at_1000
value: 5.3629999999999995
- type: map_at_3
value: 3.604
- type: map_at_5
value: 4.3020000000000005
- type: mrr_at_1
value: 2.252
- type: mrr_at_10
value: 4.661
- type: mrr_at_100
value: 5.271
- type: mrr_at_1000
value: 5.3629999999999995
- type: mrr_at_3
value: 3.604
- type: mrr_at_5
value: 4.3020000000000005
- type: ndcg_at_1
value: 2.252
- type: ndcg_at_10
value: 6.3020000000000005
- type: ndcg_at_100
value: 10.342
- type: ndcg_at_1000
value: 13.475999999999999
- type: ndcg_at_3
value: 4.0649999999999995
- type: ndcg_at_5
value: 5.344
- type: precision_at_1
value: 2.252
- type: precision_at_10
value: 1.171
- type: precision_at_100
value: 0.333
- type: precision_at_1000
value: 0.059000000000000004
- type: precision_at_3
value: 1.802
- type: precision_at_5
value: 1.712
- type: recall_at_1
value: 2.252
- type: recall_at_10
value: 11.712
- type: recall_at_100
value: 33.333
- type: recall_at_1000
value: 59.458999999999996
- type: recall_at_3
value: 5.405
- type: recall_at_5
value: 8.559
task:
type: Retrieval
- dataset:
config: default
name: MTEB HALClusteringS2S
revision: e06ebbbb123f8144bef1a5d18796f3dec9ae2915
split: test
type: lyon-nlp/clustering-hal-s2s
metrics:
- type: v_measure
value: 28.301882091023288
task:
type: Clustering
- dataset:
config: default
name: MTEB MLSUMClusteringP2P
revision: b5d54f8f3b61ae17845046286940f03c6bc79bc7
split: test
type: mlsum
metrics:
- type: v_measure
value: 45.26992995191701
task:
type: Clustering
- dataset:
config: default
name: MTEB MLSUMClusteringS2S
revision: b5d54f8f3b61ae17845046286940f03c6bc79bc7
split: test
type: mlsum
metrics:
- type: v_measure
value: 42.773174876871145
task:
type: Clustering
- dataset:
config: fr
name: MTEB MTOPDomainClassification (fr)
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
split: test
type: mteb/mtop_domain
metrics:
- type: accuracy
value: 93.47635452552458
- type: f1
value: 93.19922617577213
task:
type: Classification
- dataset:
config: fr
name: MTEB MTOPIntentClassification (fr)
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
split: test
type: mteb/mtop_intent
metrics:
- type: accuracy
value: 80.2317569683683
- type: f1
value: 56.18060418621901
task:
type: Classification
- dataset:
config: fra
name: MTEB MasakhaNEWSClassification (fra)
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
split: test
type: masakhane/masakhanews
metrics:
- type: accuracy
value: 85.18957345971565
- type: f1
value: 80.829981537394
task:
type: Classification
- dataset:
config: fra
name: MTEB MasakhaNEWSClusteringP2P (fra)
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
split: test
type: masakhane/masakhanews
metrics:
- type: v_measure
value: 71.04138999801822
task:
type: Clustering
- dataset:
config: fra
name: MTEB MasakhaNEWSClusteringS2S (fra)
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
split: test
type: masakhane/masakhanews
metrics:
- type: v_measure
value: 71.7056263158008
task:
type: Clustering
- dataset:
config: fr
name: MTEB MassiveIntentClassification (fr)
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 76.65097511768661
- type: f1
value: 73.82441070598712
task:
type: Classification
- dataset:
config: fr
name: MTEB MassiveScenarioClassification (fr)
revision: 7d571f92784cd94a019292a1f45445077d0ef634
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 79.09885675857431
- type: f1
value: 78.28407777434224
task:
type: Classification
- dataset:
config: fr
name: MTEB MintakaRetrieval (fr)
revision: efa78cc2f74bbcd21eff2261f9e13aebe40b814e
split: test
type: jinaai/mintakaqa
metrics:
- type: map_at_1
value: 25.307000000000002
- type: map_at_10
value: 36.723
- type: map_at_100
value: 37.713
- type: map_at_1000
value: 37.769000000000005
- type: map_at_3
value: 33.77
- type: map_at_5
value: 35.463
- type: mrr_at_1
value: 25.307000000000002
- type: mrr_at_10
value: 36.723
- type: mrr_at_100
value: 37.713
- type: mrr_at_1000
value: 37.769000000000005
- type: mrr_at_3
value: 33.77
- type: mrr_at_5
value: 35.463
- type: ndcg_at_1
value: 25.307000000000002
- type: ndcg_at_10
value: 42.559999999999995
- type: ndcg_at_100
value: 47.457
- type: ndcg_at_1000
value: 49.162
- type: ndcg_at_3
value: 36.461
- type: ndcg_at_5
value: 39.504
- type: precision_at_1
value: 25.307000000000002
- type: precision_at_10
value: 6.106
- type: precision_at_100
value: 0.8420000000000001
- type: precision_at_1000
value: 0.098
- type: precision_at_3
value: 14.741999999999999
- type: precision_at_5
value: 10.319
- type: recall_at_1
value: 25.307000000000002
- type: recall_at_10
value: 61.056999999999995
- type: recall_at_100
value: 84.152
- type: recall_at_1000
value: 98.03399999999999
- type: recall_at_3
value: 44.226
- type: recall_at_5
value: 51.597
task:
type: Retrieval
- dataset:
config: fr
name: MTEB OpusparcusPC (fr)
revision: 9e9b1f8ef51616073f47f306f7f47dd91663f86a
split: test
type: GEM/opusparcus
metrics:
- type: cos_sim_accuracy
value: 99.90069513406156
- type: cos_sim_ap
value: 100.0
- type: cos_sim_f1
value: 99.95032290114257
- type: cos_sim_precision
value: 100.0
- type: cos_sim_recall
value: 99.90069513406156
- type: dot_accuracy
value: 99.90069513406156
- type: dot_ap
value: 100.0
- type: dot_f1
value: 99.95032290114257
- type: dot_precision
value: 100.0
- type: dot_recall
value: 99.90069513406156
- type: euclidean_accuracy
value: 99.90069513406156
- type: euclidean_ap
value: 100.0
- type: euclidean_f1
value: 99.95032290114257
- type: euclidean_precision
value: 100.0
- type: euclidean_recall
value: 99.90069513406156
- type: manhattan_accuracy
value: 99.90069513406156
- type: manhattan_ap
value: 100.0
- type: manhattan_f1
value: 99.95032290114257
- type: manhattan_precision
value: 100.0
- type: manhattan_recall
value: 99.90069513406156
- type: max_accuracy
value: 99.90069513406156
- type: max_ap
value: 100.0
- type: max_f1
value: 99.95032290114257
task:
type: PairClassification
- dataset:
config: fr
name: MTEB PawsX (fr)
revision: 8a04d940a42cd40658986fdd8e3da561533a3646
split: test
type: paws-x
metrics:
- type: cos_sim_accuracy
value: 70.8
- type: cos_sim_ap
value: 73.7671529695957
- type: cos_sim_f1
value: 68.80964339527875
- type: cos_sim_precision
value: 62.95955882352941
- type: cos_sim_recall
value: 75.85825027685493
- type: dot_accuracy
value: 70.8
- type: dot_ap
value: 73.78345265366947
- type: dot_f1
value: 68.80964339527875
- type: dot_precision
value: 62.95955882352941
- type: dot_recall
value: 75.85825027685493
- type: euclidean_accuracy
value: 70.8
- type: euclidean_ap
value: 73.7671529695957
- type: euclidean_f1
value: 68.80964339527875
- type: euclidean_precision
value: 62.95955882352941
- type: euclidean_recall
value: 75.85825027685493
- type: manhattan_accuracy
value: 70.75
- type: manhattan_ap
value: 73.78996383615953
- type: manhattan_f1
value: 68.79432624113475
- type: manhattan_precision
value: 63.39869281045751
- type: manhattan_recall
value: 75.1937984496124
- type: max_accuracy
value: 70.8
- type: max_ap
value: 73.78996383615953
- type: max_f1
value: 68.80964339527875
task:
type: PairClassification
- dataset:
config: default
name: MTEB SICKFr
revision: e077ab4cf4774a1e36d86d593b150422fafd8e8a
split: test
type: Lajavaness/SICK-fr
metrics:
- type: cos_sim_pearson
value: 84.03253762760392
- type: cos_sim_spearman
value: 79.68280105762004
- type: euclidean_pearson
value: 80.98265050044444
- type: euclidean_spearman
value: 79.68233242682867
- type: manhattan_pearson
value: 80.9678911810704
- type: manhattan_spearman
value: 79.70264097683109
task:
type: STS
- dataset:
config: fr
name: MTEB STS22 (fr)
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
split: test
type: mteb/sts22-crosslingual-sts
metrics:
- type: cos_sim_pearson
value: 80.56896987572884
- type: cos_sim_spearman
value: 81.84352499523287
- type: euclidean_pearson
value: 80.40831759421305
- type: euclidean_spearman
value: 81.84352499523287
- type: manhattan_pearson
value: 80.74333857561238
- type: manhattan_spearman
value: 82.41503246733892
task:
type: STS
- dataset:
config: fr
name: MTEB STSBenchmarkMultilingualSTS (fr)
revision: 93d57ef91790589e3ce9c365164337a8a78b7632
split: test
type: stsb_multi_mt
metrics:
- type: cos_sim_pearson
value: 82.71826762276979
- type: cos_sim_spearman
value: 82.25433354916042
- type: euclidean_pearson
value: 81.87115571724316
- type: euclidean_spearman
value: 82.25322342890107
- type: manhattan_pearson
value: 82.11174867527224
- type: manhattan_spearman
value: 82.55905365203084
task:
type: STS
- dataset:
config: default
name: MTEB SummEvalFr
revision: b385812de6a9577b6f4d0f88c6a6e35395a94054
split: test
type: lyon-nlp/summarization-summeval-fr-p2p
metrics:
- type: cos_sim_pearson
value: 30.659441623392887
- type: cos_sim_spearman
value: 30.501134097353315
- type: dot_pearson
value: 30.659444768851056
- type: dot_spearman
value: 30.501134097353315
task:
type: Summarization
- dataset:
config: default
name: MTEB SyntecReranking
revision: b205c5084a0934ce8af14338bf03feb19499c84d
split: test
type: lyon-nlp/mteb-fr-reranking-syntec-s2p
metrics:
- type: map
value: 94.03333333333333
- type: mrr
value: 94.03333333333333
task:
type: Reranking
- dataset:
config: default
name: MTEB SyntecRetrieval
revision: 77f7e271bf4a92b24fce5119f3486b583ca016ff
split: test
type: lyon-nlp/mteb-fr-retrieval-syntec-s2p
metrics:
- type: map_at_1
value: 79.0
- type: map_at_10
value: 87.61
- type: map_at_100
value: 87.655
- type: map_at_1000
value: 87.655
- type: map_at_3
value: 87.167
- type: map_at_5
value: 87.36699999999999
- type: mrr_at_1
value: 79.0
- type: mrr_at_10
value: 87.61
- type: mrr_at_100
value: 87.655
- type: mrr_at_1000
value: 87.655
- type: mrr_at_3
value: 87.167
- type: mrr_at_5
value: 87.36699999999999
- type: ndcg_at_1
value: 79.0
- type: ndcg_at_10
value: 90.473
- type: ndcg_at_100
value: 90.694
- type: ndcg_at_1000
value: 90.694
- type: ndcg_at_3
value: 89.464
- type: ndcg_at_5
value: 89.851
- type: precision_at_1
value: 79.0
- type: precision_at_10
value: 9.9
- type: precision_at_100
value: 1.0
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 32.0
- type: precision_at_5
value: 19.400000000000002
- type: recall_at_1
value: 79.0
- type: recall_at_10
value: 99.0
- type: recall_at_100
value: 100.0
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 96.0
- type: recall_at_5
value: 97.0
task:
type: Retrieval
- dataset:
config: fr
name: MTEB XPQARetrieval (fr)
revision: c99d599f0a6ab9b85b065da6f9d94f9cf731679f
split: test
type: jinaai/xpqa
metrics:
- type: map_at_1
value: 39.395
- type: map_at_10
value: 59.123999999999995
- type: map_at_100
value: 60.704
- type: map_at_1000
value: 60.760000000000005
- type: map_at_3
value: 53.187
- type: map_at_5
value: 56.863
- type: mrr_at_1
value: 62.083
- type: mrr_at_10
value: 68.87299999999999
- type: mrr_at_100
value: 69.46900000000001
- type: mrr_at_1000
value: 69.48299999999999
- type: mrr_at_3
value: 66.8
- type: mrr_at_5
value: 67.928
- type: ndcg_at_1
value: 62.083
- type: ndcg_at_10
value: 65.583
- type: ndcg_at_100
value: 70.918
- type: ndcg_at_1000
value: 71.72800000000001
- type: ndcg_at_3
value: 60.428000000000004
- type: ndcg_at_5
value: 61.853
- type: precision_at_1
value: 62.083
- type: precision_at_10
value: 15.033
- type: precision_at_100
value: 1.9529999999999998
- type: precision_at_1000
value: 0.207
- type: precision_at_3
value: 36.315
- type: precision_at_5
value: 25.955000000000002
- type: recall_at_1
value: 39.395
- type: recall_at_10
value: 74.332
- type: recall_at_100
value: 94.729
- type: recall_at_1000
value: 99.75500000000001
- type: recall_at_3
value: 57.679
- type: recall_at_5
value: 65.036
task:
type: Retrieval
---
## gte-Qwen2-1.5B-instruct
**gte-Qwen2-1.5B-instruct** is the latest model in the gte (General Text Embedding) model family. The model is built on the [Qwen2-1.5B](https://huggingface.co/Qwen/Qwen2-1.5B) LLM and uses the same training data and strategies as the [gte-Qwen2-7B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct) model.
The model incorporates several key advancements:
- Integration of bidirectional attention mechanisms, enriching its contextual understanding.
- Instruction tuning, applied solely on the query side for streamlined efficiency.
- Comprehensive training across a vast, multilingual text corpus spanning diverse domains and scenarios. This training leverages both weakly supervised and supervised data, ensuring the model's applicability across numerous languages and a wide array of downstream tasks.
## Model Information
- Model Size: 1.5B
- Embedding Dimension: 1536
- Max Input Tokens: 32k
## Requirements
```
transformers>=4.39.2
flash_attn>=2.5.6
```
## Usage
### Sentence Transformers
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer("Alibaba-NLP/gte-Qwen2-1.5B-instruct", trust_remote_code=True)
# In case you want to reduce the maximum length:
model.max_seq_length = 8192
queries = [
"how much protein should a female eat",
"summit define",
]
documents = [
"As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments.",
]
query_embeddings = model.encode(queries, prompt_name="query")
document_embeddings = model.encode(documents)
scores = (query_embeddings @ document_embeddings.T) * 100
print(scores.tolist())
```
See [config_sentence_transformers.json](config_sentence_transformers.json) for all pre-built prompt names. Alternatively, you can call `model.encode(queries, prompt="Instruct: ...\nQuery: ")` to use a custom prompt of your choice.
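As a sketch of the custom-prompt path (the instruction and query below are illustrative and mirror the retrieval instruction used in the Transformers example further down):
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Alibaba-NLP/gte-Qwen2-1.5B-instruct", trust_remote_code=True)

# Illustrative one-sentence task description; adapt it to your own use case.
task = "Given a web search query, retrieve relevant passages that answer the query"
queries = ["how much protein should a female eat"]

# Pass the instruction as a custom prompt instead of a pre-built prompt name.
query_embeddings = model.encode(queries, prompt=f"Instruct: {task}\nQuery: ")
```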
### Transformers
```python
import torch
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel
def last_token_pool(last_hidden_states: Tensor,
attention_mask: Tensor) -> Tensor:
left_padding = (attention_mask[:, -1].sum() == attention_mask.shape[0])
if left_padding:
return last_hidden_states[:, -1]
else:
sequence_lengths = attention_mask.sum(dim=1) - 1
batch_size = last_hidden_states.shape[0]
return last_hidden_states[torch.arange(batch_size, device=last_hidden_states.device), sequence_lengths]
def get_detailed_instruct(task_description: str, query: str) -> str:
return f'Instruct: {task_description}\nQuery: {query}'
# Each query must come with a one-sentence instruction that describes the task
task = 'Given a web search query, retrieve relevant passages that answer the query'
queries = [
get_detailed_instruct(task, 'how much protein should a female eat'),
get_detailed_instruct(task, 'summit define')
]
# No need to add instruction for retrieval documents
documents = [
"As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."
]
input_texts = queries + documents
tokenizer = AutoTokenizer.from_pretrained('Alibaba-NLP/gte-Qwen2-1.5B-instruct', trust_remote_code=True)
model = AutoModel.from_pretrained('Alibaba-NLP/gte-Qwen2-1.5B-instruct', trust_remote_code=True)
max_length = 8192
# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=max_length, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
embeddings = last_token_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
# normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
```
### infinity_emb
Usage via [infinity, MIT Licensed](https://github.com/michaelfeil/infinity).
```bash
docker run \
--gpus "0" -p "7997":"7997" \
michaelf34/infinity:0.0.68-trt-onnx \
v2 --model-id Alibaba-NLP/gte-Qwen2-1.5B-instruct --revision "refs/pr/20" --dtype bfloat16 --batch-size 16 --device cuda --engine torch --port 7997 --no-bettertransformer
```
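Once the container is running, embeddings can be requested over HTTP. The snippet below is a minimal sketch that assumes infinity's OpenAI-compatible `/embeddings` route on the port mapped above; check the infinity documentation for the exact request schema.
```python
import requests

# Hypothetical request against the locally running infinity server started above.
response = requests.post(
    "http://localhost:7997/embeddings",
    json={
        "model": "Alibaba-NLP/gte-Qwen2-1.5B-instruct",
        "input": ["how much protein should a female eat"],
    },
)
embedding = response.json()["data"][0]["embedding"]
print(len(embedding))  # expected to match the 1536-dimensional embedding size
```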
## Evaluation
### MTEB & C-MTEB
You can use [scripts/eval_mteb.py](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct/blob/main/scripts/eval_mteb.py) to reproduce the following results of **gte-Qwen2-1.5B-instruct** on MTEB (English) / C-MTEB (Chinese):
| Model Name | MTEB(56) | C-MTEB(35) | MTEB-fr(26) | MTEB-pl(26) |
|:----:|:---------:|:----------:|:----------:|:----------:|
| [bge-base-en-1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | 64.23 | - | - | - |
| [bge-large-en-1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | 63.55 | - | - | - |
| [gte-large-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-large-en-v1.5) | 65.39 | - | - | - |
| [gte-base-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-base-en-v1.5) | 64.11 | - | - | - |
| [mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1) | 64.68 | - | - | - |
| [acge_text_embedding](https://huggingface.co/aspire/acge_text_embedding) | - | 69.07 | - | - |
| [stella-mrl-large-zh-v3.5-1792d](https://huggingface.co/infgrad/stella-mrl-large-zh-v3.5-1792d) | - | 68.55 | - | - |
| [gte-large-zh](https://huggingface.co/thenlper/gte-large-zh) | - | 66.72 | - | - |
| [multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) | 59.45 | 56.21 | - | - |
| [multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 61.50 | 58.81 | - | - |
| [e5-mistral-7b-instruct](https://huggingface.co/intfloat/e5-mistral-7b-instruct) | 66.63 | 60.81 | - | - |
| [gte-Qwen1.5-7B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen1.5-7B-instruct) | 67.34 | 69.52 | - | - |
| [NV-Embed-v1](https://huggingface.co/nvidia/NV-Embed-v1) | 69.32 | - | - | - |
| [**gte-Qwen2-7B-instruct**](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct) | **70.24** | **72.05** | **68.25** | **67.86** |
| [**gte-Qwen2-1.5B-instruct**](https://huggingface.co/Alibaba-NLP/gte-Qwen2-1.5B-instruct) | **67.16** | **67.65** | **66.60** | **64.04** |
### GTE Models
The gte series has consistently released two types of models: encoder-only models (based on the BERT architecture) and decoder-only models (based on the LLM architecture).
| Models | Language | Max Sequence Length | Dimension | Model Size (Memory Usage, fp32) |
|:-------------------------------------------------------------------------------------:|:--------:|:-----: |:---------:|:-------------------------------:|
| [GTE-large-zh](https://huggingface.co/thenlper/gte-large-zh) | Chinese | 512 | 1024 | 1.25GB |
| [GTE-base-zh](https://huggingface.co/thenlper/gte-base-zh) | Chinese | 512 | 512 | 0.41GB |
| [GTE-small-zh](https://huggingface.co/thenlper/gte-small-zh) | Chinese | 512 | 512 | 0.12GB |
| [GTE-large](https://huggingface.co/thenlper/gte-large) | English | 512 | 1024 | 1.25GB |
| [GTE-base](https://huggingface.co/thenlper/gte-base) | English | 512 | 512 | 0.21GB |
| [GTE-small](https://huggingface.co/thenlper/gte-small) | English | 512 | 384 | 0.10GB |
| [GTE-large-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-large-en-v1.5) | English | 8192 | 1024 | 1.74GB |
| [GTE-base-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-base-en-v1.5) | English | 8192 | 768 | 0.51GB |
| [GTE-Qwen1.5-7B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen1.5-7B-instruct) | Multilingual | 32000 | 4096 | 26.45GB |
| [GTE-Qwen2-7B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct) | Multilingual | 32000 | 3584 | 26.45GB |
| [GTE-Qwen2-1.5B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen2-1.5B-instruct) | Multilingual | 32000 | 1536 | 6.62GB |
## Cloud API Services
In addition to the open-source [GTE](https://huggingface.co/collections/Alibaba-NLP/gte-models-6680f0b13f885cb431e6d469) series models, GTE models are also available as commercial API services on Alibaba Cloud.
- [Embedding Models](https://help.aliyun.com/zh/model-studio/developer-reference/general-text-embedding/): Three versions of the text embedding models are available: text-embedding-v1/v2/v3, with v3 being the latest API service.
- [ReRank Models](https://help.aliyun.com/zh/model-studio/developer-reference/general-text-sorting-model/): The gte-rerank model service is available.
Note that the models behind the commercial APIs are not entirely identical to the open-source models.
## Citation
If you find our paper or models helpful, please consider citing:
```
@article{li2023towards,
title={Towards general text embeddings with multi-stage contrastive learning},
author={Li, Zehan and Zhang, Xin and Zhang, Yanzhao and Long, Dingkun and Xie, Pengjun and Zhang, Meishan},
journal={arXiv preprint arXiv:2308.03281},
year={2023}
}
```
|
timm/poolformer_m36.sail_in1k | timm | "2023-05-05T06:15:32Z" | 77,047 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2210.13452",
"license:apache-2.0",
"region:us"
] | image-classification | "2023-05-05T06:14:32Z" | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for poolformer_m36.sail_in1k
A PoolFormer (a MetaFormer) image classification model. Trained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 56.2
- GMACs: 8.8
- Activations (M): 22.0
- Image size: 224 x 224
- **Papers:**
- MetaFormer Is Actually What You Need for Vision: https://arxiv.org/abs/2210.13452
- **Original:** https://github.com/sail-sg/poolformer
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('poolformer_m36.sail_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'poolformer_m36.sail_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 96, 56, 56])
# torch.Size([1, 192, 28, 28])
# torch.Size([1, 384, 14, 14])
# torch.Size([1, 768, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'poolformer_m36.sail_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 768, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@inproceedings{yu2022metaformer,
title={Metaformer is actually what you need for vision},
author={Yu, Weihao and Luo, Mi and Zhou, Pan and Si, Chenyang and Zhou, Yichen and Wang, Xinchao and Feng, Jiashi and Yan, Shuicheng},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={10819--10829},
year={2022}
}
```
|
r1char9/rubert-base-cased-russian-sentiment | r1char9 | "2024-02-16T04:29:08Z" | 77,003 | 8 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"sentiment-analysis",
"multi-class-classification",
"sentiment analysis",
"rubert",
"sentiment",
"russian",
"multiclass",
"classification",
"ru",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-02-16T04:10:14Z" | ---
license: mit
language:
- ru
metrics:
- f1
- roc_auc
- precision
- recall
pipeline_tag: text-classification
tags:
- sentiment-analysis
- multi-class-classification
- sentiment analysis
- rubert
- sentiment
- bert
- russian
- multiclass
- classification
---
A [RuBERT](https://huggingface.co/DeepPavlov/rubert-base-cased) model fine-tuned for __sentiment classification__ of short __Russian__ texts.
The task is __multi-class classification__ with the following labels:
```yaml
0: neutral
1: positive
2: negative
```
## Usage
```python
from transformers import pipeline
model = pipeline(model="r1char9/rubert-base-cased-russian-sentiment")
model("Привет, ты мне нравишься!")
# [{'label': 'positive', 'score': 0.8220236897468567}]
```
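To get scores for all three labels at once, recent `transformers` versions accept `top_k=None` (a sketch; the scores shown are illustrative):
```python
from transformers import pipeline

model = pipeline(model="r1char9/rubert-base-cased-russian-sentiment")
# top_k=None returns a score for every label instead of only the top one.
model("Привет, ты мне нравишься!", top_k=None)
# e.g. [{'label': 'positive', 'score': 0.82}, {'label': 'neutral', 'score': 0.15}, {'label': 'negative', 'score': 0.03}]
```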
## Dataset
The model was trained on the following data:
- Kaggle Russian News Dataset
- Linis Crowd 2015
- Linis Crowd 2016
- RuReviews
- RuSentiment
```yaml
tokenizer.max_length: 256
batch_size: 32
optimizer: adam
lr: 0.00001
weight_decay: 0
epochs: 2
``` |
timm/rexnet_100.nav_in1k | timm | "2024-02-10T23:32:12Z" | 76,957 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2007.00992",
"license:mit",
"region:us"
] | image-classification | "2023-03-20T20:35:20Z" | ---
license: mit
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
---
# Model card for rexnet_100.nav_in1k
A ReXNet image classification model. Pretrained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 4.8
- GMACs: 0.4
- Activations (M): 7.4
- Image size: 224 x 224
- **Papers:**
- Rethinking Channel Dimensions for Efficient Model Design: https://arxiv.org/abs/2007.00992
- **Original:** https://github.com/clovaai/rexnet
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('rexnet_100.nav_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'rexnet_100.nav_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 16, 112, 112])
# torch.Size([1, 38, 56, 56])
# torch.Size([1, 61, 28, 28])
# torch.Size([1, 128, 14, 14])
# torch.Size([1, 185, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'rexnet_100.nav_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1280, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
|model |top1 |top5 |param_count|img_size|crop_pct|
|-------------------------|------|------|-----------|--------|--------|
|rexnetr_300.sw_in12k_ft_in1k|84.53 |97.252|34.81 |288 |1.0 |
|rexnetr_200.sw_in12k_ft_in1k|83.164|96.648|16.52 |288 |1.0 |
|rexnet_300.nav_in1k |82.772|96.232|34.71 |224 |0.875 |
|rexnet_200.nav_in1k |81.652|95.668|16.37 |224 |0.875 |
|rexnet_150.nav_in1k |80.308|95.174|9.73 |224 |0.875 |
|rexnet_130.nav_in1k |79.478|94.68 |7.56 |224 |0.875 |
|rexnet_100.nav_in1k |77.832|93.886|4.8 |224 |0.875 |
## Citation
```bibtex
@misc{han2021rethinking,
title={Rethinking Channel Dimensions for Efficient Model Design},
author={Dongyoon Han and Sangdoo Yun and Byeongho Heo and YoungJoon Yoo},
year={2021},
eprint={2007.00992},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
Helsinki-NLP/opus-mt-vi-en | Helsinki-NLP | "2023-08-16T12:08:32Z" | 76,879 | 10 | transformers | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"vi",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | "2022-03-02T23:29:04Z" | ---
language:
- vi
- en
tags:
- translation
license: apache-2.0
---
### vie-eng
* source group: Vietnamese
* target group: English
* OPUS readme: [vie-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/vie-eng/README.md)
* model: transformer-align
* source language(s): vie vie_Hani
* target language(s): eng
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-eng/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-eng/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-eng/opus-2020-06-17.eval.txt)
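A minimal usage sketch with the `transformers` library (the Vietnamese input sentence is only illustrative):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-vi-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src_texts = ["Xin chào, bạn khỏe không?"]  # illustrative Vietnamese input
batch = tokenizer(src_texts, return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```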
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.vie.eng | 42.8 | 0.608 |
### System Info:
- hf_name: vie-eng
- source_languages: vie
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/vie-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['vi', 'en']
- src_constituents: {'vie', 'vie_Hani'}
- tgt_constituents: {'eng'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/vie-eng/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/vie-eng/opus-2020-06-17.test.txt
- src_alpha3: vie
- tgt_alpha3: eng
- short_pair: vi-en
- chrF2_score: 0.608
- bleu: 42.8
- brevity_penalty: 0.955
- ref_len: 20241.0
- src_name: Vietnamese
- tgt_name: English
- train_date: 2020-06-17
- src_alpha2: vi
- tgt_alpha2: en
- prefer_old: False
- long_pair: vie-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
facebook/dino-vits16 | facebook | "2023-05-22T07:05:10Z" | 76,804 | 14 | transformers | [
"transformers",
"pytorch",
"vit",
"image-feature-extraction",
"dino",
"vision",
"dataset:imagenet-1k",
"arxiv:2104.14294",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-feature-extraction | "2022-03-02T23:29:05Z" | ---
license: apache-2.0
tags:
- dino
- vision
datasets:
- imagenet-1k
---
# Vision Transformer (small-sized model, patch size 16) trained using DINO
Vision Transformer (ViT) model trained using the DINO method. It was introduced in the paper [Emerging Properties in Self-Supervised Vision Transformers](https://arxiv.org/abs/2104.14294) by Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin and first released in [this repository](https://github.com/facebookresearch/dino).
Disclaimer: The team releasing DINO did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-1k, at a resolution of 224x224 pixels.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder.
Note that this model does not include any fine-tuned heads.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image.
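As a sketch of that linear-probe setup (the number of classes and the example image are placeholders, not part of this model):
```python
import torch
import requests
from PIL import Image
from transformers import ViTImageProcessor, ViTModel

url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

processor = ViTImageProcessor.from_pretrained('facebook/dino-vits16')
backbone = ViTModel.from_pretrained('facebook/dino-vits16')
classifier = torch.nn.Linear(backbone.config.hidden_size, 10)  # 10 classes is an arbitrary placeholder

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    cls_embedding = backbone(**inputs).last_hidden_state[:, 0]  # [CLS] token representation
logits = classifier(cls_embedding)  # train only this linear layer on your labeled data
```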
## Intended uses & limitations
You can use the raw model to extract image features or fine-tune it for image classification. See the [model hub](https://huggingface.co/models?search=google/vit) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import ViTImageProcessor, ViTModel
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
processor = ViTImageProcessor.from_pretrained('facebook/dino-vits16')
model = ViTModel.from_pretrained('facebook/dino-vits16')
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
```
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2104-14294,
author = {Mathilde Caron and
Hugo Touvron and
Ishan Misra and
Herv{\'{e}} J{\'{e}}gou and
Julien Mairal and
Piotr Bojanowski and
Armand Joulin},
title = {Emerging Properties in Self-Supervised Vision Transformers},
journal = {CoRR},
volume = {abs/2104.14294},
year = {2021},
url = {https://arxiv.org/abs/2104.14294},
archivePrefix = {arXiv},
eprint = {2104.14294},
timestamp = {Tue, 04 May 2021 15:12:43 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2104-14294.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
timm/crossvit_9_240.in1k | timm | "2023-04-24T00:30:01Z" | 76,602 | 1 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2103.14899",
"license:apache-2.0",
"region:us"
] | image-classification | "2023-04-24T00:29:50Z" | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for crossvit_9_240.in1k
A CrossViT image classification model. Trained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 8.6
- GMACs: 1.8
- Activations (M): 9.5
- Image size: 240 x 240
- **Papers:**
- CrossViT: Cross-Attention Multi-Scale Vision Transformer for Image Classification: https://arxiv.org/abs/2103.14899
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/IBM/CrossViT
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('crossvit_9_240.in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'crossvit_9_240.in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a tuple of tensors with shapes (torch.Size([1, 401, 128]), torch.Size([1, 197, 256]))
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@inproceedings{
chen2021crossvit,
title={{CrossViT: Cross-Attention Multi-Scale Vision Transformer for Image Classification}},
author={Chun-Fu (Richard) Chen and Quanfu Fan and Rameswar Panda},
booktitle={International Conference on Computer Vision (ICCV)},
year={2021}
}
```
|
timm/fbnetv3_b.ra2_in1k | timm | "2023-04-27T22:48:34Z" | 76,456 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2110.00476",
"arxiv:2006.02049",
"license:apache-2.0",
"region:us"
] | image-classification | "2022-12-16T05:36:34Z" | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for fbnetv3_b.ra2_in1k
An FBNet-v3 image classification model. Trained on ImageNet-1k in `timm` using the recipe template described below.
Recipe details:
* RandAugment `RA2` recipe. Inspired by and evolved from EfficientNet RandAugment recipes. Published as `B` recipe in [ResNet Strikes Back](https://arxiv.org/abs/2110.00476).
* RMSProp (TF 1.0 behaviour) optimizer, EMA weight averaging
* Step (exponential decay w/ staircase) LR schedule with warmup
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 8.6
- GMACs: 0.4
- Activations (M): 7.0
- Image size: train = 224 x 224, test = 256 x 256
- **Papers:**
- FBNetV3: Joint Architecture-Recipe Search using Predictor Pretraining: https://arxiv.org/abs/2006.02049
- ResNet strikes back: An improved training procedure in timm: https://arxiv.org/abs/2110.00476
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/huggingface/pytorch-image-models
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('fbnetv3_b.ra2_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'fbnetv3_b.ra2_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 16, 112, 112])
# torch.Size([1, 24, 56, 56])
# torch.Size([1, 40, 28, 28])
# torch.Size([1, 120, 14, 14])
# torch.Size([1, 1344, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'fbnetv3_b.ra2_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1344, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@inproceedings{dai2021fbnetv3,
title={Fbnetv3: Joint architecture-recipe search using predictor pretraining},
author={Dai, Xiaoliang and Wan, Alvin and Zhang, Peizhao and Wu, Bichen and He, Zijian and Wei, Zhen and Chen, Kan and Tian, Yuandong and Yu, Matthew and Vajda, Peter and others},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={16276--16285},
year={2021}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@inproceedings{wightman2021resnet,
title={ResNet strikes back: An improved training procedure in timm},
author={Wightman, Ross and Touvron, Hugo and Jegou, Herve},
booktitle={NeurIPS 2021 Workshop on ImageNet: Past, Present, and Future}
}
```
|
sentence-transformers/gtr-t5-large | sentence-transformers | "2024-03-27T10:41:50Z" | 76,355 | 35 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"safetensors",
"t5",
"feature-extraction",
"sentence-similarity",
"en",
"arxiv:2112.07899",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2022-03-02T23:29:05Z" | ---
language: en
license: apache-2.0
library_name: sentence-transformers
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
pipeline_tag: sentence-similarity
---
# sentence-transformers/gtr-t5-large
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space. The model was specifically trained for the task of semantic search.
This model was converted from the TensorFlow model [gtr-large-1](https://tfhub.dev/google/gtr/gtr-large/1) to PyTorch. When using this model, have a look at the publication: [Large Dual Encoders Are Generalizable Retrievers](https://arxiv.org/abs/2112.07899). The TF Hub model and this PyTorch model can produce slightly different embeddings; however, when run on the same benchmarks, they produce identical results.
The model uses only the encoder from a T5-large model. The weights are stored in FP16.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/gtr-t5-large')
embeddings = model.encode(sentences)
print(embeddings)
```
The model requires sentence-transformers version 2.2.0 or newer.
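For the semantic-search use case the model was trained for, a small sketch (the query and documents are illustrative):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('sentence-transformers/gtr-t5-large')

query = "How do vaccines work?"  # illustrative query
docs = [
    "Vaccines train the immune system to recognize and fight pathogens.",
    "The stock market closed higher on Friday.",
]  # illustrative corpus

query_emb = model.encode(query, convert_to_tensor=True)
doc_embs = model.encode(docs, convert_to_tensor=True)

scores = util.cos_sim(query_emb, doc_embs)  # higher score = more relevant document
print(scores)
```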
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/gtr-t5-large)
## Citing & Authors
If you find this model helpful, please cite the respective publication:
[Large Dual Encoders Are Generalizable Retrievers](https://arxiv.org/abs/2112.07899)
|
timm/resmlp_12_224.fb_in1k | timm | "2024-02-10T23:36:32Z" | 76,245 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2105.03404",
"license:apache-2.0",
"region:us"
] | image-classification | "2023-03-27T23:12:04Z" | ---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
---
# Model card for resmlp_12_224.fb_in1k
A ResMLP image classification model. Trained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 15.4
- GMACs: 3.0
- Activations (M): 5.5
- Image size: 224 x 224
- **Papers:**
- ResMLP: Feedforward networks for image classification with data-efficient training: https://arxiv.org/abs/2105.03404
- **Original:** https://github.com/facebookresearch/deit
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('resmlp_12_224.fb_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
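To turn the top-5 class indices into human-readable names, you can look them up in an ImageNet-1k label list. The sketch below continues from the classification snippet above and assumes the label file from the PyTorch hub examples (any class list in index order works):
```python
from urllib.request import urlopen

# ImageNet-1k class names in index order (assumed source; substitute your own list if preferred)
class_names = urlopen(
    'https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt'
).read().decode('utf-8').splitlines()

for prob, idx in zip(top5_probabilities[0].tolist(), top5_class_indices[0].tolist()):
    print(f"{class_names[idx]}: {prob:.2f}%")
```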
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'resmlp_12_224.fb_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 196, 384) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{touvron2021resmlp,
title={ResMLP: Feedforward networks for image classification with data-efficient training},
  author={Hugo Touvron and Piotr Bojanowski and Mathilde Caron and Matthieu Cord and Alaaeldin El-Nouby and Edouard Grave and Gautier Izacard and Armand Joulin and Gabriel Synnaeve and Jakob Verbeek and Hervé Jégou},
journal={arXiv preprint arXiv:2105.03404},
year={2021},
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
flair/chunk-english-fast | flair | "2023-04-05T11:50:33Z" | 76,180 | 5 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"en",
"dataset:conll2000",
"region:us"
] | token-classification | "2022-03-02T23:29:05Z" | ---
tags:
- flair
- token-classification
- sequence-tagger-model
language: en
datasets:
- conll2000
widget:
- text: "The happy man has been eating at the diner"
---
## English Chunking in Flair (fast model)
This is the fast phrase chunking model for English that ships with [Flair](https://github.com/flairNLP/flair/).
F1-Score: **96.22** (CoNLL-2000)
Predicts 10 tags:
| **tag** | **meaning** |
|---------------------------------|-----------|
| ADJP | adjectival |
| ADVP | adverbial |
| CONJP | conjunction |
| INTJ | interjection |
| LST | list marker |
| NP | noun phrase |
| PP | prepositional |
| PRT | particle |
| SBAR | subordinate clause |
| VP | verb phrase |
Based on [Flair embeddings](https://www.aclweb.org/anthology/C18-1139/) and LSTM-CRF.
---
### Demo: How to use in Flair
Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`)
```python
from flair.data import Sentence
from flair.models import SequenceTagger
# load tagger
tagger = SequenceTagger.load("flair/chunk-english-fast")
# make example sentence
sentence = Sentence("The happy man has been eating at the diner")
# predict chunk tags
tagger.predict(sentence)
# print sentence
print(sentence)
# print predicted chunk spans
print('The following chunk tags are found:')
# iterate over entities and print
for entity in sentence.get_spans('np'):
print(entity)
```
This yields the following output:
```
Span [1,2,3]: "The happy man" [− Labels: NP (0.9958)]
Span [4,5,6]: "has been eating" [− Labels: VP (0.8759)]
Span [7]: "at" [− Labels: PP (1.0)]
Span [8,9]: "the diner" [− Labels: NP (0.9991)]
```
So, the spans "*The happy man*" and "*the diner*" are labeled as **noun phrases** (NP) and "*has been eating*" is labeled as a **verb phrase** (VP) in the sentence "*The happy man has been eating at the diner*".
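To chunk many sentences at once, `tagger.predict` also accepts a list of `Sentence` objects and batches them internally. A short sketch (the second sentence and the batch size are arbitrary):
```python
from flair.data import Sentence
from flair.models import SequenceTagger

tagger = SequenceTagger.load("flair/chunk-english-fast")

sentences = [
    Sentence("The happy man has been eating at the diner"),
    Sentence("She quickly finished the report before lunch"),
]

# predict chunk tags for all sentences in one call
tagger.predict(sentences, mini_batch_size=32)

for sentence in sentences:
    for span in sentence.get_spans('np'):
        print(span)
```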
---
### Training: Script to train this model
The following Flair script was used to train this model:
```python
from flair.data import Corpus
from flair.datasets import CONLL_2000
from flair.embeddings import WordEmbeddings, StackedEmbeddings, FlairEmbeddings
# 1. get the corpus
corpus: Corpus = CONLL_2000()
# 2. what tag do we want to predict?
tag_type = 'np'
# 3. make the tag dictionary from the corpus
tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type)
# 4. initialize each embedding we use
embedding_types = [
# contextual string embeddings, forward
FlairEmbeddings('news-forward-fast'),
# contextual string embeddings, backward
FlairEmbeddings('news-backward-fast'),
]
# embedding stack consists of forward and backward Flair embeddings
embeddings = StackedEmbeddings(embeddings=embedding_types)
# 5. initialize sequence tagger
from flair.models import SequenceTagger
tagger = SequenceTagger(hidden_size=256,
embeddings=embeddings,
tag_dictionary=tag_dictionary,
tag_type=tag_type)
# 6. initialize trainer
from flair.trainers import ModelTrainer
trainer = ModelTrainer(tagger, corpus)
# 7. run training
trainer.train('resources/taggers/chunk-english-fast',
train_with_dev=True,
max_epochs=150)
```
---
### Cite
Please cite the following paper when using this model.
```
@inproceedings{akbik2018coling,
title={Contextual String Embeddings for Sequence Labeling},
author={Akbik, Alan and Blythe, Duncan and Vollgraf, Roland},
booktitle = {{COLING} 2018, 27th International Conference on Computational Linguistics},
pages = {1638--1649},
year = {2018}
}
```
---
### Issues?
The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/).
|
timm/res2net50_14w_8s.in1k | timm | "2023-04-24T00:04:42Z" | 76,166 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:1904.01169",
"license:unknown",
"region:us"
] | image-classification | "2023-04-24T00:04:16Z" | ---
tags:
- image-classification
- timm
library_name: timm
license: unknown
datasets:
- imagenet-1k
---
# Model card for res2net50_14w_8s.in1k
A Res2Net (Multi-Scale ResNet) image classification model. Trained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 25.1
- GMACs: 4.2
- Activations (M): 13.3
- Image size: 224 x 224
- **Papers:**
- Res2Net: A New Multi-scale Backbone Architecture: https://arxiv.org/abs/1904.01169
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/gasvn/Res2Net/
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('res2net50_14w_8s.in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'res2net50_14w_8s.in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 112, 112])
# torch.Size([1, 256, 56, 56])
# torch.Size([1, 512, 28, 28])
# torch.Size([1, 1024, 14, 14])
# torch.Size([1, 2048, 7, 7])
print(o.shape)
```
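The per-stage shapes printed above can also be read off programmatically: models created with `features_only=True` expose a `feature_info` object with channel counts and reduction factors for each returned feature map. A brief sketch:
```python
import timm

model = timm.create_model('res2net50_14w_8s.in1k', pretrained=True, features_only=True)

# channel count and downsampling factor of each feature map
print(model.feature_info.channels())   # e.g. [64, 256, 512, 1024, 2048]
print(model.feature_info.reduction())  # e.g. [2, 4, 8, 16, 32]
```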
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'res2net50_14w_8s.in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2048, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{gao2019res2net,
title={Res2Net: A New Multi-scale Backbone Architecture},
author={Gao, Shang-Hua and Cheng, Ming-Ming and Zhao, Kai and Zhang, Xin-Yu and Yang, Ming-Hsuan and Torr, Philip},
journal={IEEE TPAMI},
doi={10.1109/TPAMI.2019.2938758},
}
```
|
shahrukhx01/question-vs-statement-classifier | shahrukhx01 | "2023-03-29T22:01:12Z" | 76,144 | 40 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"neural-search-query-classification",
"neural-search",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-03-02T23:29:05Z" | ---
language: "en"
tags:
- neural-search-query-classification
- neural-search
widget:
- text: "what did you eat in lunch?"
---
# KEYWORD STATEMENT VS QUESTION CLASSIFIER FOR NEURAL SEARCH
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("shahrukhx01/question-vs-statement-classifier")
model = AutoModelForSequenceClassification.from_pretrained("shahrukhx01/question-vs-statement-classifier")
```
Trained to classify queries as question queries vs. statement (keyword) queries, adding query-type classification support for [Haystack](https://github.com/deepset-ai/haystack/issues/611).
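A minimal inference sketch using the `transformers` pipeline API is shown below. The example queries are invented, and the label-to-class mapping noted in the comment is an assumption; check the model's `id2label` config for the authoritative mapping.
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="shahrukhx01/question-vs-statement-classifier",
)

print(classifier("what did you eat in lunch?"))     # question-style query
print(classifier("best pizza places in new york"))  # keyword/statement-style query
# The pipeline returns raw labels (e.g. LABEL_0 / LABEL_1); map them to
# statement vs. question via model.config.id2label (assumption, verify for this checkpoint).
```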
|
flair/ner-english-ontonotes-large | flair | "2021-05-08T15:35:21Z" | 76,107 | 92 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"en",
"dataset:ontonotes",
"arxiv:2011.06993",
"region:us"
] | token-classification | "2022-03-02T23:29:05Z" | ---
tags:
- flair
- token-classification
- sequence-tagger-model
language: en
datasets:
- ontonotes
widget:
- text: "On September 1st George won 1 dollar while watching Game of Thrones."
---
## English NER in Flair (Ontonotes large model)
This is the large 18-class NER model for English that ships with [Flair](https://github.com/flairNLP/flair/).
F1-Score: **90.93** (Ontonotes)
Predicts 18 tags:
| **tag** | **meaning** |
|---------------------------------|-----------|
| CARDINAL | cardinal value |
| DATE | date value |
| EVENT | event name |
| FAC | building name |
| GPE | geo-political entity |
| LANGUAGE | language name |
| LAW | law name |
| LOC | location name |
| MONEY | money name |
| NORP | affiliation |
| ORDINAL | ordinal value |
| ORG | organization name |
| PERCENT | percent value |
| PERSON | person name |
| PRODUCT | product name |
| QUANTITY | quantity value |
| TIME | time value |
| WORK_OF_ART | name of work of art |
Based on document-level XLM-R embeddings and [FLERT](https://arxiv.org/pdf/2011.06993v1.pdf/).
---
### Demo: How to use in Flair
Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`)
```python
from flair.data import Sentence
from flair.models import SequenceTagger
# load tagger
tagger = SequenceTagger.load("flair/ner-english-ontonotes-large")
# make example sentence
sentence = Sentence("On September 1st George won 1 dollar while watching Game of Thrones.")
# predict NER tags
tagger.predict(sentence)
# print sentence
print(sentence)
# print predicted NER spans
print('The following NER tags are found:')
# iterate over entities and print
for entity in sentence.get_spans('ner'):
print(entity)
```
This yields the following output:
```
Span [2,3]: "September 1st" [− Labels: DATE (1.0)]
Span [4]: "George" [− Labels: PERSON (1.0)]
Span [6,7]: "1 dollar" [− Labels: MONEY (1.0)]
Span [10,11,12]: "Game of Thrones" [− Labels: WORK_OF_ART (1.0)]
```
So, the entities "*September 1st*" (labeled as a **date**), "*George*" (labeled as a **person**), "*1 dollar*" (labeled as **money**) and "*Game of Thrones*" (labeled as a **work of art**) are found in the sentence "*On September 1st George won 1 dollar while watching Game of Thrones*".
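If you need the predictions in a structured form, the spans can be grouped by tag. The sketch below follows the current Flair API (`span.text`, `label.value`); attribute names may differ slightly in older Flair versions:
```python
from collections import defaultdict

from flair.data import Sentence
from flair.models import SequenceTagger

tagger = SequenceTagger.load("flair/ner-english-ontonotes-large")

sentence = Sentence("On September 1st George won 1 dollar while watching Game of Thrones.")
tagger.predict(sentence)

# collect entity texts per predicted tag
entities = defaultdict(list)
for span in sentence.get_spans('ner'):
    for label in span.get_labels('ner'):
        entities[label.value].append(span.text)

print(dict(entities))  # e.g. {'DATE': ['September 1st'], 'PERSON': ['George'], ...}
```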
---
### Training: Script to train this model
The following Flair script was used to train this model:
```python
import torch

from flair.data import Corpus
from flair.datasets import ColumnCorpus
from flair.embeddings import WordEmbeddings, StackedEmbeddings, FlairEmbeddings
# 1. load the corpus (Ontonotes does not ship with Flair, you need to download and reformat into a column format yourself)
corpus: Corpus = ColumnCorpus(
"resources/tasks/onto-ner",
column_format={0: "text", 1: "pos", 2: "upos", 3: "ner"},
tag_to_bioes="ner",
)
# 2. what tag do we want to predict?
tag_type = 'ner'
# 3. make the tag dictionary from the corpus
tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type)
# 4. initialize fine-tuneable transformer embeddings WITH document context
from flair.embeddings import TransformerWordEmbeddings
embeddings = TransformerWordEmbeddings(
model='xlm-roberta-large',
layers="-1",
subtoken_pooling="first",
fine_tune=True,
use_context=True,
)
# 5. initialize bare-bones sequence tagger (no CRF, no RNN, no reprojection)
from flair.models import SequenceTagger
tagger = SequenceTagger(
hidden_size=256,
embeddings=embeddings,
tag_dictionary=tag_dictionary,
tag_type='ner',
use_crf=False,
use_rnn=False,
reproject_embeddings=False,
)
# 6. initialize trainer with AdamW optimizer
from flair.trainers import ModelTrainer
trainer = ModelTrainer(tagger, corpus, optimizer=torch.optim.AdamW)
# 7. run training with XLM parameters (20 epochs, small LR)
from torch.optim.lr_scheduler import OneCycleLR
trainer.train('resources/taggers/ner-english-ontonotes-large',
learning_rate=5.0e-6,
mini_batch_size=4,
mini_batch_chunk_size=1,
max_epochs=20,
scheduler=OneCycleLR,
embeddings_storage_mode='none',
weight_decay=0.,
)
```
---
### Cite
Please cite the following paper when using this model.
```
@misc{schweter2020flert,
title={FLERT: Document-Level Features for Named Entity Recognition},
author={Stefan Schweter and Alan Akbik},
year={2020},
eprint={2011.06993},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
---
### Issues?
The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/).
|
nvidia/bigvgan_v2_44khz_128band_512x | nvidia | "2024-09-05T03:35:39Z" | 75,940 | 16 | PyTorch | [
"PyTorch",
"neural-vocoder",
"audio-generation",
"audio-to-audio",
"arxiv:2206.04658",
"license:mit",
"region:us"
] | audio-to-audio | "2024-07-15T14:10:28Z" | ---
license: mit
license_link: https://huggingface.co/nvidia/BigVGAN/blob/main/LICENSE
tags:
- neural-vocoder
- audio-generation
library_name: PyTorch
pipeline_tag: audio-to-audio
---
## BigVGAN: A Universal Neural Vocoder with Large-Scale Training
#### Sang-gil Lee, Wei Ping, Boris Ginsburg, Bryan Catanzaro, Sungroh Yoon
[[Paper]](https://arxiv.org/abs/2206.04658) - [[Code]](https://github.com/NVIDIA/BigVGAN) - [[Showcase]](https://bigvgan-demo.github.io/) - [[Project Page]](https://research.nvidia.com/labs/adlr/projects/bigvgan/) - [[Weights]](https://huggingface.co/collections/nvidia/bigvgan-66959df3d97fd7d98d97dc9a) - [[Demo]](https://huggingface.co/spaces/nvidia/BigVGAN)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/bigvgan-a-universal-neural-vocoder-with-large/speech-synthesis-on-libritts)](https://paperswithcode.com/sota/speech-synthesis-on-libritts?p=bigvgan-a-universal-neural-vocoder-with-large)
<center><img src="https://user-images.githubusercontent.com/15963413/218609148-881e39df-33af-4af9-ab95-1427c4ebf062.png" width="800"></center>
## News
- **Jul 2024 (v2.3):**
  - General refactor and code improvements for better readability.
  - Fully fused CUDA kernel of anti-aliased activation (upsampling + activation + downsampling) with an inference speed benchmark.
- **Jul 2024 (v2.2):** The repository now includes an interactive local demo using gradio.
- **Jul 2024 (v2.1):** BigVGAN is now integrated with 🤗 Hugging Face Hub with easy access to inference using pretrained checkpoints. We also provide an interactive demo on Hugging Face Spaces.
- **Jul 2024 (v2):** We release BigVGAN-v2 along with pretrained checkpoints. Below are the highlights:
- Custom CUDA kernel for inference: we provide a fused upsampling + activation kernel written in CUDA for accelerated inference speed. Our test shows 1.5 - 3x faster speed on a single A100 GPU.
- Improved discriminator and loss: BigVGAN-v2 is trained using a multi-scale sub-band CQT discriminator and a multi-scale mel spectrogram loss.
- Larger training data: BigVGAN-v2 is trained using datasets containing diverse audio types, including speech in multiple languages, environmental sounds, and instruments.
- We provide pretrained checkpoints of BigVGAN-v2 using diverse audio configurations, supporting up to 44 kHz sampling rate and 512x upsampling ratio.
## Installation
This repository contains pretrained BigVGAN checkpoints with easy access to inference and additional `huggingface_hub` support.
If you are interested in training the model and additional functionalities, please visit the official GitHub repository for more information: https://github.com/NVIDIA/BigVGAN
```shell
git lfs install
git clone https://huggingface.co/nvidia/bigvgan_v2_44khz_128band_512x
```
## Usage
The example below shows how to use BigVGAN: load the pretrained BigVGAN generator from the Hugging Face Hub, compute a mel spectrogram from an input waveform, and generate a synthesized waveform using the mel spectrogram as the model's input.
```python
device = 'cuda'
import torch
import bigvgan
import librosa
from meldataset import get_mel_spectrogram
# instantiate the model. You can optionally set use_cuda_kernel=True for faster inference.
model = bigvgan.BigVGAN.from_pretrained('nvidia/bigvgan_v2_44khz_128band_512x', use_cuda_kernel=False)
# remove weight norm in the model and set to eval mode
model.remove_weight_norm()
model = model.eval().to(device)
# load wav file and compute mel spectrogram
wav_path = '/path/to/your/audio.wav'
wav, sr = librosa.load(wav_path, sr=model.h.sampling_rate, mono=True) # wav is np.ndarray with shape [T_time] and values in [-1, 1]
wav = torch.FloatTensor(wav).unsqueeze(0) # wav is FloatTensor with shape [B(1), T_time]
# compute mel spectrogram from the ground truth audio
mel = get_mel_spectrogram(wav, model.h).to(device) # mel is FloatTensor with shape [B(1), C_mel, T_frame]
# generate waveform from mel
with torch.inference_mode():
wav_gen = model(mel) # wav_gen is FloatTensor with shape [B(1), 1, T_time] and values in [-1, 1]
wav_gen_float = wav_gen.squeeze(0).cpu() # wav_gen is FloatTensor with shape [1, T_time]
# you can convert the generated waveform to 16 bit linear PCM
wav_gen_int16 = (wav_gen_float * 32767.0).numpy().astype('int16') # wav_gen is now np.ndarray with shape [1, T_time] and int16 dtype
```
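To write the generated audio to disk, one option is `scipy.io.wavfile` with the 16-bit PCM array from the snippet above (a sketch continuing that example; the output filename is arbitrary):
```python
from scipy.io import wavfile

# wav_gen_int16 has shape [1, T_time]; write the single channel at the model's sampling rate
wavfile.write('generated.wav', model.h.sampling_rate, wav_gen_int16[0])
```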
## Using Custom CUDA Kernel for Synthesis
You can apply the fast CUDA inference kernel by using a parameter `use_cuda_kernel` when instantiating BigVGAN:
```python
import bigvgan
model = bigvgan.BigVGAN.from_pretrained('nvidia/bigvgan_v2_44khz_128band_512x', use_cuda_kernel=True)
```
When applied for the first time, it builds the kernel using `nvcc` and `ninja`. If the build succeeds, the kernel is saved to `alias_free_activation/cuda/build` and the model automatically loads the kernel. The codebase has been tested using CUDA `12.1`.
Please make sure that both `nvcc` and `ninja` are installed on your system and that the CUDA version used by `nvcc` matches the one your PyTorch build was compiled with.
For detail, see the official GitHub repository: https://github.com/NVIDIA/BigVGAN?tab=readme-ov-file#using-custom-cuda-kernel-for-synthesis
## Pretrained Models
We provide the [pretrained models on Hugging Face Collections](https://huggingface.co/collections/nvidia/bigvgan-66959df3d97fd7d98d97dc9a).
One can download the checkpoints of the generator weight (named `bigvgan_generator.pt`) and its discriminator/optimizer states (named `bigvgan_discriminator_optimizer.pt`) within the listed model repositories.
| Model Name | Sampling Rate | Mel band | fmax | Upsampling Ratio | Params | Dataset | Steps | Fine-Tuned |
|:--------------------------------------------------------------------------------------------------------:|:-------------:|:--------:|:-----:|:----------------:|:------:|:--------------------------:|:-----:|:----------:|
| [bigvgan_v2_44khz_128band_512x](https://huggingface.co/nvidia/bigvgan_v2_44khz_128band_512x) | 44 kHz | 128 | 22050 | 512 | 122M | Large-scale Compilation | 5M | No |
| [bigvgan_v2_44khz_128band_256x](https://huggingface.co/nvidia/bigvgan_v2_44khz_128band_256x) | 44 kHz | 128 | 22050 | 256 | 112M | Large-scale Compilation | 5M | No |
| [bigvgan_v2_24khz_100band_256x](https://huggingface.co/nvidia/bigvgan_v2_24khz_100band_256x) | 24 kHz | 100 | 12000 | 256 | 112M | Large-scale Compilation | 5M | No |
| [bigvgan_v2_22khz_80band_256x](https://huggingface.co/nvidia/bigvgan_v2_22khz_80band_256x) | 22 kHz | 80 | 11025 | 256 | 112M | Large-scale Compilation | 5M | No |
| [bigvgan_v2_22khz_80band_fmax8k_256x](https://huggingface.co/nvidia/bigvgan_v2_22khz_80band_fmax8k_256x) | 22 kHz | 80 | 8000 | 256 | 112M | Large-scale Compilation | 5M | No |
| [bigvgan_24khz_100band](https://huggingface.co/nvidia/bigvgan_24khz_100band) | 24 kHz | 100 | 12000 | 256 | 112M | LibriTTS | 5M | No |
| [bigvgan_base_24khz_100band](https://huggingface.co/nvidia/bigvgan_base_24khz_100band) | 24 kHz | 100 | 12000 | 256 | 14M | LibriTTS | 5M | No |
| [bigvgan_22khz_80band](https://huggingface.co/nvidia/bigvgan_22khz_80band) | 22 kHz | 80 | 8000 | 256 | 112M | LibriTTS + VCTK + LJSpeech | 5M | No |
| [bigvgan_base_22khz_80band](https://huggingface.co/nvidia/bigvgan_base_22khz_80band) | 22 kHz | 80 | 8000 | 256 | 14M | LibriTTS + VCTK + LJSpeech | 5M | No | |
timm/tinynet_a.in1k | timm | "2023-04-27T21:50:19Z" | 75,913 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2010.14819",
"license:apache-2.0",
"region:us"
] | image-classification | "2022-12-13T00:21:58Z" | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for tinynet_a.in1k
A TinyNet image classification model. Trained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 6.2
- GMACs: 0.3
- Activations (M): 5.4
- Image size: 192 x 192
- **Papers:**
- Model rubik's cube: Twisting resolution, depth and width for tinynets: https://arxiv.org/abs/2010.14819v2
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('tinynet_a.in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'tinynet_a.in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 16, 96, 96])
# torch.Size([1, 24, 48, 48])
# torch.Size([1, 40, 24, 24])
# torch.Size([1, 112, 12, 12])
# torch.Size([1, 320, 6, 6])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'tinynet_a.in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1280, 6, 6) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{han2020model,
title={Model rubik’s cube: Twisting resolution, depth and width for tinynets},
author={Han, Kai and Wang, Yunhe and Zhang, Qiulin and Zhang, Wei and Xu, Chunjing and Zhang, Tong},
journal={Advances in Neural Information Processing Systems},
volume={33},
pages={19353--19364},
year={2020}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
UFNLP/gatortron-base | UFNLP | "2024-03-19T00:23:59Z" | 75,876 | 42 | transformers | [
"transformers",
"pytorch",
"megatron-bert",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2023-06-02T23:27:12Z" | ---
license: apache-2.0
---
<h2>GatorTron-Base overview </h2>
Developed through a joint effort between the University of Florida and NVIDIA, GatorTron-Base is a clinical language model of 345 million parameters, pre-trained using a BERT architecture implemented in the Megatron package (https://github.com/NVIDIA/Megatron-LM).
GatorTron-Base is pre-trained using a dataset consisting of:
- 82B words of de-identified clinical notes from the University of Florida Health System,
- 6.1B words from PubMed CC0,
- 2.5B words from WikiText,
- 0.5B words of de-identified clinical notes from MIMIC-III
The GitHub repository for GatorTron is at: https://github.com/uf-hobi-informatics-lab/GatorTron
This model was converted to Hugging Face format from: https://catalog.ngc.nvidia.com/orgs/nvidia/teams/clara/models/gatortron_og
<h2>Model variations</h2>
Model | Parameter
--- | ---
[gatortron-base (this model)](https://huggingface.co/UFNLP/gatortron-base)| 345 million
[gatortronS](https://huggingface.co/UFNLP/gatortronS) | 345 million
[gatortron-medium](https://huggingface.co/UFNLP/gatortron-medium) | 3.9 billion
[gatortron-large](https://huggingface.co/UFNLP/gatortron-large) | 8.9 billion
<h2>How to use</h2>
```python
from transformers import AutoModel, AutoTokenizer, AutoConfig
tokenizer= AutoTokenizer.from_pretrained('UFNLP/gatortron-base')
config=AutoConfig.from_pretrained('UFNLP/gatortron-base')
mymodel=AutoModel.from_pretrained('UFNLP/gatortron-base')
encoded_input=tokenizer("Bone scan: Negative for distant metastasis.", return_tensors="pt")
encoded_output = mymodel(**encoded_input)
print (encoded_output)
```
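The encoder output can be pooled into a fixed-size text representation, for example by mean-pooling the last hidden states over non-padding tokens. This is one common choice rather than an official recommendation; a short sketch:
```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('UFNLP/gatortron-base')
model = AutoModel.from_pretrained('UFNLP/gatortron-base')

encoded_input = tokenizer("Bone scan: Negative for distant metastasis.", return_tensors="pt")
with torch.no_grad():
    output = model(**encoded_input)

# mask-aware mean pooling over the token dimension
mask = encoded_input['attention_mask'].unsqueeze(-1)                 # [batch, seq_len, 1]
embedding = (output.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)
print(embedding.shape)  # [1, hidden_size]
```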
- An NLP package using GatorTron for clinical concept extraction (Named Entity Recognition): https://github.com/uf-hobi-informatics-lab/ClinicalTransformerNER
- An NLP package using GatorTron for Relation Extraction: https://github.com/uf-hobi-informatics-lab/ClinicalTransformerRelationExtraction
- An NLP package using GatorTron for extraction of social determinants of health (SDoH) from clinical narratives: https://github.com/uf-hobi-informatics-lab/SDoH_SODA
<h2>De-identification</h2>
We applied a de-identification system to remove protected health information (PHI) from clinical text. We adopted the safe-harbor method to identify 18 PHI categories defined in the Health Insurance Portability and Accountability Act (HIPAA) and replaced them with dummy strings (e.g., replacing people's names with [\*\*NAME\*\*]).
The de-identifiation system is described in:
Yang X, Lyu T, Li Q, Lee C-Y, Bian J, Hogan WR, Wu Y†. A study of deep learning methods for de-identification of clinical notes in cross-institute settings. BMC Med Inform Decis Mak. 2020 Dec 5;19(5):232. https://www.ncbi.nlm.nih.gov/pubmed/31801524.
<h2>Citation info</h2>
Yang X, Chen A, PourNejatian N, Shin HC, Smith KE, Parisien C, Compas C, Martin C, Costa AB, Flores MG, Zhang Y, Magoc T, Harle CA, Lipori G, Mitchell DA, Hogan WR, Shenkman EA, Bian J, Wu Y†. A large language model for electronic health records. npj Digit Med. Nature Publishing Group; 2022 Dec 26;5(1):1–9. https://www.nature.com/articles/s41746-022-00742-2
- BibTeX entry
```
@article{yang2022large,
title={A large language model for electronic health records},
author={Yang, Xi and Chen, Aokun and PourNejatian, Nima and Shin, Hoo Chang and Smith, Kaleb E and Parisien, Christopher and Compas, Colin and Martin, Cheryl and Costa, Anthony B and Flores, Mona G and Zhang, Ying and Magoc, Tanja and Harle, Christopher A and Lipori, Gloria and Mitchell, Duane A and Hogan, William R and Shenkman, Elizabeth A and Bian, Jiang and Wu, Yonghui },
journal={npj Digital Medicine},
volume={5},
number={1},
pages={194},
year={2022},
publisher={Nature Publishing Group UK London}
}
```
<h2>Contact</h2>
- Yonghui Wu: yonghui.wu@ufl.edu
- Cheng Peng: c.peng@ufl.edu |
timm/levit_128.fb_dist_in1k | timm | "2024-02-10T23:30:34Z" | 75,850 | 1 | timm | [
"timm",
"pytorch",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2104.01136",
"license:apache-2.0",
"region:us"
] | image-classification | "2023-02-03T21:13:15Z" | ---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
---
# Model card for levit_128.fb_dist_in1k
A LeViT image classification model using convolutional mode (nn.Conv2d and nn.BatchNorm2d). Pretrained on ImageNet-1k using distillation by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 9.2
- GMACs: 0.4
- Activations (M): 2.7
- Image size: 224 x 224
- **Papers:**
- LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference: https://arxiv.org/abs/2104.01136
- **Original:** https://github.com/facebookresearch/LeViT
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(
urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))
model = timm.create_model('levit_128.fb_dist_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(
urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))
model = timm.create_model(
'levit_128.fb_dist_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (batch_size, num_features, H, W) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is (batch_size, num_features) tensor
```
## Model Comparison
|model |top1 |top5 |param_count|img_size|
|-----------------------------------|------|------|-----------|--------|
|levit_384.fb_dist_in1k |82.596|96.012|39.13 |224 |
|levit_conv_384.fb_dist_in1k |82.596|96.012|39.13 |224 |
|levit_256.fb_dist_in1k |81.512|95.48 |18.89 |224 |
|levit_conv_256.fb_dist_in1k |81.512|95.48 |18.89 |224 |
|levit_conv_192.fb_dist_in1k |79.86 |94.792|10.95 |224 |
|levit_192.fb_dist_in1k |79.858|94.792|10.95 |224 |
|levit_128.fb_dist_in1k |78.474|94.014|9.21 |224 |
|levit_conv_128.fb_dist_in1k |78.474|94.02 |9.21 |224 |
|levit_128s.fb_dist_in1k |76.534|92.864|7.78 |224 |
|levit_conv_128s.fb_dist_in1k |76.532|92.864|7.78 |224 |
## Citation
```bibtex
@InProceedings{Graham_2021_ICCV,
author = {Graham, Benjamin and El-Nouby, Alaaeldin and Touvron, Hugo and Stock, Pierre and Joulin, Armand and Jegou, Herve and Douze, Matthijs},
title = {LeViT: A Vision Transformer in ConvNet's Clothing for Faster Inference},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2021},
pages = {12259-12269}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/rwightman/pytorch-image-models}}
}
```
|
timm/eca_botnext26ts_256.c1_in1k | timm | "2023-04-26T16:09:31Z" | 75,782 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2110.00476",
"arxiv:2101.11605",
"license:apache-2.0",
"region:us"
] | image-classification | "2023-04-26T16:09:12Z" | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for eca_botnext26ts_256.c1_in1k
A BotNet image classification model (with Efficient channel attention, based on ResNeXt architecture). Trained on ImageNet-1k in `timm` by Ross Wightman.
NOTE: this model did not adhere to any specific paper configuration; it was tuned for reasonable training times and a reduced frequency of self-attention blocks.
Recipe details:
* Based on [ResNet Strikes Back](https://arxiv.org/abs/2110.00476) `C` recipes
* SGD (w/ Nesterov) optimizer and AGC (adaptive gradient clipping).
* Cosine LR schedule with warmup
This model architecture is implemented using `timm`'s flexible [BYOBNet (Bring-Your-Own-Blocks Network)](https://github.com/huggingface/pytorch-image-models/blob/main/timm/models/byobnet.py).
BYOB (with BYOANet attention specific blocks) allows configuration of:
* block / stage layout
* block-type interleaving
* stem layout
* output stride (dilation)
* activation and norm layers
* channel and spatial / self-attention layers
...and also includes `timm` features common to many other architectures, including (see the sketch after this list):
* stochastic depth
* gradient checkpointing
* layer-wise LR decay
* per-stage feature extraction
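A short sketch of these options as they are typically passed through `timm.create_model` (argument values are illustrative, not a recommended configuration):
```python
import timm

# stochastic depth via drop_path_rate, gradient checkpointing to save memory
model = timm.create_model(
    'eca_botnext26ts_256.c1_in1k',
    pretrained=True,
    drop_path_rate=0.1,
)
model.set_grad_checkpointing(True)

# per-stage feature extraction, keeping only the last three stages
feat_model = timm.create_model(
    'eca_botnext26ts_256.c1_in1k',
    pretrained=True,
    features_only=True,
    out_indices=(2, 3, 4),
)
print(feat_model.feature_info.channels())
```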
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 10.6
- GMACs: 2.5
- Activations (M): 11.6
- Image size: 256 x 256
- **Papers:**
- Bottleneck Transformers for Visual Recognition: https://arxiv.org/abs/2101.11605
- ResNet strikes back: An improved training procedure in timm: https://arxiv.org/abs/2110.00476
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('eca_botnext26ts_256.c1_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'eca_botnext26ts_256.c1_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 128, 128])
# torch.Size([1, 256, 64, 64])
# torch.Size([1, 512, 32, 32])
# torch.Size([1, 1024, 16, 16])
# torch.Size([1, 2048, 8, 8])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'eca_botnext26ts_256.c1_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2048, 8, 8) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@article{Srinivas2021BottleneckTF,
title={Bottleneck Transformers for Visual Recognition},
author={A. Srinivas and Tsung-Yi Lin and Niki Parmar and Jonathon Shlens and P. Abbeel and Ashish Vaswani},
journal={2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2021},
pages={16514-16524}
}
```
```bibtex
@inproceedings{wightman2021resnet,
title={ResNet strikes back: An improved training procedure in timm},
author={Wightman, Ross and Touvron, Hugo and Jegou, Herve},
booktitle={NeurIPS 2021 Workshop on ImageNet: Past, Present, and Future}
}
```
|
timm/gmixer_24_224.ra3_in1k | timm | "2024-02-10T23:36:15Z" | 75,755 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"license:apache-2.0",
"region:us"
] | image-classification | "2023-03-27T23:00:47Z" | ---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
---
# Model card for gmixer_24_224.ra3_in1k
A G-Mixer image classification model. Trained on ImageNet-1k in `timm` by Ross Wightman. This is a custom `timm` model variant based on MLP-Mixer but using SwiGLU.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 24.7
- GMACs: 5.3
- Activations (M): 14.5
- Image size: 224 x 224
- **Papers:**
- **Original:** https://github.com/huggingface/pytorch-image-models
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('gmixer_24_224.ra3_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'gmixer_24_224.ra3_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 196, 384) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
sergeyzh/rubert-tiny-turbo | sergeyzh | "2024-07-31T19:39:14Z" | 75,696 | 17 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"russian",
"pretraining",
"embeddings",
"tiny",
"sentence-similarity",
"transformers",
"mteb",
"ru",
"dataset:IlyaGusev/gazeta",
"dataset:zloelias/lenta-ru",
"base_model:cointegrated/rubert-tiny2",
"base_model:finetune:cointegrated/rubert-tiny2",
"license:mit",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2024-06-21T12:13:23Z" | ---
language:
- ru
pipeline_tag: sentence-similarity
tags:
- russian
- pretraining
- embeddings
- tiny
- feature-extraction
- sentence-similarity
- sentence-transformers
- transformers
- mteb
datasets:
- IlyaGusev/gazeta
- zloelias/lenta-ru
license: mit
base_model: cointegrated/rubert-tiny2
model-index:
- name: sergeyzh/rubert-tiny-turbo
results:
- dataset:
config: default
name: MTEB AILACasedocs (default)
revision: 4106e6bcc72e0698d714ea8b101355e3e238431a
split: test
type: mteb/AILA_casedocs
metrics:
- type: main_score
value: 7.432999999999999
- type: map_at_1
value: 0.604
- type: map_at_10
value: 3.8989999999999996
- type: map_at_100
value: 7.89
- type: map_at_1000
value: 8.417
- type: map_at_20
value: 5.007000000000001
- type: map_at_3
value: 2.688
- type: map_at_5
value: 3.0380000000000003
- type: mrr_at_1
value: 6.0
- type: mrr_at_10
value: 11.799999999999999
- type: mrr_at_100
value: 14.417998426795965
- type: mrr_at_1000
value: 14.474056627618499
- type: mrr_at_20
value: 13.017532467532467
- type: mrr_at_3
value: 10.333333333333334
- type: mrr_at_5
value: 10.733333333333333
- type: nauc_map_at_1000_diff1
value: -18.649405381116548
- type: nauc_map_at_1000_max
value: 53.92467833877199
- type: nauc_map_at_1000_std
value: -37.567628121407296
- type: nauc_map_at_100_diff1
value: -19.053926237591206
- type: nauc_map_at_100_max
value: 53.442907236002725
- type: nauc_map_at_100_std
value: -37.310817568902884
- type: nauc_map_at_10_diff1
value: -13.464050841785403
- type: nauc_map_at_10_max
value: 48.093886298979946
- type: nauc_map_at_10_std
value: -34.85388157835729
- type: nauc_map_at_1_diff1
value: -13.741863044507388
- type: nauc_map_at_1_max
value: 88.80266056441289
- type: nauc_map_at_1_std
value: -52.44805080502242
- type: nauc_map_at_20_diff1
value: -14.561491138058782
- type: nauc_map_at_20_max
value: 48.97477701904
- type: nauc_map_at_20_std
value: -31.218577996781537
- type: nauc_map_at_3_diff1
value: -15.370170931276068
- type: nauc_map_at_3_max
value: 53.443631887225486
- type: nauc_map_at_3_std
value: -40.92344513873499
- type: nauc_map_at_5_diff1
value: -12.899827975508286
- type: nauc_map_at_5_max
value: 56.55724779187716
- type: nauc_map_at_5_std
value: -38.50107328981899
- type: nauc_mrr_at_1000_diff1
value: -20.480388426956775
- type: nauc_mrr_at_1000_max
value: 59.34434186773745
- type: nauc_mrr_at_1000_std
value: -38.78219708358511
- type: nauc_mrr_at_100_diff1
value: -20.733217227513638
- type: nauc_mrr_at_100_max
value: 59.338571965753026
- type: nauc_mrr_at_100_std
value: -38.905241386083524
- type: nauc_mrr_at_10_diff1
value: -23.191503817950903
- type: nauc_mrr_at_10_max
value: 59.40585262343663
- type: nauc_mrr_at_10_std
value: -39.558082853802894
- type: nauc_mrr_at_1_diff1
value: -18.978624452195685
- type: nauc_mrr_at_1_max
value: 88.73088274751811
- type: nauc_mrr_at_1_std
value: -52.46400143099903
- type: nauc_mrr_at_20_diff1
value: -20.110327257289537
- type: nauc_mrr_at_20_max
value: 57.24590011894607
- type: nauc_mrr_at_20_std
value: -36.76057923211494
- type: nauc_mrr_at_3_diff1
value: -20.292924276357084
- type: nauc_mrr_at_3_max
value: 62.92624417852826
- type: nauc_mrr_at_3_std
value: -42.31284612573441
- type: nauc_mrr_at_5_diff1
value: -22.088780368608298
- type: nauc_mrr_at_5_max
value: 61.62928734634482
- type: nauc_mrr_at_5_std
value: -38.47155384792127
- type: nauc_ndcg_at_1000_diff1
value: -21.96644342707332
- type: nauc_ndcg_at_1000_max
value: 54.04115629470727
- type: nauc_ndcg_at_1000_std
value: -38.60954619686922
- type: nauc_ndcg_at_100_diff1
value: -28.508933576201116
- type: nauc_ndcg_at_100_max
value: 53.62925134001747
- type: nauc_ndcg_at_100_std
value: -41.66742945815351
- type: nauc_ndcg_at_10_diff1
value: -19.22314681419278
- type: nauc_ndcg_at_10_max
value: 44.88305374351992
- type: nauc_ndcg_at_10_std
value: -32.86086137849654
- type: nauc_ndcg_at_1_diff1
value: -18.978624452195685
- type: nauc_ndcg_at_1_max
value: 88.73088274751811
- type: nauc_ndcg_at_1_std
value: -52.46400143099903
- type: nauc_ndcg_at_20_diff1
value: -14.037813797353552
- type: nauc_ndcg_at_20_max
value: 43.01748289241327
- type: nauc_ndcg_at_20_std
value: -23.548077008049674
- type: nauc_ndcg_at_3_diff1
value: -19.9659903984576
- type: nauc_ndcg_at_3_max
value: 64.99817864354436
- type: nauc_ndcg_at_3_std
value: -45.246163550721796
- type: nauc_ndcg_at_5_diff1
value: -20.389688306447788
- type: nauc_ndcg_at_5_max
value: 61.370293646369454
- type: nauc_ndcg_at_5_std
value: -39.9134710853091
- type: nauc_precision_at_1000_diff1
value: -26.69952361901621
- type: nauc_precision_at_1000_max
value: 46.40932456102013
- type: nauc_precision_at_1000_std
value: -37.38094677778857
- type: nauc_precision_at_100_diff1
value: -29.692268260058146
- type: nauc_precision_at_100_max
value: 49.265913223173584
- type: nauc_precision_at_100_std
value: -41.45888232985447
- type: nauc_precision_at_10_diff1
value: -20.974428245377048
- type: nauc_precision_at_10_max
value: 53.924262890679564
- type: nauc_precision_at_10_std
value: -35.74456192649867
- type: nauc_precision_at_1_diff1
value: -18.978624452195685
- type: nauc_precision_at_1_max
value: 88.73088274751811
- type: nauc_precision_at_1_std
value: -52.46400143099903
- type: nauc_precision_at_20_diff1
value: -23.03848763224966
- type: nauc_precision_at_20_max
value: 51.19001778609016
- type: nauc_precision_at_20_std
value: -33.25265416139501
- type: nauc_precision_at_3_diff1
value: -19.497362250879267
- type: nauc_precision_at_3_max
value: 64.71277842907384
- type: nauc_precision_at_3_std
value: -44.512016412661204
- type: nauc_precision_at_5_diff1
value: -18.918918918918912
- type: nauc_precision_at_5_max
value: 64.89456489456494
- type: nauc_precision_at_5_std
value: -37.37960880818024
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_100_diff1
value: -44.51937508102329
- type: nauc_recall_at_100_max
value: 25.75429602376942
- type: nauc_recall_at_100_std
value: -33.30783195688129
- type: nauc_recall_at_10_diff1
value: -18.776401920240275
- type: nauc_recall_at_10_max
value: 23.00791681188562
- type: nauc_recall_at_10_std
value: -21.576198296256532
- type: nauc_recall_at_1_diff1
value: -13.741863044507388
- type: nauc_recall_at_1_max
value: 88.80266056441289
- type: nauc_recall_at_1_std
value: -52.44805080502242
- type: nauc_recall_at_20_diff1
value: -3.8724115673803343
- type: nauc_recall_at_20_max
value: 21.50124528790692
- type: nauc_recall_at_20_std
value: -1.6719812367243132
- type: nauc_recall_at_3_diff1
value: -20.21079163108882
- type: nauc_recall_at_3_max
value: 42.152167178196684
- type: nauc_recall_at_3_std
value: -36.258746145318526
- type: nauc_recall_at_5_diff1
value: -22.10269915203519
- type: nauc_recall_at_5_max
value: 43.30767031613079
- type: nauc_recall_at_5_std
value: -27.398704255640478
- type: ndcg_at_1
value: 6.0
- type: ndcg_at_10
value: 7.432999999999999
- type: ndcg_at_100
value: 26.354
- type: ndcg_at_1000
value: 30.558000000000003
- type: ndcg_at_20
value: 11.143
- type: ndcg_at_3
value: 7.979
- type: ndcg_at_5
value: 6.81
- type: precision_at_1
value: 6.0
- type: precision_at_10
value: 4.2
- type: precision_at_100
value: 3.1199999999999997
- type: precision_at_1000
value: 0.38999999999999996
- type: precision_at_20
value: 4.2
- type: precision_at_3
value: 8.0
- type: precision_at_5
value: 5.6000000000000005
- type: recall_at_1
value: 0.604
- type: recall_at_10
value: 9.678
- type: recall_at_100
value: 78.645
- type: recall_at_1000
value: 100.0
- type: recall_at_20
value: 20.79
- type: recall_at_3
value: 4.261
- type: recall_at_5
value: 5.011
task:
type: Retrieval
- dataset:
config: default
name: MTEB AILAStatutes (default)
revision: ebfcd844eadd3d667efa3c57fc5c8c87f5c2867e
split: test
type: mteb/AILA_statutes
metrics:
- type: main_score
value: 13.624
- type: map_at_1
value: 1.7999999999999998
- type: map_at_10
value: 6.41
- type: map_at_100
value: 11.995000000000001
- type: map_at_1000
value: 11.995000000000001
- type: map_at_20
value: 7.33
- type: map_at_3
value: 4.089
- type: map_at_5
value: 5.192
- type: mrr_at_1
value: 8.0
- type: mrr_at_10
value: 20.935714285714287
- type: mrr_at_100
value: 23.02755974294914
- type: mrr_at_1000
value: 23.02755974294914
- type: mrr_at_20
value: 22.1038126476207
- type: mrr_at_3
value: 15.333333333333332
- type: mrr_at_5
value: 19.533333333333335
- type: nauc_map_at_1000_diff1
value: 5.278882422253006
- type: nauc_map_at_1000_max
value: 3.7333073133608896
- type: nauc_map_at_1000_std
value: -4.5637189871999775
- type: nauc_map_at_100_diff1
value: 5.278882422253006
- type: nauc_map_at_100_max
value: 3.7333073133608896
- type: nauc_map_at_100_std
value: -4.5637189871999775
- type: nauc_map_at_10_diff1
value: 8.570212263630141
- type: nauc_map_at_10_max
value: -6.6489980060039295
- type: nauc_map_at_10_std
value: -12.162352126704402
- type: nauc_map_at_1_diff1
value: 7.476969859583216
- type: nauc_map_at_1_max
value: -26.629997316876853
- type: nauc_map_at_1_std
value: -23.469874489461308
- type: nauc_map_at_20_diff1
value: 7.222345063366828
- type: nauc_map_at_20_max
value: -2.5103197323267223
- type: nauc_map_at_20_std
value: -10.997015623527455
- type: nauc_map_at_3_diff1
value: 14.924734426277178
- type: nauc_map_at_3_max
value: -11.92937537932614
- type: nauc_map_at_3_std
value: -4.9319666083973255
- type: nauc_map_at_5_diff1
value: 8.080773945621521
- type: nauc_map_at_5_max
value: -3.8175754142607836
- type: nauc_map_at_5_std
value: -4.541639774033337
- type: nauc_mrr_at_1000_diff1
value: 2.4122089783406646
- type: nauc_mrr_at_1000_max
value: -15.876004562207497
- type: nauc_mrr_at_1000_std
value: -12.985028057822372
- type: nauc_mrr_at_100_diff1
value: 2.4122089783406646
- type: nauc_mrr_at_100_max
value: -15.876004562207497
- type: nauc_mrr_at_100_std
value: -12.985028057822372
- type: nauc_mrr_at_10_diff1
value: 0.2857311186354727
- type: nauc_mrr_at_10_max
value: -14.63697545190418
- type: nauc_mrr_at_10_std
value: -12.056570964159198
- type: nauc_mrr_at_1_diff1
value: 6.868795277703242
- type: nauc_mrr_at_1_max
value: -24.845720418567222
- type: nauc_mrr_at_1_std
value: -20.686879527770337
- type: nauc_mrr_at_20_diff1
value: 1.8452171261188577
- type: nauc_mrr_at_20_max
value: -15.538023663956924
- type: nauc_mrr_at_20_std
value: -13.690749771450164
- type: nauc_mrr_at_3_diff1
value: 10.557261573838256
- type: nauc_mrr_at_3_max
value: -20.946427791765498
- type: nauc_mrr_at_3_std
value: -9.815750025468983
- type: nauc_mrr_at_5_diff1
value: 4.101442020672411
- type: nauc_mrr_at_5_max
value: -14.963605604722682
- type: nauc_mrr_at_5_std
value: -9.917384084595511
- type: nauc_ndcg_at_1000_diff1
value: 0.04370368246080858
- type: nauc_ndcg_at_1000_max
value: -0.818088536466922
- type: nauc_ndcg_at_1000_std
value: -4.74569960455296
- type: nauc_ndcg_at_100_diff1
value: 0.04370368246080858
- type: nauc_ndcg_at_100_max
value: -0.818088536466922
- type: nauc_ndcg_at_100_std
value: -4.74569960455296
- type: nauc_ndcg_at_10_diff1
value: 1.2847289677534977
- type: nauc_ndcg_at_10_max
value: -6.3756503900224955
- type: nauc_ndcg_at_10_std
value: -12.98730478286347
- type: nauc_ndcg_at_1_diff1
value: 6.868795277703242
- type: nauc_ndcg_at_1_max
value: -24.845720418567222
- type: nauc_ndcg_at_1_std
value: -20.686879527770337
- type: nauc_ndcg_at_20_diff1
value: 0.777375339231765
- type: nauc_ndcg_at_20_max
value: -0.9649148688381876
- type: nauc_ndcg_at_20_std
value: -14.374528790697976
- type: nauc_ndcg_at_3_diff1
value: 11.34233767766492
- type: nauc_ndcg_at_3_max
value: -13.185097340604685
- type: nauc_ndcg_at_3_std
value: -1.42817114044502
- type: nauc_ndcg_at_5_diff1
value: 3.6861855424314394
- type: nauc_ndcg_at_5_max
value: -3.8049446945965877
- type: nauc_ndcg_at_5_std
value: -3.627047155464453
- type: nauc_precision_at_1000_diff1
value: -23.534146832293555
- type: nauc_precision_at_1000_max
value: 7.621521743107654
- type: nauc_precision_at_1000_std
value: 31.79231993560317
- type: nauc_precision_at_100_diff1
value: -23.534146832293136
- type: nauc_precision_at_100_max
value: 7.6215217431077615
- type: nauc_precision_at_100_std
value: 31.792319935603174
- type: nauc_precision_at_10_diff1
value: -9.295902835532825
- type: nauc_precision_at_10_max
value: -3.516562838357381
- type: nauc_precision_at_10_std
value: -9.542266229384722
- type: nauc_precision_at_1_diff1
value: 6.868795277703242
- type: nauc_precision_at_1_max
value: -24.845720418567222
- type: nauc_precision_at_1_std
value: -20.686879527770337
- type: nauc_precision_at_20_diff1
value: -9.74438544160727
- type: nauc_precision_at_20_max
value: 8.895012105242024
- type: nauc_precision_at_20_std
value: -10.653950589210957
- type: nauc_precision_at_3_diff1
value: 8.920936116382022
- type: nauc_precision_at_3_max
value: -10.246679316888065
- type: nauc_precision_at_3_std
value: 5.611638203668553
- type: nauc_precision_at_5_diff1
value: -8.265025821338345
- type: nauc_precision_at_5_max
value: 7.359630809801093
- type: nauc_precision_at_5_std
value: 7.003625975167535
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_100_diff1
value: .nan
- type: nauc_recall_at_100_max
value: .nan
- type: nauc_recall_at_100_std
value: .nan
- type: nauc_recall_at_10_diff1
value: -1.798034642140945
- type: nauc_recall_at_10_max
value: 0.6924952930762724
- type: nauc_recall_at_10_std
value: -13.706398349868037
- type: nauc_recall_at_1_diff1
value: 7.476969859583216
- type: nauc_recall_at_1_max
value: -26.629997316876853
- type: nauc_recall_at_1_std
value: -23.469874489461308
- type: nauc_recall_at_20_diff1
value: -2.659819202817919
- type: nauc_recall_at_20_max
value: 10.517274540935807
- type: nauc_recall_at_20_std
value: -14.235421011543991
- type: nauc_recall_at_3_diff1
value: 15.662853297442803
- type: nauc_recall_at_3_max
value: -11.663877606927189
- type: nauc_recall_at_3_std
value: -2.341470241427359
- type: nauc_recall_at_5_diff1
value: 2.273326115596832
- type: nauc_recall_at_5_max
value: 2.8669632025879537
- type: nauc_recall_at_5_std
value: -0.3450165007891684
- type: ndcg_at_1
value: 8.0
- type: ndcg_at_10
value: 13.624
- type: ndcg_at_100
value: 38.109
- type: ndcg_at_1000
value: 38.109
- type: ndcg_at_20
value: 16.907
- type: ndcg_at_3
value: 9.45
- type: ndcg_at_5
value: 10.598
- type: precision_at_1
value: 8.0
- type: precision_at_10
value: 7.3999999999999995
- type: precision_at_100
value: 4.34
- type: precision_at_1000
value: 0.434
- type: precision_at_20
value: 5.5
- type: precision_at_3
value: 10.0
- type: precision_at_5
value: 10.0
- type: recall_at_1
value: 1.7999999999999998
- type: recall_at_10
value: 18.333
- type: recall_at_100
value: 100.0
- type: recall_at_1000
value: 100.0
- type: recall_at_20
value: 26.333000000000002
- type: recall_at_3
value: 7.867
- type: recall_at_5
value: 12.333
task:
type: Retrieval
- dataset:
config: default
name: MTEB ARCChallenge (default)
revision: c481e0da3dcbbad8bce7721dea9085b74320a0a3
split: test
type: RAR-b/ARC-Challenge
metrics:
- type: main_score
value: 3.8449999999999998
- type: map_at_1
value: 1.536
- type: map_at_10
value: 2.902
- type: map_at_100
value: 3.2259999999999995
- type: map_at_1000
value: 3.309
- type: map_at_20
value: 3.061
- type: map_at_3
value: 2.204
- type: map_at_5
value: 2.656
- type: mrr_at_1
value: 1.5358361774744027
- type: mrr_at_10
value: 2.902107373097134
- type: mrr_at_100
value: 3.2259697277173585
- type: mrr_at_1000
value: 3.309141234079007
- type: mrr_at_20
value: 3.0608339226581975
- type: mrr_at_3
value: 2.204209328782707
- type: mrr_at_5
value: 2.6564277588168363
- type: nauc_map_at_1000_diff1
value: 6.6349335671175
- type: nauc_map_at_1000_max
value: 10.045752081479547
- type: nauc_map_at_1000_std
value: 5.17373675499246
- type: nauc_map_at_100_diff1
value: 6.6240618235225135
- type: nauc_map_at_100_max
value: 10.244151375429777
- type: nauc_map_at_100_std
value: 5.305639061848512
- type: nauc_map_at_10_diff1
value: 7.5024069352343
- type: nauc_map_at_10_max
value: 11.928684625428838
- type: nauc_map_at_10_std
value: 5.016380398843673
- type: nauc_map_at_1_diff1
value: 17.26912687174127
- type: nauc_map_at_1_max
value: 6.265273970269121
- type: nauc_map_at_1_std
value: -4.8796731336600825
- type: nauc_map_at_20_diff1
value: 7.120932496690847
- type: nauc_map_at_20_max
value: 11.15762860873897
- type: nauc_map_at_20_std
value: 5.342837705336892
- type: nauc_map_at_3_diff1
value: 7.138259469017607
- type: nauc_map_at_3_max
value: 8.348409228816523
- type: nauc_map_at_3_std
value: 6.767314043423357
- type: nauc_map_at_5_diff1
value: 7.239963996009633
- type: nauc_map_at_5_max
value: 11.068225118567208
- type: nauc_map_at_5_std
value: 5.0851302044955835
- type: nauc_mrr_at_1000_diff1
value: 6.6349335671175
- type: nauc_mrr_at_1000_max
value: 10.045752081479547
- type: nauc_mrr_at_1000_std
value: 5.17373675499246
- type: nauc_mrr_at_100_diff1
value: 6.6240618235225135
- type: nauc_mrr_at_100_max
value: 10.244151375429777
- type: nauc_mrr_at_100_std
value: 5.305639061848512
- type: nauc_mrr_at_10_diff1
value: 7.5024069352343
- type: nauc_mrr_at_10_max
value: 11.928684625428838
- type: nauc_mrr_at_10_std
value: 5.016380398843673
- type: nauc_mrr_at_1_diff1
value: 17.26912687174127
- type: nauc_mrr_at_1_max
value: 6.265273970269121
- type: nauc_mrr_at_1_std
value: -4.8796731336600825
- type: nauc_mrr_at_20_diff1
value: 7.120932496690847
- type: nauc_mrr_at_20_max
value: 11.15762860873897
- type: nauc_mrr_at_20_std
value: 5.342837705336892
- type: nauc_mrr_at_3_diff1
value: 7.138259469017607
- type: nauc_mrr_at_3_max
value: 8.348409228816523
- type: nauc_mrr_at_3_std
value: 6.767314043423357
- type: nauc_mrr_at_5_diff1
value: 7.239963996009633
- type: nauc_mrr_at_5_max
value: 11.068225118567208
- type: nauc_mrr_at_5_std
value: 5.0851302044955835
- type: nauc_ndcg_at_1000_diff1
value: 3.49547273108029
- type: nauc_ndcg_at_1000_max
value: 4.987679792326471
- type: nauc_ndcg_at_1000_std
value: 4.792386661474078
- type: nauc_ndcg_at_100_diff1
value: 3.423765430486521
- type: nauc_ndcg_at_100_max
value: 7.215346434617728
- type: nauc_ndcg_at_100_std
value: 6.1334416812657055
- type: nauc_ndcg_at_10_diff1
value: 6.211453661355799
- type: nauc_ndcg_at_10_max
value: 13.686949611790244
- type: nauc_ndcg_at_10_std
value: 5.334521959588366
- type: nauc_ndcg_at_1_diff1
value: 17.26912687174127
- type: nauc_ndcg_at_1_max
value: 6.265273970269121
- type: nauc_ndcg_at_1_std
value: -4.8796731336600825
- type: nauc_ndcg_at_20_diff1
value: 5.269692894653953
- type: nauc_ndcg_at_20_max
value: 11.466483119515134
- type: nauc_ndcg_at_20_std
value: 6.208531132010362
- type: nauc_ndcg_at_3_diff1
value: 4.841534563021528
- type: nauc_ndcg_at_3_max
value: 8.715299190678648
- type: nauc_ndcg_at_3_std
value: 8.889648909403514
- type: nauc_ndcg_at_5_diff1
value: 5.5149763431777385
- type: nauc_ndcg_at_5_max
value: 12.41579830649011
- type: nauc_ndcg_at_5_std
value: 5.8568738487427865
- type: nauc_precision_at_1000_diff1
value: 1.0890041942217588
- type: nauc_precision_at_1000_max
value: -1.074889035912781
- type: nauc_precision_at_1000_std
value: 3.7386321369399207
- type: nauc_precision_at_100_diff1
value: 0.24898034725209317
- type: nauc_precision_at_100_max
value: 2.6625432444853345
- type: nauc_precision_at_100_std
value: 6.760865885892171
- type: nauc_precision_at_10_diff1
value: 4.728605530960451
- type: nauc_precision_at_10_max
value: 16.098011324014156
- type: nauc_precision_at_10_std
value: 5.294918338481019
- type: nauc_precision_at_1_diff1
value: 17.26912687174127
- type: nauc_precision_at_1_max
value: 6.265273970269121
- type: nauc_precision_at_1_std
value: -4.8796731336600825
- type: nauc_precision_at_20_diff1
value: 3.1605384012118063
- type: nauc_precision_at_20_max
value: 11.228945826678288
- type: nauc_precision_at_20_std
value: 7.0587619686895975
- type: nauc_precision_at_3_diff1
value: 0.15384889210192554
- type: nauc_precision_at_3_max
value: 9.441612052649862
- type: nauc_precision_at_3_std
value: 13.110663421557597
- type: nauc_precision_at_5_diff1
value: 2.9177590765544803
- type: nauc_precision_at_5_max
value: 14.583883090410385
- type: nauc_precision_at_5_std
value: 6.761154902844139
- type: nauc_recall_at_1000_diff1
value: 1.0890041942217838
- type: nauc_recall_at_1000_max
value: -1.0748890359127414
- type: nauc_recall_at_1000_std
value: 3.7386321369399447
- type: nauc_recall_at_100_diff1
value: 0.2489803472520955
- type: nauc_recall_at_100_max
value: 2.6625432444853385
- type: nauc_recall_at_100_std
value: 6.7608658858921835
- type: nauc_recall_at_10_diff1
value: 4.728605530960435
- type: nauc_recall_at_10_max
value: 16.09801132401412
- type: nauc_recall_at_10_std
value: 5.294918338481006
- type: nauc_recall_at_1_diff1
value: 17.26912687174127
- type: nauc_recall_at_1_max
value: 6.265273970269121
- type: nauc_recall_at_1_std
value: -4.8796731336600825
- type: nauc_recall_at_20_diff1
value: 3.1605384012117814
- type: nauc_recall_at_20_max
value: 11.22894582667827
- type: nauc_recall_at_20_std
value: 7.0587619686895655
- type: nauc_recall_at_3_diff1
value: 0.15384889210195152
- type: nauc_recall_at_3_max
value: 9.441612052649868
- type: nauc_recall_at_3_std
value: 13.110663421557629
- type: nauc_recall_at_5_diff1
value: 2.917759076554466
- type: nauc_recall_at_5_max
value: 14.583883090410346
- type: nauc_recall_at_5_std
value: 6.761154902844119
- type: ndcg_at_1
value: 1.536
- type: ndcg_at_10
value: 3.8449999999999998
- type: ndcg_at_100
value: 5.772
- type: ndcg_at_1000
value: 8.509
- type: ndcg_at_20
value: 4.426
- type: ndcg_at_3
value: 2.447
- type: ndcg_at_5
value: 3.258
- type: precision_at_1
value: 1.536
- type: precision_at_10
value: 0.6910000000000001
- type: precision_at_100
value: 0.168
- type: precision_at_1000
value: 0.04
- type: precision_at_20
value: 0.461
- type: precision_at_3
value: 1.052
- type: precision_at_5
value: 1.024
- type: recall_at_1
value: 1.536
- type: recall_at_10
value: 6.9110000000000005
- type: recall_at_100
value: 16.808999999999997
- type: recall_at_1000
value: 39.505
- type: recall_at_20
value: 9.215
- type: recall_at_3
value: 3.157
- type: recall_at_5
value: 5.119
task:
type: Retrieval
- dataset:
config: default
name: MTEB AlphaNLI (default)
revision: 303f40ef3d50918d3dc43577d33f2f7344ad72c1
split: test
type: RAR-b/alphanli
metrics:
- type: main_score
value: 14.155000000000001
- type: map_at_1
value: 8.616
- type: map_at_10
value: 12.151
- type: map_at_100
value: 12.713
- type: map_at_1000
value: 12.790000000000001
- type: map_at_20
value: 12.478
- type: map_at_3
value: 10.955
- type: map_at_5
value: 11.68
- type: mrr_at_1
value: 8.616187989556137
- type: mrr_at_10
value: 12.151197728873969
- type: mrr_at_100
value: 12.713435989405935
- type: mrr_at_1000
value: 12.789534083463522
- type: mrr_at_20
value: 12.478389119397455
- type: mrr_at_3
value: 10.955178416013926
- type: mrr_at_5
value: 11.679721496953876
- type: nauc_map_at_1000_diff1
value: 38.986525912703435
- type: nauc_map_at_1000_max
value: 12.219692225747707
- type: nauc_map_at_1000_std
value: 1.2585343212684903
- type: nauc_map_at_100_diff1
value: 39.02868722054371
- type: nauc_map_at_100_max
value: 12.248003227250122
- type: nauc_map_at_100_std
value: 1.2163208553030314
- type: nauc_map_at_10_diff1
value: 40.110717683039525
- type: nauc_map_at_10_max
value: 12.78605835422205
- type: nauc_map_at_10_std
value: 0.6481692151906001
- type: nauc_map_at_1_diff1
value: 48.456097345786745
- type: nauc_map_at_1_max
value: 14.981869102701411
- type: nauc_map_at_1_std
value: -3.0707717911327226
- type: nauc_map_at_20_diff1
value: 39.42161381753684
- type: nauc_map_at_20_max
value: 12.341429085851182
- type: nauc_map_at_20_std
value: 0.8391480542456798
- type: nauc_map_at_3_diff1
value: 42.64699229741736
- type: nauc_map_at_3_max
value: 13.681396294884618
- type: nauc_map_at_3_std
value: -1.3518984290812146
- type: nauc_map_at_5_diff1
value: 41.32077190616691
- type: nauc_map_at_5_max
value: 13.136429689834436
- type: nauc_map_at_5_std
value: 0.32856286589434136
- type: nauc_mrr_at_1000_diff1
value: 38.98652591920884
- type: nauc_mrr_at_1000_max
value: 12.219692104355413
- type: nauc_mrr_at_1000_std
value: 1.2585339367622461
- type: nauc_mrr_at_100_diff1
value: 39.02868722054371
- type: nauc_mrr_at_100_max
value: 12.248003227250122
- type: nauc_mrr_at_100_std
value: 1.2163208553030314
- type: nauc_mrr_at_10_diff1
value: 40.110717683039525
- type: nauc_mrr_at_10_max
value: 12.78605835422205
- type: nauc_mrr_at_10_std
value: 0.6481692151906001
- type: nauc_mrr_at_1_diff1
value: 48.456097345786745
- type: nauc_mrr_at_1_max
value: 14.981869102701411
- type: nauc_mrr_at_1_std
value: -3.0707717911327226
- type: nauc_mrr_at_20_diff1
value: 39.42161381753684
- type: nauc_mrr_at_20_max
value: 12.341429085851182
- type: nauc_mrr_at_20_std
value: 0.8391480542456798
- type: nauc_mrr_at_3_diff1
value: 42.64699229741736
- type: nauc_mrr_at_3_max
value: 13.681396294884618
- type: nauc_mrr_at_3_std
value: -1.3518984290812146
- type: nauc_mrr_at_5_diff1
value: 41.32077190616691
- type: nauc_mrr_at_5_max
value: 13.136429689834436
- type: nauc_mrr_at_5_std
value: 0.32856286589434136
- type: nauc_ndcg_at_1000_diff1
value: 31.611075970442926
- type: nauc_ndcg_at_1000_max
value: 9.936393145930218
- type: nauc_ndcg_at_1000_std
value: 6.71067891152211
- type: nauc_ndcg_at_100_diff1
value: 32.58290081795884
- type: nauc_ndcg_at_100_max
value: 9.842659588765363
- type: nauc_ndcg_at_100_std
value: 5.498554329517975
- type: nauc_ndcg_at_10_diff1
value: 36.75293874754393
- type: nauc_ndcg_at_10_max
value: 11.803286140726776
- type: nauc_ndcg_at_10_std
value: 2.5976940855692074
- type: nauc_ndcg_at_1_diff1
value: 48.456097345786745
- type: nauc_ndcg_at_1_max
value: 14.981869102701411
- type: nauc_ndcg_at_1_std
value: -3.0707717911327226
- type: nauc_ndcg_at_20_diff1
value: 34.638144952713866
- type: nauc_ndcg_at_20_max
value: 10.449640737261305
- type: nauc_ndcg_at_20_std
value: 3.2195824007114675
- type: nauc_ndcg_at_3_diff1
value: 41.24511499401773
- type: nauc_ndcg_at_3_max
value: 13.384003644595388
- type: nauc_ndcg_at_3_std
value: -0.7628562047692254
- type: nauc_ndcg_at_5_diff1
value: 39.2155849544026
- type: nauc_ndcg_at_5_max
value: 12.577199638671265
- type: nauc_ndcg_at_5_std
value: 2.0185641778476127
- type: nauc_precision_at_1000_diff1
value: 11.879578040836442
- type: nauc_precision_at_1000_max
value: 5.358855936542234
- type: nauc_precision_at_1000_std
value: 23.471172109373907
- type: nauc_precision_at_100_diff1
value: 18.24569021314919
- type: nauc_precision_at_100_max
value: 4.309548949123852
- type: nauc_precision_at_100_std
value: 15.884619703445772
- type: nauc_precision_at_10_diff1
value: 29.512994402519226
- type: nauc_precision_at_10_max
value: 9.634695132770453
- type: nauc_precision_at_10_std
value: 6.795536654948908
- type: nauc_precision_at_1_diff1
value: 48.456097345786745
- type: nauc_precision_at_1_max
value: 14.981869102701411
- type: nauc_precision_at_1_std
value: -3.0707717911327226
- type: nauc_precision_at_20_diff1
value: 24.18871405534599
- type: nauc_precision_at_20_max
value: 6.090279031407053
- type: nauc_precision_at_20_std
value: 8.291882200513058
- type: nauc_precision_at_3_diff1
value: 37.926451300682054
- type: nauc_precision_at_3_max
value: 12.684618853985219
- type: nauc_precision_at_3_std
value: 0.6806740647349011
- type: nauc_precision_at_5_diff1
value: 34.550519136938384
- type: nauc_precision_at_5_max
value: 11.344674575354038
- type: nauc_precision_at_5_std
value: 5.985578706127787
- type: nauc_recall_at_1000_diff1
value: 11.879578040836519
- type: nauc_recall_at_1000_max
value: 5.358855936542304
- type: nauc_recall_at_1000_std
value: 23.47117210937398
- type: nauc_recall_at_100_diff1
value: 18.245690213149167
- type: nauc_recall_at_100_max
value: 4.3095489491238155
- type: nauc_recall_at_100_std
value: 15.88461970344576
- type: nauc_recall_at_10_diff1
value: 29.512994402519215
- type: nauc_recall_at_10_max
value: 9.634695132770442
- type: nauc_recall_at_10_std
value: 6.795536654948889
- type: nauc_recall_at_1_diff1
value: 48.456097345786745
- type: nauc_recall_at_1_max
value: 14.981869102701411
- type: nauc_recall_at_1_std
value: -3.0707717911327226
- type: nauc_recall_at_20_diff1
value: 24.188714055346
- type: nauc_recall_at_20_max
value: 6.09027903140705
- type: nauc_recall_at_20_std
value: 8.291882200513056
- type: nauc_recall_at_3_diff1
value: 37.92645130068206
- type: nauc_recall_at_3_max
value: 12.684618853985235
- type: nauc_recall_at_3_std
value: 0.6806740647349308
- type: nauc_recall_at_5_diff1
value: 34.55051913693838
- type: nauc_recall_at_5_max
value: 11.344674575354015
- type: nauc_recall_at_5_std
value: 5.985578706127789
- type: ndcg_at_1
value: 8.616
- type: ndcg_at_10
value: 14.155000000000001
- type: ndcg_at_100
value: 17.102
- type: ndcg_at_1000
value: 19.631
- type: ndcg_at_20
value: 15.344
- type: ndcg_at_3
value: 11.728
- type: ndcg_at_5
value: 13.025999999999998
- type: precision_at_1
value: 8.616
- type: precision_at_10
value: 2.056
- type: precision_at_100
value: 0.349
- type: precision_at_1000
value: 0.055999999999999994
- type: precision_at_20
value: 1.2630000000000001
- type: precision_at_3
value: 4.656
- type: precision_at_5
value: 3.42
- type: recall_at_1
value: 8.616
- type: recall_at_10
value: 20.561
- type: recall_at_100
value: 34.855999999999995
- type: recall_at_1000
value: 55.875
- type: recall_at_20
value: 25.261
- type: recall_at_3
value: 13.969000000000001
- type: recall_at_5
value: 17.102
task:
type: Retrieval
- dataset:
config: default
name: MTEB AmazonPolarityClassification (default)
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
split: test
type: mteb/amazon_polarity
metrics:
- type: accuracy
value: 68.359575
- type: ap
value: 63.04430514461716
- type: ap_weighted
value: 63.04430514461716
- type: f1
value: 68.12645282836293
- type: f1_weighted
value: 68.12645282836293
- type: main_score
value: 68.359575
task:
type: Classification
- dataset:
config: default
name: MTEB ArguAna (default)
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
split: test
type: mteb/arguana
metrics:
- type: main_score
value: 32.031
- type: map_at_1
value: 15.363
- type: map_at_10
value: 25.629999999999995
- type: map_at_100
value: 26.851999999999997
- type: map_at_1000
value: 26.916
- type: map_at_20
value: 26.401999999999997
- type: map_at_3
value: 21.764
- type: map_at_5
value: 23.798
- type: mrr_at_1
value: 15.647226173541965
- type: mrr_at_10
value: 25.74270699270699
- type: mrr_at_100
value: 26.95759156481371
- type: mrr_at_1000
value: 27.02192945787223
- type: mrr_at_20
value: 26.50752832488611
- type: mrr_at_3
value: 21.894262683736372
- type: mrr_at_5
value: 23.889284020862938
- type: nauc_map_at_1000_diff1
value: 9.717094498857836
- type: nauc_map_at_1000_max
value: 0.006128824635771366
- type: nauc_map_at_1000_std
value: 9.951724867994008
- type: nauc_map_at_100_diff1
value: 9.720746167116648
- type: nauc_map_at_100_max
value: 0.03921480687966482
- type: nauc_map_at_100_std
value: 10.01422840642898
- type: nauc_map_at_10_diff1
value: 9.629884802439925
- type: nauc_map_at_10_max
value: -0.18895622006721804
- type: nauc_map_at_10_std
value: 8.801754758016564
- type: nauc_map_at_1_diff1
value: 10.255415606776134
- type: nauc_map_at_1_max
value: -2.7429221309654044
- type: nauc_map_at_1_std
value: 6.866297123270523
- type: nauc_map_at_20_diff1
value: 9.707948736975794
- type: nauc_map_at_20_max
value: 0.01892213753638095
- type: nauc_map_at_20_std
value: 9.681790764357237
- type: nauc_map_at_3_diff1
value: 8.344213156710568
- type: nauc_map_at_3_max
value: -2.0132121856529483
- type: nauc_map_at_3_std
value: 8.554071405515435
- type: nauc_map_at_5_diff1
value: 9.14495583661473
- type: nauc_map_at_5_max
value: -1.379873148644914
- type: nauc_map_at_5_std
value: 9.044652095982553
- type: nauc_mrr_at_1000_diff1
value: 8.520276824384093
- type: nauc_mrr_at_1000_max
value: -0.41053299382643904
- type: nauc_mrr_at_1000_std
value: 9.770616411797125
- type: nauc_mrr_at_100_diff1
value: 8.526357726757498
- type: nauc_mrr_at_100_max
value: -0.37675957362198204
- type: nauc_mrr_at_100_std
value: 9.833172972935825
- type: nauc_mrr_at_10_diff1
value: 8.504469942302443
- type: nauc_mrr_at_10_max
value: -0.5555290478828475
- type: nauc_mrr_at_10_std
value: 8.67347986151777
- type: nauc_mrr_at_1_diff1
value: 8.924965691375194
- type: nauc_mrr_at_1_max
value: -2.472212128016505
- type: nauc_mrr_at_1_std
value: 6.727737069169365
- type: nauc_mrr_at_20_diff1
value: 8.527008337552795
- type: nauc_mrr_at_20_max
value: -0.39130673567011953
- type: nauc_mrr_at_20_std
value: 9.504234612175194
- type: nauc_mrr_at_3_diff1
value: 7.028185998793612
- type: nauc_mrr_at_3_max
value: -2.531551924396665
- type: nauc_mrr_at_3_std
value: 8.36654956798548
- type: nauc_mrr_at_5_diff1
value: 7.946200662893088
- type: nauc_mrr_at_5_max
value: -1.8450232157342275
- type: nauc_mrr_at_5_std
value: 8.855536533297968
- type: nauc_ndcg_at_1000_diff1
value: 10.148046270962398
- type: nauc_ndcg_at_1000_max
value: 1.696424601847897
- type: nauc_ndcg_at_1000_std
value: 13.134595506556405
- type: nauc_ndcg_at_100_diff1
value: 10.478061817612778
- type: nauc_ndcg_at_100_max
value: 2.790758084465661
- type: nauc_ndcg_at_100_std
value: 14.964733623242607
- type: nauc_ndcg_at_10_diff1
value: 10.372927964606154
- type: nauc_ndcg_at_10_max
value: 1.9588405301435734
- type: nauc_ndcg_at_10_std
value: 9.558148538160015
- type: nauc_ndcg_at_1_diff1
value: 10.255415606776134
- type: nauc_ndcg_at_1_max
value: -2.7429221309654044
- type: nauc_ndcg_at_1_std
value: 6.866297123270523
- type: nauc_ndcg_at_20_diff1
value: 10.807055510827903
- type: nauc_ndcg_at_20_max
value: 2.873981784514884
- type: nauc_ndcg_at_20_std
value: 12.684265114648849
- type: nauc_ndcg_at_3_diff1
value: 7.99043332908002
- type: nauc_ndcg_at_3_max
value: -1.7537467389545258
- type: nauc_ndcg_at_3_std
value: 9.282365459725794
- type: nauc_ndcg_at_5_diff1
value: 9.291919447241343
- type: nauc_ndcg_at_5_max
value: -0.6986840661830845
- type: nauc_ndcg_at_5_std
value: 10.155119795280289
- type: nauc_precision_at_1000_diff1
value: 5.534567864242971
- type: nauc_precision_at_1000_max
value: 9.529106078051697
- type: nauc_precision_at_1000_std
value: 62.0873447350283
- type: nauc_precision_at_100_diff1
value: 13.636774071684679
- type: nauc_precision_at_100_max
value: 17.905397264353912
- type: nauc_precision_at_100_std
value: 49.22170039944941
- type: nauc_precision_at_10_diff1
value: 12.676219389202528
- type: nauc_precision_at_10_max
value: 8.164707652448252
- type: nauc_precision_at_10_std
value: 11.361740427515855
- type: nauc_precision_at_1_diff1
value: 10.255415606776134
- type: nauc_precision_at_1_max
value: -2.7429221309654044
- type: nauc_precision_at_1_std
value: 6.866297123270523
- type: nauc_precision_at_20_diff1
value: 15.006293628353006
- type: nauc_precision_at_20_max
value: 12.931321039045368
- type: nauc_precision_at_20_std
value: 23.758750045585586
- type: nauc_precision_at_3_diff1
value: 7.18325478518931
- type: nauc_precision_at_3_max
value: -1.1161637595134446
- type: nauc_precision_at_3_std
value: 11.09645301286272
- type: nauc_precision_at_5_diff1
value: 9.780765614595015
- type: nauc_precision_at_5_max
value: 1.0082157901430149
- type: nauc_precision_at_5_std
value: 12.92929121494741
- type: nauc_recall_at_1000_diff1
value: 5.534567864242688
- type: nauc_recall_at_1000_max
value: 9.529106078051411
- type: nauc_recall_at_1000_std
value: 62.08734473502826
- type: nauc_recall_at_100_diff1
value: 13.63677407168474
- type: nauc_recall_at_100_max
value: 17.905397264353898
- type: nauc_recall_at_100_std
value: 49.2217003994493
- type: nauc_recall_at_10_diff1
value: 12.676219389202512
- type: nauc_recall_at_10_max
value: 8.164707652448225
- type: nauc_recall_at_10_std
value: 11.361740427515835
- type: nauc_recall_at_1_diff1
value: 10.255415606776134
- type: nauc_recall_at_1_max
value: -2.7429221309654044
- type: nauc_recall_at_1_std
value: 6.866297123270523
- type: nauc_recall_at_20_diff1
value: 15.006293628353069
- type: nauc_recall_at_20_max
value: 12.931321039045434
- type: nauc_recall_at_20_std
value: 23.75875004558557
- type: nauc_recall_at_3_diff1
value: 7.183254785189315
- type: nauc_recall_at_3_max
value: -1.1161637595134306
- type: nauc_recall_at_3_std
value: 11.096453012862733
- type: nauc_recall_at_5_diff1
value: 9.780765614595012
- type: nauc_recall_at_5_max
value: 1.008215790143006
- type: nauc_recall_at_5_std
value: 12.929291214947403
- type: ndcg_at_1
value: 15.363
- type: ndcg_at_10
value: 32.031
- type: ndcg_at_100
value: 38.122
- type: ndcg_at_1000
value: 39.864
- type: ndcg_at_20
value: 34.849999999999994
- type: ndcg_at_3
value: 23.965
- type: ndcg_at_5
value: 27.659
- type: precision_at_1
value: 15.363
- type: precision_at_10
value: 5.277
- type: precision_at_100
value: 0.8170000000000001
- type: precision_at_1000
value: 0.095
- type: precision_at_20
value: 3.197
- type: precision_at_3
value: 10.123
- type: precision_at_5
value: 7.881
- type: recall_at_1
value: 15.363
- type: recall_at_10
value: 52.774
- type: recall_at_100
value: 81.65
- type: recall_at_1000
value: 95.448
- type: recall_at_20
value: 63.94
- type: recall_at_3
value: 30.37
- type: recall_at_5
value: 39.403
task:
type: Retrieval
- dataset:
config: default
name: MTEB ArxivClassification (default)
revision: f9bd92144ed76200d6eb3ce73a8bd4eba9ffdc85
split: test
type: ccdv/arxiv-classification
metrics:
- type: accuracy
value: 43.611999999999995
- type: f1
value: 40.930383763906484
- type: f1_weighted
value: 41.404367816744276
- type: main_score
value: 43.611999999999995
task:
type: Classification
- dataset:
config: default
name: MTEB ArxivClusteringP2P (default)
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
split: test
type: mteb/arxiv-clustering-p2p
metrics:
- type: main_score
value: 24.827354215343842
- type: v_measure
value: 24.827354215343842
- type: v_measure_std
value: 14.761042346861815
task:
type: Clustering
- dataset:
config: default
name: MTEB ArxivClusteringP2P.v2 (default)
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
split: test
type: mteb/arxiv-clustering-p2p
metrics:
- type: main_score
value: 29.14326814807588
- type: v_measure
value: 29.14326814807588
- type: v_measure_std
value: 16.354623518770328
task:
type: Clustering
- dataset:
config: default
name: MTEB ArxivClusteringS2S (default)
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
split: test
type: mteb/arxiv-clustering-s2s
metrics:
- type: main_score
value: 16.681456170594032
- type: v_measure
value: 16.681456170594032
- type: v_measure_std
value: 15.806408628434077
task:
type: Clustering
- dataset:
config: default
name: MTEB Banking77Classification (default)
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
split: test
type: mteb/banking77
metrics:
- type: accuracy
value: 59.86363636363635
- type: f1
value: 58.3300719763065
- type: f1_weighted
value: 58.3300719763065
- type: main_score
value: 59.86363636363635
task:
type: Classification
- dataset:
config: default
name: MTEB BigPatentClustering (default)
revision: 62d5330920bca426ce9d3c76ea914f15fc83e891
split: test
type: jinaai/big-patent-clustering
metrics:
- type: main_score
value: 17.208517091148714
- type: v_measure
value: 17.208517091148714
- type: v_measure_std
value: 0.698644666463382
task:
type: Clustering
- dataset:
config: default
name: MTEB BiorxivClusteringP2P (default)
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
split: test
type: mteb/biorxiv-clustering-p2p
metrics:
- type: main_score
value: 19.998032819841395
- type: v_measure
value: 19.998032819841395
- type: v_measure_std
value: 0.7272995954630507
task:
type: Clustering
- dataset:
config: default
name: MTEB BiorxivClusteringS2S (default)
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
split: test
type: mteb/biorxiv-clustering-s2s
metrics:
- type: main_score
value: 12.672050490076508
- type: v_measure
value: 12.672050490076508
- type: v_measure_std
value: 0.7252965151579489
task:
type: Clustering
- dataset:
config: default
name: MTEB CEDRClassification (default)
revision: c0ba03d058e3e1b2f3fd20518875a4563dd12db4
split: test
type: ai-forever/cedr-classification
metrics:
- type: accuracy
value: 38.95324123273113
- type: f1
value: 30.695742042129776
- type: lrap
value: 64.53134962805646
- type: main_score
value: 38.95324123273113
task:
type: MultilabelClassification
- dataset:
config: default
name: MTEB CPUSpeedTask (default)
revision: '1.0'
split: test
    type: CPUSpeedTask
metrics:
- type: avg_words_per_sec
value: 1171249.8059068616
- type: main_score
value: 1171249.8059068616
- type: physical_cores
value: 3600
- type: time_mean
value: 31.018148149762837
- type: time_std
value: 10.887230129351211
- type: total_cores
value: 7200
task:
type: Speed
- dataset:
config: default
name: MTEB CQADupstackAndroidRetrieval (default)
revision: f46a197baaae43b4f621051089b82a364682dfeb
split: test
type: mteb/cqadupstack-android
metrics:
- type: main_score
value: 27.686
- type: map_at_1
value: 17.864
- type: map_at_10
value: 23.842
- type: map_at_100
value: 24.648999999999997
- type: map_at_1000
value: 24.771
- type: map_at_20
value: 24.277
- type: map_at_3
value: 21.938
- type: map_at_5
value: 23.058999999999997
- type: mrr_at_1
value: 21.888412017167383
- type: mrr_at_10
value: 27.934691282330764
- type: mrr_at_100
value: 28.58815942555481
- type: mrr_at_1000
value: 28.669575168001604
- type: mrr_at_20
value: 28.259041893075693
- type: mrr_at_3
value: 25.96566523605151
- type: mrr_at_5
value: 27.145922746781114
- type: nauc_map_at_1000_diff1
value: 38.9362657863528
- type: nauc_map_at_1000_max
value: 26.39064664437522
- type: nauc_map_at_1000_std
value: -0.3507878980807277
- type: nauc_map_at_100_diff1
value: 38.9305380779697
- type: nauc_map_at_100_max
value: 26.37667481671251
- type: nauc_map_at_100_std
value: -0.4107785241043359
- type: nauc_map_at_10_diff1
value: 38.90352635552967
- type: nauc_map_at_10_max
value: 26.04843561328241
- type: nauc_map_at_10_std
value: -1.0213929777227249
- type: nauc_map_at_1_diff1
value: 44.891250111700664
- type: nauc_map_at_1_max
value: 27.415379429330695
- type: nauc_map_at_1_std
value: -2.083016588225919
- type: nauc_map_at_20_diff1
value: 38.94728598104626
- type: nauc_map_at_20_max
value: 26.321985371933916
- type: nauc_map_at_20_std
value: -0.6740389120283213
- type: nauc_map_at_3_diff1
value: 40.75408309900131
- type: nauc_map_at_3_max
value: 26.81466083992981
- type: nauc_map_at_3_std
value: -1.3446416472047542
- type: nauc_map_at_5_diff1
value: 39.55391899732806
- type: nauc_map_at_5_max
value: 26.73952942989369
- type: nauc_map_at_5_std
value: -0.9241166864360354
- type: nauc_mrr_at_1000_diff1
value: 37.49322259212407
- type: nauc_mrr_at_1000_max
value: 26.791861376982645
- type: nauc_mrr_at_1000_std
value: -0.12058632966589165
- type: nauc_mrr_at_100_diff1
value: 37.47912707778518
- type: nauc_mrr_at_100_max
value: 26.780040228801354
- type: nauc_mrr_at_100_std
value: -0.13375233513915044
- type: nauc_mrr_at_10_diff1
value: 37.44982182358103
- type: nauc_mrr_at_10_max
value: 26.579194370161574
- type: nauc_mrr_at_10_std
value: -0.5519796223426987
- type: nauc_mrr_at_1_diff1
value: 43.78241372037574
- type: nauc_mrr_at_1_max
value: 29.62575208874629
- type: nauc_mrr_at_1_std
value: -0.7403872780711277
- type: nauc_mrr_at_20_diff1
value: 37.413002156119
- type: nauc_mrr_at_20_max
value: 26.71157844066263
- type: nauc_mrr_at_20_std
value: -0.3418018168926074
- type: nauc_mrr_at_3_diff1
value: 39.36718212836755
- type: nauc_mrr_at_3_max
value: 27.755919798148643
- type: nauc_mrr_at_3_std
value: -0.5118015715447669
- type: nauc_mrr_at_5_diff1
value: 38.108343388995614
- type: nauc_mrr_at_5_max
value: 27.255156457755536
- type: nauc_mrr_at_5_std
value: -0.33152296202161974
- type: nauc_ndcg_at_1000_diff1
value: 35.45874849790142
- type: nauc_ndcg_at_1000_max
value: 26.06624958789977
- type: nauc_ndcg_at_1000_std
value: 2.8510315350747746
- type: nauc_ndcg_at_100_diff1
value: 35.22563491603818
- type: nauc_ndcg_at_100_max
value: 25.482125642505167
- type: nauc_ndcg_at_100_std
value: 1.7230614371120136
- type: nauc_ndcg_at_10_diff1
value: 35.442027092978336
- type: nauc_ndcg_at_10_max
value: 24.43872310681677
- type: nauc_ndcg_at_10_std
value: -0.8836727526012238
- type: nauc_ndcg_at_1_diff1
value: 43.78241372037574
- type: nauc_ndcg_at_1_max
value: 29.62575208874629
- type: nauc_ndcg_at_1_std
value: -0.7403872780711277
- type: nauc_ndcg_at_20_diff1
value: 35.532620958116226
- type: nauc_ndcg_at_20_max
value: 24.9995407161472
- type: nauc_ndcg_at_20_std
value: 0.09407090543637946
- type: nauc_ndcg_at_3_diff1
value: 38.771875097129474
- type: nauc_ndcg_at_3_max
value: 26.88398760762366
- type: nauc_ndcg_at_3_std
value: -0.7925347887124169
- type: nauc_ndcg_at_5_diff1
value: 36.83295698854961
- type: nauc_ndcg_at_5_max
value: 26.254070953306602
- type: nauc_ndcg_at_5_std
value: -0.5384138224839687
- type: nauc_precision_at_1000_diff1
value: 3.830797202509721
- type: nauc_precision_at_1000_max
value: 11.845342201460761
- type: nauc_precision_at_1000_std
value: 9.148785863457954
- type: nauc_precision_at_100_diff1
value: 13.997075774954821
- type: nauc_precision_at_100_max
value: 21.8795221100872
- type: nauc_precision_at_100_std
value: 8.373324931296871
- type: nauc_precision_at_10_diff1
value: 22.14226604167402
- type: nauc_precision_at_10_max
value: 21.908333662820144
- type: nauc_precision_at_10_std
value: 2.023219601124639
- type: nauc_precision_at_1_diff1
value: 43.78241372037574
- type: nauc_precision_at_1_max
value: 29.62575208874629
- type: nauc_precision_at_1_std
value: -0.7403872780711277
- type: nauc_precision_at_20_diff1
value: 20.193510781013575
- type: nauc_precision_at_20_max
value: 21.47063363375231
- type: nauc_precision_at_20_std
value: 5.073093391207243
- type: nauc_precision_at_3_diff1
value: 33.320150724486965
- type: nauc_precision_at_3_max
value: 28.42063777288856
- type: nauc_precision_at_3_std
value: 1.3535730617388522
- type: nauc_precision_at_5_diff1
value: 26.972979755151126
- type: nauc_precision_at_5_max
value: 27.35114981308005
- type: nauc_precision_at_5_std
value: 1.5457768965552783
- type: nauc_recall_at_1000_diff1
value: 19.86231350512352
- type: nauc_recall_at_1000_max
value: 24.527676453832008
- type: nauc_recall_at_1000_std
value: 22.21772883429467
- type: nauc_recall_at_100_diff1
value: 23.132801377646004
- type: nauc_recall_at_100_max
value: 20.988835029134467
- type: nauc_recall_at_100_std
value: 8.793975445583824
- type: nauc_recall_at_10_diff1
value: 25.796766681233457
- type: nauc_recall_at_10_max
value: 17.634361086885264
- type: nauc_recall_at_10_std
value: -0.4776257668185774
- type: nauc_recall_at_1_diff1
value: 44.891250111700664
- type: nauc_recall_at_1_max
value: 27.415379429330695
- type: nauc_recall_at_1_std
value: -2.083016588225919
- type: nauc_recall_at_20_diff1
value: 25.714655008602115
- type: nauc_recall_at_20_max
value: 19.791963050086874
- type: nauc_recall_at_20_std
value: 1.9596491600238453
- type: nauc_recall_at_3_diff1
value: 34.63094367351514
- type: nauc_recall_at_3_max
value: 23.49028309758934
- type: nauc_recall_at_3_std
value: -0.8832533681499335
- type: nauc_recall_at_5_diff1
value: 30.296413916201175
- type: nauc_recall_at_5_max
value: 22.27559868081795
- type: nauc_recall_at_5_std
value: 0.7320693658757037
- type: ndcg_at_1
value: 21.887999999999998
- type: ndcg_at_10
value: 27.686
- type: ndcg_at_100
value: 31.363999999999997
- type: ndcg_at_1000
value: 34.605000000000004
- type: ndcg_at_20
value: 28.93
- type: ndcg_at_3
value: 24.576999999999998
- type: ndcg_at_5
value: 26.144000000000002
- type: precision_at_1
value: 21.887999999999998
- type: precision_at_10
value: 5.0360000000000005
- type: precision_at_100
value: 0.828
- type: precision_at_1000
value: 0.135
- type: precision_at_20
value: 2.9690000000000003
- type: precision_at_3
value: 11.445
- type: precision_at_5
value: 8.269
- type: recall_at_1
value: 17.864
- type: recall_at_10
value: 34.977999999999994
- type: recall_at_100
value: 51.366
- type: recall_at_1000
value: 74.505
- type: recall_at_20
value: 39.587
- type: recall_at_3
value: 25.856
- type: recall_at_5
value: 30.215999999999998
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackEnglishRetrieval (default)
revision: ad9991cb51e31e31e430383c75ffb2885547b5f0
split: test
type: mteb/cqadupstack-english
metrics:
- type: main_score
value: 17.534
- type: map_at_1
value: 11.354000000000001
- type: map_at_10
value: 14.847
- type: map_at_100
value: 15.49
- type: map_at_1000
value: 15.588
- type: map_at_20
value: 15.17
- type: map_at_3
value: 13.501
- type: map_at_5
value: 14.221
- type: mrr_at_1
value: 14.26751592356688
- type: mrr_at_10
value: 18.05727428975836
- type: mrr_at_100
value: 18.690847238016758
- type: mrr_at_1000
value: 18.764726106731445
- type: mrr_at_20
value: 18.395670843598797
- type: mrr_at_3
value: 16.64543524416137
- type: mrr_at_5
value: 17.333333333333336
- type: nauc_map_at_1000_diff1
value: 43.301676769305494
- type: nauc_map_at_1000_max
value: 16.06805541449501
- type: nauc_map_at_1000_std
value: 12.507510564248166
- type: nauc_map_at_100_diff1
value: 43.34383366787733
- type: nauc_map_at_100_max
value: 16.049871088358675
- type: nauc_map_at_100_std
value: 12.45712935804974
- type: nauc_map_at_10_diff1
value: 43.688675805930785
- type: nauc_map_at_10_max
value: 16.41613903348705
- type: nauc_map_at_10_std
value: 12.219643122219239
- type: nauc_map_at_1_diff1
value: 50.609096395200005
- type: nauc_map_at_1_max
value: 18.78413464500168
- type: nauc_map_at_1_std
value: 10.90744028944332
- type: nauc_map_at_20_diff1
value: 43.49084704145287
- type: nauc_map_at_20_max
value: 16.182371186268703
- type: nauc_map_at_20_std
value: 12.299197289134225
- type: nauc_map_at_3_diff1
value: 45.751823982563266
- type: nauc_map_at_3_max
value: 17.192711563068457
- type: nauc_map_at_3_std
value: 11.16466159721384
- type: nauc_map_at_5_diff1
value: 44.53444696379338
- type: nauc_map_at_5_max
value: 16.559164547974103
- type: nauc_map_at_5_std
value: 11.928445405766698
- type: nauc_mrr_at_1000_diff1
value: 42.29550571785051
- type: nauc_mrr_at_1000_max
value: 15.642122643175679
- type: nauc_mrr_at_1000_std
value: 12.21491820640565
- type: nauc_mrr_at_100_diff1
value: 42.301744065140404
- type: nauc_mrr_at_100_max
value: 15.61733477074953
- type: nauc_mrr_at_100_std
value: 12.181221737579532
- type: nauc_mrr_at_10_diff1
value: 42.670586100296646
- type: nauc_mrr_at_10_max
value: 15.926109333510835
- type: nauc_mrr_at_10_std
value: 12.192068681943583
- type: nauc_mrr_at_1_diff1
value: 51.89198697276755
- type: nauc_mrr_at_1_max
value: 19.325504911863643
- type: nauc_mrr_at_1_std
value: 12.282190963023766
- type: nauc_mrr_at_20_diff1
value: 42.39065015069134
- type: nauc_mrr_at_20_max
value: 15.693533741719229
- type: nauc_mrr_at_20_std
value: 12.145452140370937
- type: nauc_mrr_at_3_diff1
value: 44.715851634047944
- type: nauc_mrr_at_3_max
value: 16.790849616314052
- type: nauc_mrr_at_3_std
value: 12.056098541376208
- type: nauc_mrr_at_5_diff1
value: 43.87033674228477
- type: nauc_mrr_at_5_max
value: 16.270118452872623
- type: nauc_mrr_at_5_std
value: 12.268005300025886
- type: nauc_ndcg_at_1000_diff1
value: 38.01640412131576
- type: nauc_ndcg_at_1000_max
value: 14.409491835566401
- type: nauc_ndcg_at_1000_std
value: 14.292607075384597
- type: nauc_ndcg_at_100_diff1
value: 38.57310899261012
- type: nauc_ndcg_at_100_max
value: 13.847832990597306
- type: nauc_ndcg_at_100_std
value: 13.318671226615844
- type: nauc_ndcg_at_10_diff1
value: 40.02384031953078
- type: nauc_ndcg_at_10_max
value: 15.18313865997875
- type: nauc_ndcg_at_10_std
value: 12.662598128357672
- type: nauc_ndcg_at_1_diff1
value: 51.89198697276755
- type: nauc_ndcg_at_1_max
value: 19.325504911863643
- type: nauc_ndcg_at_1_std
value: 12.282190963023766
- type: nauc_ndcg_at_20_diff1
value: 39.357302335202725
- type: nauc_ndcg_at_20_max
value: 14.497857343754966
- type: nauc_ndcg_at_20_std
value: 12.630113736826498
- type: nauc_ndcg_at_3_diff1
value: 43.58418967840297
- type: nauc_ndcg_at_3_max
value: 16.597491536723943
- type: nauc_ndcg_at_3_std
value: 11.650784883274328
- type: nauc_ndcg_at_5_diff1
value: 42.02130435072668
- type: nauc_ndcg_at_5_max
value: 15.627518090215247
- type: nauc_ndcg_at_5_std
value: 12.533489817270919
- type: nauc_precision_at_1000_diff1
value: 3.679521880714478
- type: nauc_precision_at_1000_max
value: 0.7919025640437954
- type: nauc_precision_at_1000_std
value: 11.047727940811521
- type: nauc_precision_at_100_diff1
value: 19.4078130462856
- type: nauc_precision_at_100_max
value: 4.3715506402771425
- type: nauc_precision_at_100_std
value: 16.956899011609643
- type: nauc_precision_at_10_diff1
value: 28.437045098011527
- type: nauc_precision_at_10_max
value: 11.734386703789056
- type: nauc_precision_at_10_std
value: 15.714063626213687
- type: nauc_precision_at_1_diff1
value: 51.89198697276755
- type: nauc_precision_at_1_max
value: 19.325504911863643
- type: nauc_precision_at_1_std
value: 12.282190963023766
- type: nauc_precision_at_20_diff1
value: 26.61622384998239
- type: nauc_precision_at_20_max
value: 9.031660188586937
- type: nauc_precision_at_20_std
value: 16.20337620782593
- type: nauc_precision_at_3_diff1
value: 38.065037328678045
- type: nauc_precision_at_3_max
value: 15.242914979757064
- type: nauc_precision_at_3_std
value: 13.448074137354654
- type: nauc_precision_at_5_diff1
value: 34.74896073477683
- type: nauc_precision_at_5_max
value: 13.347547367557508
- type: nauc_precision_at_5_std
value: 15.211527933339694
- type: nauc_recall_at_1000_diff1
value: 22.478800979463685
- type: nauc_recall_at_1000_max
value: 11.13145140021939
- type: nauc_recall_at_1000_std
value: 20.050008624461874
- type: nauc_recall_at_100_diff1
value: 25.988786568304555
- type: nauc_recall_at_100_max
value: 8.089785168176974
- type: nauc_recall_at_100_std
value: 14.262619130209112
- type: nauc_recall_at_10_diff1
value: 30.866722162291687
- type: nauc_recall_at_10_max
value: 12.14019760016012
- type: nauc_recall_at_10_std
value: 12.8097154636935
- type: nauc_recall_at_1_diff1
value: 50.609096395200005
- type: nauc_recall_at_1_max
value: 18.78413464500168
- type: nauc_recall_at_1_std
value: 10.90744028944332
- type: nauc_recall_at_20_diff1
value: 28.832935090203225
- type: nauc_recall_at_20_max
value: 10.309594281852648
- type: nauc_recall_at_20_std
value: 12.251157275647977
- type: nauc_recall_at_3_diff1
value: 40.105712098235315
- type: nauc_recall_at_3_max
value: 15.165723469178264
- type: nauc_recall_at_3_std
value: 10.99744165240917
- type: nauc_recall_at_5_diff1
value: 36.09241435581379
- type: nauc_recall_at_5_max
value: 13.032542349570054
- type: nauc_recall_at_5_std
value: 12.802627519053681
- type: ndcg_at_1
value: 14.268
- type: ndcg_at_10
value: 17.534
- type: ndcg_at_100
value: 20.78
- type: ndcg_at_1000
value: 23.526
- type: ndcg_at_20
value: 18.567
- type: ndcg_at_3
value: 15.218000000000002
- type: ndcg_at_5
value: 16.164
- type: precision_at_1
value: 14.268
- type: precision_at_10
value: 3.312
- type: precision_at_100
value: 0.603
- type: precision_at_1000
value: 0.105
- type: precision_at_20
value: 1.9869999999999999
- type: precision_at_3
value: 7.219
- type: precision_at_5
value: 5.1209999999999996
- type: recall_at_1
value: 11.354000000000001
- type: recall_at_10
value: 22.511
- type: recall_at_100
value: 37.24
- type: recall_at_1000
value: 56.718
- type: recall_at_20
value: 26.362999999999996
- type: recall_at_3
value: 15.53
- type: recall_at_5
value: 18.322
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackGamingRetrieval (default)
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
split: test
type: mteb/cqadupstack-gaming
metrics:
- type: main_score
value: 29.03
- type: map_at_1
value: 19.307
- type: map_at_10
value: 25.453
- type: map_at_100
value: 26.33
- type: map_at_1000
value: 26.419999999999998
- type: map_at_20
value: 25.896
- type: map_at_3
value: 23.572000000000003
- type: map_at_5
value: 24.694
- type: mrr_at_1
value: 22.00626959247649
- type: mrr_at_10
value: 27.87858884410605
- type: mrr_at_100
value: 28.652814969242712
- type: mrr_at_1000
value: 28.725946491824235
- type: mrr_at_20
value: 28.276271334002978
- type: mrr_at_3
value: 25.997910135841156
- type: mrr_at_5
value: 27.11703239289442
- type: nauc_map_at_1000_diff1
value: 43.50604073464055
- type: nauc_map_at_1000_max
value: 30.480004310005544
- type: nauc_map_at_1000_std
value: 0.18281635239684302
- type: nauc_map_at_100_diff1
value: 43.51057034900177
- type: nauc_map_at_100_max
value: 30.463453039114537
- type: nauc_map_at_100_std
value: 0.1392213813651391
- type: nauc_map_at_10_diff1
value: 43.680704548271024
- type: nauc_map_at_10_max
value: 30.639431323648626
- type: nauc_map_at_10_std
value: -0.17722097946115797
- type: nauc_map_at_1_diff1
value: 49.51121570705665
- type: nauc_map_at_1_max
value: 31.820851746100594
- type: nauc_map_at_1_std
value: -2.635315036488275
- type: nauc_map_at_20_diff1
value: 43.519636427140746
- type: nauc_map_at_20_max
value: 30.479309603785193
- type: nauc_map_at_20_std
value: -0.04034004401117608
- type: nauc_map_at_3_diff1
value: 44.660054248758726
- type: nauc_map_at_3_max
value: 30.35371167828995
- type: nauc_map_at_3_std
value: -1.4381463631334364
- type: nauc_map_at_5_diff1
value: 44.14458335553869
- type: nauc_map_at_5_max
value: 30.49464687257249
- type: nauc_map_at_5_std
value: -0.7069576298198817
- type: nauc_mrr_at_1000_diff1
value: 43.49091070845857
- type: nauc_mrr_at_1000_max
value: 30.904217260073207
- type: nauc_mrr_at_1000_std
value: 0.6030969099528762
- type: nauc_mrr_at_100_diff1
value: 43.48206732167152
- type: nauc_mrr_at_100_max
value: 30.885805566023013
- type: nauc_mrr_at_100_std
value: 0.5769328589498474
- type: nauc_mrr_at_10_diff1
value: 43.55457392824764
- type: nauc_mrr_at_10_max
value: 31.139789286663294
- type: nauc_mrr_at_10_std
value: 0.39137312166360116
- type: nauc_mrr_at_1_diff1
value: 49.7476817055079
- type: nauc_mrr_at_1_max
value: 33.35487810786589
- type: nauc_mrr_at_1_std
value: -2.335419312527886
- type: nauc_mrr_at_20_diff1
value: 43.48827825669483
- type: nauc_mrr_at_20_max
value: 30.983317516254566
- type: nauc_mrr_at_20_std
value: 0.4846694988872726
- type: nauc_mrr_at_3_diff1
value: 44.66661877146986
- type: nauc_mrr_at_3_max
value: 31.31121111690094
- type: nauc_mrr_at_3_std
value: -0.5970753554262374
- type: nauc_mrr_at_5_diff1
value: 44.05287141220467
- type: nauc_mrr_at_5_max
value: 31.185044083863524
- type: nauc_mrr_at_5_std
value: 0.03276041839131263
- type: nauc_ndcg_at_1000_diff1
value: 40.64648189672279
- type: nauc_ndcg_at_1000_max
value: 29.851206560241867
- type: nauc_ndcg_at_1000_std
value: 3.7885804314712423
- type: nauc_ndcg_at_100_diff1
value: 40.54660606744312
- type: nauc_ndcg_at_100_max
value: 29.52262097274987
- type: nauc_ndcg_at_100_std
value: 3.1313695052884087
- type: nauc_ndcg_at_10_diff1
value: 41.189151331147364
- type: nauc_ndcg_at_10_max
value: 30.257730735981376
- type: nauc_ndcg_at_10_std
value: 1.483283884208919
- type: nauc_ndcg_at_1_diff1
value: 49.7476817055079
- type: nauc_ndcg_at_1_max
value: 33.35487810786589
- type: nauc_ndcg_at_1_std
value: -2.335419312527886
- type: nauc_ndcg_at_20_diff1
value: 40.69940555374264
- type: nauc_ndcg_at_20_max
value: 29.67596434757782
- type: nauc_ndcg_at_20_std
value: 1.8670302698321029
- type: nauc_ndcg_at_3_diff1
value: 43.313981749068034
- type: nauc_ndcg_at_3_max
value: 29.92612987963682
- type: nauc_ndcg_at_3_std
value: -0.7629159307364975
- type: nauc_ndcg_at_5_diff1
value: 42.25367609444526
- type: nauc_ndcg_at_5_max
value: 30.011822025139217
- type: nauc_ndcg_at_5_std
value: 0.4228958959339596
- type: nauc_precision_at_1000_diff1
value: 6.294045364733051
- type: nauc_precision_at_1000_max
value: 13.003287301353916
- type: nauc_precision_at_1000_std
value: 19.672009407091075
- type: nauc_precision_at_100_diff1
value: 18.900847000430282
- type: nauc_precision_at_100_max
value: 19.89805341000471
- type: nauc_precision_at_100_std
value: 14.097381220216437
- type: nauc_precision_at_10_diff1
value: 32.019287482758315
- type: nauc_precision_at_10_max
value: 28.868719930088588
- type: nauc_precision_at_10_std
value: 7.067713684120723
- type: nauc_precision_at_1_diff1
value: 49.7476817055079
- type: nauc_precision_at_1_max
value: 33.35487810786589
- type: nauc_precision_at_1_std
value: -2.335419312527886
- type: nauc_precision_at_20_diff1
value: 27.442952211039866
- type: nauc_precision_at_20_max
value: 25.51570310142488
- type: nauc_precision_at_20_std
value: 8.001107746535538
- type: nauc_precision_at_3_diff1
value: 38.33881569586195
- type: nauc_precision_at_3_max
value: 28.995385801766826
- type: nauc_precision_at_3_std
value: 0.46426597601937036
- type: nauc_precision_at_5_diff1
value: 35.93052673151141
- type: nauc_precision_at_5_max
value: 28.77086703745561
- type: nauc_precision_at_5_std
value: 3.020792681159482
- type: nauc_recall_at_1000_diff1
value: 27.413733064523722
- type: nauc_recall_at_1000_max
value: 25.640071347285847
- type: nauc_recall_at_1000_std
value: 23.024726525628747
- type: nauc_recall_at_100_diff1
value: 30.238748775488382
- type: nauc_recall_at_100_max
value: 24.83445535706549
- type: nauc_recall_at_100_std
value: 13.213229148027994
- type: nauc_recall_at_10_diff1
value: 33.660824128432765
- type: nauc_recall_at_10_max
value: 28.239711759937826
- type: nauc_recall_at_10_std
value: 5.259078451819804
- type: nauc_recall_at_1_diff1
value: 49.51121570705665
- type: nauc_recall_at_1_max
value: 31.820851746100594
- type: nauc_recall_at_1_std
value: -2.635315036488275
- type: nauc_recall_at_20_diff1
value: 31.77661434800746
- type: nauc_recall_at_20_max
value: 25.949306594350592
- type: nauc_recall_at_20_std
value: 6.611875576453824
- type: nauc_recall_at_3_diff1
value: 39.16095910728281
- type: nauc_recall_at_3_max
value: 27.64955581506583
- type: nauc_recall_at_3_std
value: 0.10121363216139175
- type: nauc_recall_at_5_diff1
value: 36.32968291714543
- type: nauc_recall_at_5_max
value: 27.325678767283694
- type: nauc_recall_at_5_std
value: 2.653663972529844
- type: ndcg_at_1
value: 22.006
- type: ndcg_at_10
value: 29.03
- type: ndcg_at_100
value: 33.318999999999996
- type: ndcg_at_1000
value: 35.89
- type: ndcg_at_20
value: 30.503999999999998
- type: ndcg_at_3
value: 25.348
- type: ndcg_at_5
value: 27.267000000000003
- type: precision_at_1
value: 22.006
- type: precision_at_10
value: 4.627
- type: precision_at_100
value: 0.744
- type: precision_at_1000
value: 0.10300000000000001
- type: precision_at_20
value: 2.702
- type: precision_at_3
value: 11.033999999999999
- type: precision_at_5
value: 7.861999999999999
- type: recall_at_1
value: 19.307
- type: recall_at_10
value: 37.624
- type: recall_at_100
value: 56.997
- type: recall_at_1000
value: 76.62299999999999
- type: recall_at_20
value: 43.086
- type: recall_at_3
value: 27.724
- type: recall_at_5
value: 32.421
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackGisRetrieval (default)
revision: 5003b3064772da1887988e05400cf3806fe491f2
split: test
type: mteb/cqadupstack-gis
metrics:
- type: main_score
value: 14.097000000000001
- type: map_at_1
value: 9.109
- type: map_at_10
value: 12.062000000000001
- type: map_at_100
value: 12.603
- type: map_at_1000
value: 12.690000000000001
- type: map_at_20
value: 12.335
- type: map_at_3
value: 10.882
- type: map_at_5
value: 11.445
- type: mrr_at_1
value: 9.6045197740113
- type: mrr_at_10
value: 13.001390009864586
- type: mrr_at_100
value: 13.541388076434767
- type: mrr_at_1000
value: 13.622995527273426
- type: mrr_at_20
value: 13.261213704134942
- type: mrr_at_3
value: 11.75141242937853
- type: mrr_at_5
value: 12.3728813559322
- type: nauc_map_at_1000_diff1
value: 41.25399941751793
- type: nauc_map_at_1000_max
value: 17.60637208770784
- type: nauc_map_at_1000_std
value: 3.8997877056955876
- type: nauc_map_at_100_diff1
value: 41.3047772590663
- type: nauc_map_at_100_max
value: 17.593792209003684
- type: nauc_map_at_100_std
value: 3.8624300256381883
- type: nauc_map_at_10_diff1
value: 41.918994248720736
- type: nauc_map_at_10_max
value: 17.523107069845093
- type: nauc_map_at_10_std
value: 3.3289332906481333
- type: nauc_map_at_1_diff1
value: 50.853111369434835
- type: nauc_map_at_1_max
value: 20.441039981572366
- type: nauc_map_at_1_std
value: 2.9730312951046747
- type: nauc_map_at_20_diff1
value: 41.676967823092156
- type: nauc_map_at_20_max
value: 17.611142954564
- type: nauc_map_at_20_std
value: 3.7507161629892516
- type: nauc_map_at_3_diff1
value: 45.15865999101332
- type: nauc_map_at_3_max
value: 17.51828209554345
- type: nauc_map_at_3_std
value: 3.125254352308741
- type: nauc_map_at_5_diff1
value: 43.518873099840164
- type: nauc_map_at_5_max
value: 18.096843812930256
- type: nauc_map_at_5_std
value: 3.501264664850646
- type: nauc_mrr_at_1000_diff1
value: 39.65049616843269
- type: nauc_mrr_at_1000_max
value: 18.992312109540187
- type: nauc_mrr_at_1000_std
value: 3.8630526743174602
- type: nauc_mrr_at_100_diff1
value: 39.67790321701619
- type: nauc_mrr_at_100_max
value: 18.99280796073833
- type: nauc_mrr_at_100_std
value: 3.831281556686595
- type: nauc_mrr_at_10_diff1
value: 40.40664164207995
- type: nauc_mrr_at_10_max
value: 18.9789911833429
- type: nauc_mrr_at_10_std
value: 3.389250639709206
- type: nauc_mrr_at_1_diff1
value: 48.90268334274423
- type: nauc_mrr_at_1_max
value: 22.148416208142038
- type: nauc_mrr_at_1_std
value: 3.482278486678414
- type: nauc_mrr_at_20_diff1
value: 40.12944011033672
- type: nauc_mrr_at_20_max
value: 19.01229852858854
- type: nauc_mrr_at_20_std
value: 3.721020072685762
- type: nauc_mrr_at_3_diff1
value: 43.53442474531623
- type: nauc_mrr_at_3_max
value: 18.98665230786941
- type: nauc_mrr_at_3_std
value: 3.141188860380207
- type: nauc_mrr_at_5_diff1
value: 41.792381222269306
- type: nauc_mrr_at_5_max
value: 19.564109785495027
- type: nauc_mrr_at_5_std
value: 3.447599289829289
- type: nauc_ndcg_at_1000_diff1
value: 33.75036088168543
- type: nauc_ndcg_at_1000_max
value: 17.552395174719724
- type: nauc_ndcg_at_1000_std
value: 6.019653809238646
- type: nauc_ndcg_at_100_diff1
value: 34.46011549407109
- type: nauc_ndcg_at_100_max
value: 17.261093331357706
- type: nauc_ndcg_at_100_std
value: 5.4268706575162104
- type: nauc_ndcg_at_10_diff1
value: 37.83747527779143
- type: nauc_ndcg_at_10_max
value: 17.044974102007092
- type: nauc_ndcg_at_10_std
value: 3.5111959818349603
- type: nauc_ndcg_at_1_diff1
value: 48.90268334274423
- type: nauc_ndcg_at_1_max
value: 22.148416208142038
- type: nauc_ndcg_at_1_std
value: 3.482278486678414
- type: nauc_ndcg_at_20_diff1
value: 37.138695182061525
- type: nauc_ndcg_at_20_max
value: 17.22387592023126
- type: nauc_ndcg_at_20_std
value: 4.770921048488158
- type: nauc_ndcg_at_3_diff1
value: 43.268967346255074
- type: nauc_ndcg_at_3_max
value: 17.20602008989898
- type: nauc_ndcg_at_3_std
value: 3.19589477459749
- type: nauc_ndcg_at_5_diff1
value: 40.7884752761726
- type: nauc_ndcg_at_5_max
value: 18.121892702668045
- type: nauc_ndcg_at_5_std
value: 3.8369089974368573
- type: nauc_precision_at_1000_diff1
value: 7.089909563758634
- type: nauc_precision_at_1000_max
value: 19.071511820051107
- type: nauc_precision_at_1000_std
value: 8.71710715708378
- type: nauc_precision_at_100_diff1
value: 17.577598014207858
- type: nauc_precision_at_100_max
value: 18.757305391811315
- type: nauc_precision_at_100_std
value: 8.571496733416154
- type: nauc_precision_at_10_diff1
value: 28.943153297767832
- type: nauc_precision_at_10_max
value: 16.38624587520458
- type: nauc_precision_at_10_std
value: 3.437574061625469
- type: nauc_precision_at_1_diff1
value: 48.90268334274423
- type: nauc_precision_at_1_max
value: 22.148416208142038
- type: nauc_precision_at_1_std
value: 3.482278486678414
- type: nauc_precision_at_20_diff1
value: 26.474908278743044
- type: nauc_precision_at_20_max
value: 16.47527151110289
- type: nauc_precision_at_20_std
value: 7.5305698853598
- type: nauc_precision_at_3_diff1
value: 39.54288018891221
- type: nauc_precision_at_3_max
value: 17.284449255178835
- type: nauc_precision_at_3_std
value: 2.8714843759024866
- type: nauc_precision_at_5_diff1
value: 34.480901699228006
- type: nauc_precision_at_5_max
value: 19.44159427138771
- type: nauc_precision_at_5_std
value: 3.9140233563987525
- type: nauc_recall_at_1000_diff1
value: 14.656193188687894
- type: nauc_recall_at_1000_max
value: 15.810571367218888
- type: nauc_recall_at_1000_std
value: 12.334573972835202
- type: nauc_recall_at_100_diff1
value: 18.594617672285707
- type: nauc_recall_at_100_max
value: 15.15863525459292
- type: nauc_recall_at_100_std
value: 9.115505114921058
- type: nauc_recall_at_10_diff1
value: 29.13269929764077
- type: nauc_recall_at_10_max
value: 15.059218016523301
- type: nauc_recall_at_10_std
value: 3.7696923586295137
- type: nauc_recall_at_1_diff1
value: 50.853111369434835
- type: nauc_recall_at_1_max
value: 20.441039981572366
- type: nauc_recall_at_1_std
value: 2.9730312951046747
- type: nauc_recall_at_20_diff1
value: 27.544653538434776
- type: nauc_recall_at_20_max
value: 15.420518066694445
- type: nauc_recall_at_20_std
value: 7.101778539671523
- type: nauc_recall_at_3_diff1
value: 40.00397565193035
- type: nauc_recall_at_3_max
value: 14.717415584208013
- type: nauc_recall_at_3_std
value: 3.658957442260116
- type: nauc_recall_at_5_diff1
value: 35.35853159550963
- type: nauc_recall_at_5_max
value: 17.049909921279315
- type: nauc_recall_at_5_std
value: 4.839540342554651
- type: ndcg_at_1
value: 9.605
- type: ndcg_at_10
value: 14.097000000000001
- type: ndcg_at_100
value: 17.098
- type: ndcg_at_1000
value: 19.948
- type: ndcg_at_20
value: 15.043999999999999
- type: ndcg_at_3
value: 11.683
- type: ndcg_at_5
value: 12.656999999999998
- type: precision_at_1
value: 9.605
- type: precision_at_10
value: 2.215
- type: precision_at_100
value: 0.395
- type: precision_at_1000
value: 0.068
- type: precision_at_20
value: 1.322
- type: precision_at_3
value: 4.859
- type: precision_at_5
value: 3.435
- type: recall_at_1
value: 9.109
- type: recall_at_10
value: 19.618
- type: recall_at_100
value: 34.056
- type: recall_at_1000
value: 56.75599999999999
- type: recall_at_20
value: 23.168
- type: recall_at_3
value: 12.982
- type: recall_at_5
value: 15.315000000000001
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackMathematicaRetrieval (default)
revision: 90fceea13679c63fe563ded68f3b6f06e50061de
split: test
type: mteb/cqadupstack-mathematica
metrics:
- type: main_score
value: 8.895
- type: map_at_1
value: 4.444
- type: map_at_10
value: 6.789000000000001
- type: map_at_100
value: 7.362
- type: map_at_1000
value: 7.455
- type: map_at_20
value: 7.112
- type: map_at_3
value: 5.819
- type: map_at_5
value: 6.237
- type: mrr_at_1
value: 5.970149253731343
- type: mrr_at_10
value: 8.807500197425577
- type: mrr_at_100
value: 9.458867441952432
- type: mrr_at_1000
value: 9.550029897135536
- type: mrr_at_20
value: 9.191142267117858
- type: mrr_at_3
value: 7.669983416252076
- type: mrr_at_5
value: 8.229684908789391
- type: nauc_map_at_1000_diff1
value: 14.923575664521396
- type: nauc_map_at_1000_max
value: 14.637382629018258
- type: nauc_map_at_1000_std
value: 7.583317007693739
- type: nauc_map_at_100_diff1
value: 14.914938787317187
- type: nauc_map_at_100_max
value: 14.57831256590049
- type: nauc_map_at_100_std
value: 7.481458525605025
- type: nauc_map_at_10_diff1
value: 15.009158630868363
- type: nauc_map_at_10_max
value: 14.587168521042992
- type: nauc_map_at_10_std
value: 6.30675561821182
- type: nauc_map_at_1_diff1
value: 23.073067396533048
- type: nauc_map_at_1_max
value: 22.526518534617583
- type: nauc_map_at_1_std
value: 3.2886460233623356
- type: nauc_map_at_20_diff1
value: 14.55856812493529
- type: nauc_map_at_20_max
value: 14.445922336763791
- type: nauc_map_at_20_std
value: 7.0979435052536815
- type: nauc_map_at_3_diff1
value: 17.401011477759774
- type: nauc_map_at_3_max
value: 16.448773676590882
- type: nauc_map_at_3_std
value: 4.181405616554917
- type: nauc_map_at_5_diff1
value: 15.690380485853476
- type: nauc_map_at_5_max
value: 15.435047584962474
- type: nauc_map_at_5_std
value: 5.232971650136294
- type: nauc_mrr_at_1000_diff1
value: 15.064019296100401
- type: nauc_mrr_at_1000_max
value: 15.23275181655676
- type: nauc_mrr_at_1000_std
value: 6.62512228446261
- type: nauc_mrr_at_100_diff1
value: 15.04422899632206
- type: nauc_mrr_at_100_max
value: 15.180132969802102
- type: nauc_mrr_at_100_std
value: 6.569986365469756
- type: nauc_mrr_at_10_diff1
value: 15.513288408498664
- type: nauc_mrr_at_10_max
value: 15.639652887265692
- type: nauc_mrr_at_10_std
value: 6.08058172017529
- type: nauc_mrr_at_1_diff1
value: 23.174960802057807
- type: nauc_mrr_at_1_max
value: 23.10505027161953
- type: nauc_mrr_at_1_std
value: 5.000535690775217
- type: nauc_mrr_at_20_diff1
value: 14.944086344466943
- type: nauc_mrr_at_20_max
value: 15.058772912777219
- type: nauc_mrr_at_20_std
value: 6.406714993528487
- type: nauc_mrr_at_3_diff1
value: 16.945928540219413
- type: nauc_mrr_at_3_max
value: 16.999490982460667
- type: nauc_mrr_at_3_std
value: 4.2783371592240185
- type: nauc_mrr_at_5_diff1
value: 15.724845028203049
- type: nauc_mrr_at_5_max
value: 16.374268642724658
- type: nauc_mrr_at_5_std
value: 4.955417882432664
- type: nauc_ndcg_at_1000_diff1
value: 12.64441384439761
- type: nauc_ndcg_at_1000_max
value: 12.544144311249642
- type: nauc_ndcg_at_1000_std
value: 12.203401112537147
- type: nauc_ndcg_at_100_diff1
value: 12.856101621820079
- type: nauc_ndcg_at_100_max
value: 12.15851341921588
- type: nauc_ndcg_at_100_std
value: 11.352600283831114
- type: nauc_ndcg_at_10_diff1
value: 12.453755697243285
- type: nauc_ndcg_at_10_max
value: 11.750014509834587
- type: nauc_ndcg_at_10_std
value: 8.203127809929466
- type: nauc_ndcg_at_1_diff1
value: 23.174960802057807
- type: nauc_ndcg_at_1_max
value: 23.10505027161953
- type: nauc_ndcg_at_1_std
value: 5.000535690775217
- type: nauc_ndcg_at_20_diff1
value: 11.324071030247564
- type: nauc_ndcg_at_20_max
value: 11.094964112045453
- type: nauc_ndcg_at_20_std
value: 9.840879835834757
- type: nauc_ndcg_at_3_diff1
value: 15.323525692434862
- type: nauc_ndcg_at_3_max
value: 14.559998492898632
- type: nauc_ndcg_at_3_std
value: 4.027895180138566
- type: nauc_ndcg_at_5_diff1
value: 13.165086940669635
- type: nauc_ndcg_at_5_max
value: 13.32440977723948
- type: nauc_ndcg_at_5_std
value: 5.813837007263122
- type: nauc_precision_at_1000_diff1
value: 0.8928955587806005
- type: nauc_precision_at_1000_max
value: 4.446218508931589
- type: nauc_precision_at_1000_std
value: 5.877977195844953
- type: nauc_precision_at_100_diff1
value: 8.33525852681901
- type: nauc_precision_at_100_max
value: 7.830647914480539
- type: nauc_precision_at_100_std
value: 14.216797498501176
- type: nauc_precision_at_10_diff1
value: 7.765203936267145
- type: nauc_precision_at_10_max
value: 7.141939768201643
- type: nauc_precision_at_10_std
value: 9.60008810493683
- type: nauc_precision_at_1_diff1
value: 23.174960802057807
- type: nauc_precision_at_1_max
value: 23.10505027161953
- type: nauc_precision_at_1_std
value: 5.000535690775217
- type: nauc_precision_at_20_diff1
value: 4.810680914106181
- type: nauc_precision_at_20_max
value: 4.6628595108449655
- type: nauc_precision_at_20_std
value: 12.601430694735827
- type: nauc_precision_at_3_diff1
value: 13.474943796383625
- type: nauc_precision_at_3_max
value: 11.709775106648399
- type: nauc_precision_at_3_std
value: 3.207743252795555
- type: nauc_precision_at_5_diff1
value: 9.95810736829039
- type: nauc_precision_at_5_max
value: 10.456953224514239
- type: nauc_precision_at_5_std
value: 5.623208634930042
- type: nauc_recall_at_1000_diff1
value: 9.834451295472817
- type: nauc_recall_at_1000_max
value: 9.848949382055148
- type: nauc_recall_at_1000_std
value: 20.975606313150834
- type: nauc_recall_at_100_diff1
value: 10.217335772749356
- type: nauc_recall_at_100_max
value: 9.152943313782552
- type: nauc_recall_at_100_std
value: 17.31335628449071
- type: nauc_recall_at_10_diff1
value: 7.002474541545711
- type: nauc_recall_at_10_max
value: 5.600453872340962
- type: nauc_recall_at_10_std
value: 11.697537334063615
- type: nauc_recall_at_1_diff1
value: 23.073067396533048
- type: nauc_recall_at_1_max
value: 22.526518534617583
- type: nauc_recall_at_1_std
value: 3.2886460233623356
- type: nauc_recall_at_20_diff1
value: 5.418370604760854
- type: nauc_recall_at_20_max
value: 5.4952006102593085
- type: nauc_recall_at_20_std
value: 14.413914588580981
- type: nauc_recall_at_3_diff1
value: 12.321251599365478
- type: nauc_recall_at_3_max
value: 10.062822926598114
- type: nauc_recall_at_3_std
value: 5.2675756103944735
- type: nauc_recall_at_5_diff1
value: 7.540388296514483
- type: nauc_recall_at_5_max
value: 7.803110889019699
- type: nauc_recall_at_5_std
value: 8.317325637513246
- type: ndcg_at_1
value: 5.970000000000001
- type: ndcg_at_10
value: 8.895
- type: ndcg_at_100
value: 11.964
- type: ndcg_at_1000
value: 14.860000000000001
- type: ndcg_at_20
value: 10.104000000000001
- type: ndcg_at_3
value: 6.859999999999999
- type: ndcg_at_5
value: 7.573
- type: precision_at_1
value: 5.970000000000001
- type: precision_at_10
value: 1.779
- type: precision_at_100
value: 0.384
- type: precision_at_1000
value: 0.073
- type: precision_at_20
value: 1.2189999999999999
- type: precision_at_3
value: 3.4000000000000004
- type: precision_at_5
value: 2.537
- type: recall_at_1
value: 4.444
- type: recall_at_10
value: 13.751
- type: recall_at_100
value: 27.537
- type: recall_at_1000
value: 49.079
- type: recall_at_20
value: 18.182000000000002
- type: recall_at_3
value: 7.731000000000001
- type: recall_at_5
value: 9.636
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackPhysicsRetrieval (default)
revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4
split: test
type: mteb/cqadupstack-physics
metrics:
- type: main_score
value: 19.902
- type: map_at_1
value: 12.928999999999998
- type: map_at_10
value: 16.833000000000002
- type: map_at_100
value: 17.615
- type: map_at_1000
value: 17.732
- type: map_at_20
value: 17.207
- type: map_at_3
value: 15.463
- type: map_at_5
value: 16.128999999999998
- type: mrr_at_1
value: 15.976900866217516
- type: mrr_at_10
value: 20.444757627144526
- type: mrr_at_100
value: 21.18213748325402
- type: mrr_at_1000
value: 21.25972081056743
- type: mrr_at_20
value: 20.799603260475223
- type: mrr_at_3
value: 18.928456849534818
- type: mrr_at_5
value: 19.72248957330767
- type: nauc_map_at_1000_diff1
value: 41.27196577011274
- type: nauc_map_at_1000_max
value: 30.04254002251132
- type: nauc_map_at_1000_std
value: 6.570333369920046
- type: nauc_map_at_100_diff1
value: 41.27551384135304
- type: nauc_map_at_100_max
value: 29.99043897557097
- type: nauc_map_at_100_std
value: 6.472408363055328
- type: nauc_map_at_10_diff1
value: 41.85444301121017
- type: nauc_map_at_10_max
value: 29.81212191843452
- type: nauc_map_at_10_std
value: 5.93398567449617
- type: nauc_map_at_1_diff1
value: 46.839384517121886
- type: nauc_map_at_1_max
value: 33.10314951759653
- type: nauc_map_at_1_std
value: 3.473962823858065
- type: nauc_map_at_20_diff1
value: 41.4328465682072
- type: nauc_map_at_20_max
value: 29.97742898678745
- type: nauc_map_at_20_std
value: 6.104796006386177
- type: nauc_map_at_3_diff1
value: 43.02691416463743
- type: nauc_map_at_3_max
value: 30.42366456898119
- type: nauc_map_at_3_std
value: 5.155164523235761
- type: nauc_map_at_5_diff1
value: 42.50855309235288
- type: nauc_map_at_5_max
value: 30.268005050849005
- type: nauc_map_at_5_std
value: 5.5087675809592955
- type: nauc_mrr_at_1000_diff1
value: 39.918304151052496
- type: nauc_mrr_at_1000_max
value: 32.3633242335842
- type: nauc_mrr_at_1000_std
value: 9.821534513339788
- type: nauc_mrr_at_100_diff1
value: 39.88894200397407
- type: nauc_mrr_at_100_max
value: 32.35005140436353
- type: nauc_mrr_at_100_std
value: 9.798405855994671
- type: nauc_mrr_at_10_diff1
value: 40.398911825307096
- type: nauc_mrr_at_10_max
value: 32.431125056382164
- type: nauc_mrr_at_10_std
value: 9.607804963814376
- type: nauc_mrr_at_1_diff1
value: 44.710224260402306
- type: nauc_mrr_at_1_max
value: 34.810999361965784
- type: nauc_mrr_at_1_std
value: 6.666781318158904
- type: nauc_mrr_at_20_diff1
value: 40.00961756059491
- type: nauc_mrr_at_20_max
value: 32.37658164628154
- type: nauc_mrr_at_20_std
value: 9.668733699272558
- type: nauc_mrr_at_3_diff1
value: 41.57115214419929
- type: nauc_mrr_at_3_max
value: 32.68793918495075
- type: nauc_mrr_at_3_std
value: 9.040233893300375
- type: nauc_mrr_at_5_diff1
value: 41.06814071330848
- type: nauc_mrr_at_5_max
value: 32.8245640568574
- type: nauc_mrr_at_5_std
value: 9.58857119627648
- type: nauc_ndcg_at_1000_diff1
value: 36.80739838454769
- type: nauc_ndcg_at_1000_max
value: 29.789668331458618
- type: nauc_ndcg_at_1000_std
value: 11.39764916900706
- type: nauc_ndcg_at_100_diff1
value: 37.11213770959871
- type: nauc_ndcg_at_100_max
value: 29.081591038980903
- type: nauc_ndcg_at_100_std
value: 10.108782506088897
- type: nauc_ndcg_at_10_diff1
value: 39.5849935712723
- type: nauc_ndcg_at_10_max
value: 28.96898719826389
- type: nauc_ndcg_at_10_std
value: 7.961681263212508
- type: nauc_ndcg_at_1_diff1
value: 44.710224260402306
- type: nauc_ndcg_at_1_max
value: 34.810999361965784
- type: nauc_ndcg_at_1_std
value: 6.666781318158904
- type: nauc_ndcg_at_20_diff1
value: 38.12032626231077
- type: nauc_ndcg_at_20_max
value: 29.18302919363044
- type: nauc_ndcg_at_20_std
value: 8.263802202822081
- type: nauc_ndcg_at_3_diff1
value: 41.69966283174317
- type: nauc_ndcg_at_3_max
value: 30.929246645213066
- type: nauc_ndcg_at_3_std
value: 7.216761468782046
- type: nauc_ndcg_at_5_diff1
value: 41.01584530945962
- type: nauc_ndcg_at_5_max
value: 30.289879950898214
- type: nauc_ndcg_at_5_std
value: 7.4367837578277936
- type: nauc_precision_at_1000_diff1
value: 5.296272992814253
- type: nauc_precision_at_1000_max
value: 19.76310705995752
- type: nauc_precision_at_1000_std
value: 24.704985621130156
- type: nauc_precision_at_100_diff1
value: 16.46333749868499
- type: nauc_precision_at_100_max
value: 26.043739871376527
- type: nauc_precision_at_100_std
value: 26.092651162394155
- type: nauc_precision_at_10_diff1
value: 30.365327315976653
- type: nauc_precision_at_10_max
value: 28.924585920344946
- type: nauc_precision_at_10_std
value: 17.70407674779879
- type: nauc_precision_at_1_diff1
value: 44.710224260402306
- type: nauc_precision_at_1_max
value: 34.810999361965784
- type: nauc_precision_at_1_std
value: 6.666781318158904
- type: nauc_precision_at_20_diff1
value: 24.315922316558428
- type: nauc_precision_at_20_max
value: 28.874260987195967
- type: nauc_precision_at_20_std
value: 19.72374746122734
- type: nauc_precision_at_3_diff1
value: 37.37798681409137
- type: nauc_precision_at_3_max
value: 32.308460896865824
- type: nauc_precision_at_3_std
value: 12.279945415003562
- type: nauc_precision_at_5_diff1
value: 35.30318091103882
- type: nauc_precision_at_5_max
value: 31.820548127213062
- type: nauc_precision_at_5_std
value: 14.503599559616163
- type: nauc_recall_at_1000_diff1
value: 19.795948815823216
- type: nauc_recall_at_1000_max
value: 24.278386660959896
- type: nauc_recall_at_1000_std
value: 22.837222421253944
- type: nauc_recall_at_100_diff1
value: 24.472612415292573
- type: nauc_recall_at_100_max
value: 21.91143710710276
- type: nauc_recall_at_100_std
value: 15.053133349737896
- type: nauc_recall_at_10_diff1
value: 33.4020176737161
- type: nauc_recall_at_10_max
value: 23.033614175897377
- type: nauc_recall_at_10_std
value: 8.767203112156356
- type: nauc_recall_at_1_diff1
value: 46.839384517121886
- type: nauc_recall_at_1_max
value: 33.10314951759653
- type: nauc_recall_at_1_std
value: 3.473962823858065
- type: nauc_recall_at_20_diff1
value: 28.830072771517113
- type: nauc_recall_at_20_max
value: 23.489066180696092
- type: nauc_recall_at_20_std
value: 9.12579757868168
- type: nauc_recall_at_3_diff1
value: 39.908834198934215
- type: nauc_recall_at_3_max
value: 27.068809545101175
- type: nauc_recall_at_3_std
value: 6.530892914334164
- type: nauc_recall_at_5_diff1
value: 37.48709101560424
- type: nauc_recall_at_5_max
value: 26.081573648351025
- type: nauc_recall_at_5_std
value: 7.183952029055236
- type: ndcg_at_1
value: 15.977
- type: ndcg_at_10
value: 19.902
- type: ndcg_at_100
value: 24.086
- type: ndcg_at_1000
value: 27.01
- type: ndcg_at_20
value: 21.175
- type: ndcg_at_3
value: 17.330000000000002
- type: ndcg_at_5
value: 18.342
- type: precision_at_1
value: 15.977
- type: precision_at_10
value: 3.542
- type: precision_at_100
value: 0.679
- type: precision_at_1000
value: 0.109
- type: precision_at_20
value: 2.161
- type: precision_at_3
value: 8.053
- type: precision_at_5
value: 5.679
- type: recall_at_1
value: 12.928999999999998
- type: recall_at_10
value: 25.916
- type: recall_at_100
value: 44.836
- type: recall_at_1000
value: 65.22200000000001
- type: recall_at_20
value: 30.493
- type: recall_at_3
value: 18.241
- type: recall_at_5
value: 21.078
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackProgrammersRetrieval (default)
revision: 6184bc1440d2dbc7612be22b50686b8826d22b32
split: test
type: mteb/cqadupstack-programmers
metrics:
- type: main_score
value: 15.862000000000002
- type: map_at_1
value: 9.831
- type: map_at_10
value: 13.256
- type: map_at_100
value: 14.008000000000001
- type: map_at_1000
value: 14.113000000000001
- type: map_at_20
value: 13.636999999999999
- type: map_at_3
value: 11.814
- type: map_at_5
value: 12.583
- type: mrr_at_1
value: 11.757990867579908
- type: mrr_at_10
value: 15.494808654055237
- type: mrr_at_100
value: 16.291820589502283
- type: mrr_at_1000
value: 16.374533932974945
- type: mrr_at_20
value: 15.933671804388336
- type: mrr_at_3
value: 13.83181126331811
- type: mrr_at_5
value: 14.6765601217656
- type: nauc_map_at_1000_diff1
value: 33.93453741920144
- type: nauc_map_at_1000_max
value: 15.653730492995432
- type: nauc_map_at_1000_std
value: 7.8758696471921175
- type: nauc_map_at_100_diff1
value: 33.93938109119093
- type: nauc_map_at_100_max
value: 15.600263725191917
- type: nauc_map_at_100_std
value: 7.765619322590685
- type: nauc_map_at_10_diff1
value: 34.54464331832195
- type: nauc_map_at_10_max
value: 15.612792960561228
- type: nauc_map_at_10_std
value: 6.7557841221613915
- type: nauc_map_at_1_diff1
value: 40.25943612185486
- type: nauc_map_at_1_max
value: 17.181254846998176
- type: nauc_map_at_1_std
value: 4.311873998223975
- type: nauc_map_at_20_diff1
value: 34.286604224077294
- type: nauc_map_at_20_max
value: 15.557596686810724
- type: nauc_map_at_20_std
value: 7.278138397108883
- type: nauc_map_at_3_diff1
value: 36.73973255367738
- type: nauc_map_at_3_max
value: 16.83994296407283
- type: nauc_map_at_3_std
value: 6.223159115827186
- type: nauc_map_at_5_diff1
value: 35.141424690409735
- type: nauc_map_at_5_max
value: 15.992920926050328
- type: nauc_map_at_5_std
value: 6.351250600055855
- type: nauc_mrr_at_1000_diff1
value: 34.73310032530598
- type: nauc_mrr_at_1000_max
value: 19.015226556944313
- type: nauc_mrr_at_1000_std
value: 9.222546150737514
- type: nauc_mrr_at_100_diff1
value: 34.726753216593245
- type: nauc_mrr_at_100_max
value: 18.99769748963775
- type: nauc_mrr_at_100_std
value: 9.174113672327863
- type: nauc_mrr_at_10_diff1
value: 35.44871459634613
- type: nauc_mrr_at_10_max
value: 19.123376102993888
- type: nauc_mrr_at_10_std
value: 8.400683156036651
- type: nauc_mrr_at_1_diff1
value: 41.66420742315266
- type: nauc_mrr_at_1_max
value: 20.29699577568541
- type: nauc_mrr_at_1_std
value: 6.552893551004773
- type: nauc_mrr_at_20_diff1
value: 34.97080168567599
- type: nauc_mrr_at_20_max
value: 18.93820346421597
- type: nauc_mrr_at_20_std
value: 8.88369463529979
- type: nauc_mrr_at_3_diff1
value: 37.82881961939195
- type: nauc_mrr_at_3_max
value: 20.23353217486363
- type: nauc_mrr_at_3_std
value: 8.335430576995872
- type: nauc_mrr_at_5_diff1
value: 36.39194951225287
- type: nauc_mrr_at_5_max
value: 19.51895403281475
- type: nauc_mrr_at_5_std
value: 8.109986680725223
- type: nauc_ndcg_at_1000_diff1
value: 29.082397825054134
- type: nauc_ndcg_at_1000_max
value: 16.79542535678252
- type: nauc_ndcg_at_1000_std
value: 13.862883511514385
- type: nauc_ndcg_at_100_diff1
value: 29.052598252998568
- type: nauc_ndcg_at_100_max
value: 15.498427568714371
- type: nauc_ndcg_at_100_std
value: 11.726792940214132
- type: nauc_ndcg_at_10_diff1
value: 32.1345507923688
- type: nauc_ndcg_at_10_max
value: 15.522253057572243
- type: nauc_ndcg_at_10_std
value: 8.033462171395978
- type: nauc_ndcg_at_1_diff1
value: 41.66420742315266
- type: nauc_ndcg_at_1_max
value: 20.29699577568541
- type: nauc_ndcg_at_1_std
value: 6.552893551004773
- type: nauc_ndcg_at_20_diff1
value: 30.9118537718024
- type: nauc_ndcg_at_20_max
value: 15.015691320922405
- type: nauc_ndcg_at_20_std
value: 9.48348066099931
- type: nauc_ndcg_at_3_diff1
value: 36.00136268031041
- type: nauc_ndcg_at_3_max
value: 18.106666639494865
- type: nauc_ndcg_at_3_std
value: 7.641902435989431
- type: nauc_ndcg_at_5_diff1
value: 33.39201547133596
- type: nauc_ndcg_at_5_max
value: 16.476689691452638
- type: nauc_ndcg_at_5_std
value: 7.369674781372547
- type: nauc_precision_at_1000_diff1
value: 6.471252357066656
- type: nauc_precision_at_1000_max
value: 19.69714506243997
- type: nauc_precision_at_1000_std
value: 19.55604767049242
- type: nauc_precision_at_100_diff1
value: 14.901264085785481
- type: nauc_precision_at_100_max
value: 18.109459081509822
- type: nauc_precision_at_100_std
value: 21.114563137000474
- type: nauc_precision_at_10_diff1
value: 27.5518231119986
- type: nauc_precision_at_10_max
value: 15.967381663307059
- type: nauc_precision_at_10_std
value: 11.45892974481074
- type: nauc_precision_at_1_diff1
value: 41.66420742315266
- type: nauc_precision_at_1_max
value: 20.29699577568541
- type: nauc_precision_at_1_std
value: 6.552893551004773
- type: nauc_precision_at_20_diff1
value: 24.871167172495863
- type: nauc_precision_at_20_max
value: 16.035625528276007
- type: nauc_precision_at_20_std
value: 16.40037479366967
- type: nauc_precision_at_3_diff1
value: 35.34609472177138
- type: nauc_precision_at_3_max
value: 20.28057060245756
- type: nauc_precision_at_3_std
value: 9.58695451354911
- type: nauc_precision_at_5_diff1
value: 31.12453786882641
- type: nauc_precision_at_5_max
value: 17.714809323391766
- type: nauc_precision_at_5_std
value: 9.540687572068887
- type: nauc_recall_at_1000_diff1
value: 13.176944792680187
- type: nauc_recall_at_1000_max
value: 17.215938373520867
- type: nauc_recall_at_1000_std
value: 31.763351387419913
- type: nauc_recall_at_100_diff1
value: 15.598307875167269
- type: nauc_recall_at_100_max
value: 11.571312022801102
- type: nauc_recall_at_100_std
value: 18.72066053860531
- type: nauc_recall_at_10_diff1
value: 25.20073017671981
- type: nauc_recall_at_10_max
value: 12.05920538584769
- type: nauc_recall_at_10_std
value: 9.127287803525167
- type: nauc_recall_at_1_diff1
value: 40.25943612185486
- type: nauc_recall_at_1_max
value: 17.181254846998176
- type: nauc_recall_at_1_std
value: 4.311873998223975
- type: nauc_recall_at_20_diff1
value: 21.87476573323018
- type: nauc_recall_at_20_max
value: 10.324185189089619
- type: nauc_recall_at_20_std
value: 12.342028690096459
- type: nauc_recall_at_3_diff1
value: 32.78814063821437
- type: nauc_recall_at_3_max
value: 16.638784171801436
- type: nauc_recall_at_3_std
value: 8.529115114779637
- type: nauc_recall_at_5_diff1
value: 28.192900822422317
- type: nauc_recall_at_5_max
value: 13.974726351715857
- type: nauc_recall_at_5_std
value: 8.09305084632621
- type: ndcg_at_1
value: 11.758000000000001
- type: ndcg_at_10
value: 15.862000000000002
- type: ndcg_at_100
value: 19.949
- type: ndcg_at_1000
value: 22.917
- type: ndcg_at_20
value: 17.249
- type: ndcg_at_3
value: 12.992
- type: ndcg_at_5
value: 14.266000000000002
- type: precision_at_1
value: 11.758000000000001
- type: precision_at_10
value: 2.82
- type: precision_at_100
value: 0.575
- type: precision_at_1000
value: 0.098
- type: precision_at_20
value: 1.7870000000000001
- type: precision_at_3
value: 5.822
- type: precision_at_5
value: 4.315
- type: recall_at_1
value: 9.831
- type: recall_at_10
value: 21.762999999999998
- type: recall_at_100
value: 40.207
- type: recall_at_1000
value: 61.635
- type: recall_at_20
value: 26.826
- type: recall_at_3
value: 13.969999999999999
- type: recall_at_5
value: 17.154
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackRetrieval (default)
revision: CQADupstackRetrieval is a combined dataset
split: test
type: CQADupstackRetrieval
metrics:
- type: main_score
value: 17.016083333333334
- type: ndcg_at_10
value: 17.016083333333334
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackStatsRetrieval (default)
revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a
split: test
type: mteb/cqadupstack-stats
metrics:
- type: main_score
value: 11.457
- type: map_at_1
value: 6.798
- type: map_at_10
value: 9.513
- type: map_at_100
value: 10.11
- type: map_at_1000
value: 10.181999999999999
- type: map_at_20
value: 9.852
- type: map_at_3
value: 8.459999999999999
- type: map_at_5
value: 9.095
- type: mrr_at_1
value: 8.43558282208589
- type: mrr_at_10
value: 11.242818190670953
- type: mrr_at_100
value: 11.841115877888047
- type: mrr_at_1000
value: 11.910635997616325
- type: mrr_at_20
value: 11.596258015622588
- type: mrr_at_3
value: 10.122699386503067
- type: mrr_at_5
value: 10.782208588957056
- type: nauc_map_at_1000_diff1
value: 33.754657655521825
- type: nauc_map_at_1000_max
value: 20.457874599194977
- type: nauc_map_at_1000_std
value: 4.356173597738065
- type: nauc_map_at_100_diff1
value: 33.75222679569881
- type: nauc_map_at_100_max
value: 20.373956157972724
- type: nauc_map_at_100_std
value: 4.252302912475765
- type: nauc_map_at_10_diff1
value: 34.77872705587748
- type: nauc_map_at_10_max
value: 20.93118729929346
- type: nauc_map_at_10_std
value: 3.481910641472398
- type: nauc_map_at_1_diff1
value: 42.058523271621276
- type: nauc_map_at_1_max
value: 19.398661310678737
- type: nauc_map_at_1_std
value: -1.9329828695069966
- type: nauc_map_at_20_diff1
value: 34.32132356844234
- type: nauc_map_at_20_max
value: 20.836011847513134
- type: nauc_map_at_20_std
value: 3.410902073845993
- type: nauc_map_at_3_diff1
value: 36.8129992491477
- type: nauc_map_at_3_max
value: 21.49364083314497
- type: nauc_map_at_3_std
value: 2.8543672506917117
- type: nauc_map_at_5_diff1
value: 35.945765614409595
- type: nauc_map_at_5_max
value: 21.821959253251073
- type: nauc_map_at_5_std
value: 3.1795889661755754
- type: nauc_mrr_at_1000_diff1
value: 33.022280754336535
- type: nauc_mrr_at_1000_max
value: 20.31974398955361
- type: nauc_mrr_at_1000_std
value: 6.915574901994777
- type: nauc_mrr_at_100_diff1
value: 32.98012701377776
- type: nauc_mrr_at_100_max
value: 20.217936050257485
- type: nauc_mrr_at_100_std
value: 6.853368541174533
- type: nauc_mrr_at_10_diff1
value: 34.0521482962105
- type: nauc_mrr_at_10_max
value: 20.594837283745004
- type: nauc_mrr_at_10_std
value: 6.58219400975866
- type: nauc_mrr_at_1_diff1
value: 40.45214208803864
- type: nauc_mrr_at_1_max
value: 20.246074459121917
- type: nauc_mrr_at_1_std
value: 3.6861996527886007
- type: nauc_mrr_at_20_diff1
value: 33.40956751827326
- type: nauc_mrr_at_20_max
value: 20.570275995460932
- type: nauc_mrr_at_20_std
value: 6.243011136595918
- type: nauc_mrr_at_3_diff1
value: 36.31911031414795
- type: nauc_mrr_at_3_max
value: 21.695701449295836
- type: nauc_mrr_at_3_std
value: 6.71267279773233
- type: nauc_mrr_at_5_diff1
value: 35.13580430980389
- type: nauc_mrr_at_5_max
value: 21.723293067977693
- type: nauc_mrr_at_5_std
value: 6.269186070012771
- type: nauc_ndcg_at_1000_diff1
value: 26.716650512928574
- type: nauc_ndcg_at_1000_max
value: 18.323227051095493
- type: nauc_ndcg_at_1000_std
value: 10.182374858813544
- type: nauc_ndcg_at_100_diff1
value: 27.023329777242445
- type: nauc_ndcg_at_100_max
value: 17.4041094989256
- type: nauc_ndcg_at_100_std
value: 8.607201276878204
- type: nauc_ndcg_at_10_diff1
value: 31.921453307307818
- type: nauc_ndcg_at_10_max
value: 20.328563944294817
- type: nauc_ndcg_at_10_std
value: 5.531328567900397
- type: nauc_ndcg_at_1_diff1
value: 40.45214208803864
- type: nauc_ndcg_at_1_max
value: 20.246074459121917
- type: nauc_ndcg_at_1_std
value: 3.6861996527886007
- type: nauc_ndcg_at_20_diff1
value: 30.279986443553863
- type: nauc_ndcg_at_20_max
value: 20.274259234859194
- type: nauc_ndcg_at_20_std
value: 5.0661641286538925
- type: nauc_ndcg_at_3_diff1
value: 35.40139952163887
- type: nauc_ndcg_at_3_max
value: 21.8390120280498
- type: nauc_ndcg_at_3_std
value: 5.417193004461638
- type: nauc_ndcg_at_5_diff1
value: 34.323991615044044
- type: nauc_ndcg_at_5_max
value: 22.44454175298003
- type: nauc_ndcg_at_5_std
value: 5.058913656381477
- type: nauc_precision_at_1000_diff1
value: 8.13341460956022
- type: nauc_precision_at_1000_max
value: 13.380869610400731
- type: nauc_precision_at_1000_std
value: 25.77566088719011
- type: nauc_precision_at_100_diff1
value: 12.028198307574947
- type: nauc_precision_at_100_max
value: 9.99491259218647
- type: nauc_precision_at_100_std
value: 20.26038939641748
- type: nauc_precision_at_10_diff1
value: 25.497863066445802
- type: nauc_precision_at_10_max
value: 19.951934819022966
- type: nauc_precision_at_10_std
value: 13.029428588116488
- type: nauc_precision_at_1_diff1
value: 40.45214208803864
- type: nauc_precision_at_1_max
value: 20.246074459121917
- type: nauc_precision_at_1_std
value: 3.6861996527886007
- type: nauc_precision_at_20_diff1
value: 21.270433967723527
- type: nauc_precision_at_20_max
value: 20.20704051155486
- type: nauc_precision_at_20_std
value: 10.606697205011349
- type: nauc_precision_at_3_diff1
value: 34.304974107764636
- type: nauc_precision_at_3_max
value: 24.786027767206704
- type: nauc_precision_at_3_std
value: 12.919584289443248
- type: nauc_precision_at_5_diff1
value: 31.235010233089454
- type: nauc_precision_at_5_max
value: 25.888178221422027
- type: nauc_precision_at_5_std
value: 12.04974180403603
- type: nauc_recall_at_1000_diff1
value: 10.70347303527697
- type: nauc_recall_at_1000_max
value: 11.531776655259092
- type: nauc_recall_at_1000_std
value: 20.09518174937834
- type: nauc_recall_at_100_diff1
value: 12.277161162587646
- type: nauc_recall_at_100_max
value: 9.031651314357903
- type: nauc_recall_at_100_std
value: 14.946530478779566
- type: nauc_recall_at_10_diff1
value: 25.751282561301597
- type: nauc_recall_at_10_max
value: 18.410538940956624
- type: nauc_recall_at_10_std
value: 7.052566618916148
- type: nauc_recall_at_1_diff1
value: 42.058523271621276
- type: nauc_recall_at_1_max
value: 19.398661310678737
- type: nauc_recall_at_1_std
value: -1.9329828695069966
- type: nauc_recall_at_20_diff1
value: 21.876105916783473
- type: nauc_recall_at_20_max
value: 18.14029808306082
- type: nauc_recall_at_20_std
value: 5.721370338729993
- type: nauc_recall_at_3_diff1
value: 32.349105117433645
- type: nauc_recall_at_3_max
value: 22.475284730157217
- type: nauc_recall_at_3_std
value: 6.577737452085277
- type: nauc_recall_at_5_diff1
value: 30.45726437530916
- type: nauc_recall_at_5_max
value: 22.993204324458517
- type: nauc_recall_at_5_std
value: 6.237822274407502
- type: ndcg_at_1
value: 8.436
- type: ndcg_at_10
value: 11.457
- type: ndcg_at_100
value: 14.618
- type: ndcg_at_1000
value: 16.803
- type: ndcg_at_20
value: 12.67
- type: ndcg_at_3
value: 9.396
- type: ndcg_at_5
value: 10.458
- type: precision_at_1
value: 8.436
- type: precision_at_10
value: 2.025
- type: precision_at_100
value: 0.391
- type: precision_at_1000
value: 0.063
- type: precision_at_20
value: 1.304
- type: precision_at_3
value: 4.192
- type: precision_at_5
value: 3.221
- type: recall_at_1
value: 6.798
- type: recall_at_10
value: 15.878999999999998
- type: recall_at_100
value: 30.768
- type: recall_at_1000
value: 47.451
- type: recall_at_20
value: 20.466
- type: recall_at_3
value: 10.224
- type: recall_at_5
value: 12.881
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackTexRetrieval (default)
revision: 46989137a86843e03a6195de44b09deda022eec7
split: test
type: mteb/cqadupstack-tex
metrics:
- type: main_score
value: 9.754999999999999
- type: map_at_1
value: 5.489999999999999
- type: map_at_10
value: 7.9350000000000005
- type: map_at_100
value: 8.376999999999999
- type: map_at_1000
value: 8.458
- type: map_at_20
value: 8.14
- type: map_at_3
value: 7.166
- type: map_at_5
value: 7.5840000000000005
- type: mrr_at_1
value: 7.054370268410186
- type: mrr_at_10
value: 9.948655764209787
- type: mrr_at_100
value: 10.44089540191581
- type: mrr_at_1000
value: 10.510808098620316
- type: mrr_at_20
value: 10.18549289814409
- type: mrr_at_3
value: 9.027299839412715
- type: mrr_at_5
value: 9.52626749254416
- type: nauc_map_at_1000_diff1
value: 32.76388527748132
- type: nauc_map_at_1000_max
value: 26.76472945437023
- type: nauc_map_at_1000_std
value: 5.076773141116664
- type: nauc_map_at_100_diff1
value: 32.84910041131489
- type: nauc_map_at_100_max
value: 26.776649275369763
- type: nauc_map_at_100_std
value: 4.982288267487467
- type: nauc_map_at_10_diff1
value: 33.69288297350157
- type: nauc_map_at_10_max
value: 27.030787162656093
- type: nauc_map_at_10_std
value: 4.319996549665479
- type: nauc_map_at_1_diff1
value: 45.07110295953283
- type: nauc_map_at_1_max
value: 31.183919870403624
- type: nauc_map_at_1_std
value: 3.2596636083232524
- type: nauc_map_at_20_diff1
value: 33.18385578478434
- type: nauc_map_at_20_max
value: 26.750880392311256
- type: nauc_map_at_20_std
value: 4.560028824060983
- type: nauc_map_at_3_diff1
value: 36.134060387060806
- type: nauc_map_at_3_max
value: 28.53718072767372
- type: nauc_map_at_3_std
value: 3.8039060416364054
- type: nauc_map_at_5_diff1
value: 34.85287692775015
- type: nauc_map_at_5_max
value: 27.89364342330856
- type: nauc_map_at_5_std
value: 4.119474259507159
- type: nauc_mrr_at_1000_diff1
value: 32.015809492076826
- type: nauc_mrr_at_1000_max
value: 27.431639711646994
- type: nauc_mrr_at_1000_std
value: 5.95554166485951
- type: nauc_mrr_at_100_diff1
value: 32.07039747646208
- type: nauc_mrr_at_100_max
value: 27.452847130237775
- type: nauc_mrr_at_100_std
value: 5.905310921828455
- type: nauc_mrr_at_10_diff1
value: 32.93108532798797
- type: nauc_mrr_at_10_max
value: 27.768472855609204
- type: nauc_mrr_at_10_std
value: 5.580104763303006
- type: nauc_mrr_at_1_diff1
value: 43.888408590108355
- type: nauc_mrr_at_1_max
value: 32.903967259484176
- type: nauc_mrr_at_1_std
value: 3.514629542175588
- type: nauc_mrr_at_20_diff1
value: 32.408176921975254
- type: nauc_mrr_at_20_max
value: 27.470576205679897
- type: nauc_mrr_at_20_std
value: 5.716181575723001
- type: nauc_mrr_at_3_diff1
value: 35.354655207362356
- type: nauc_mrr_at_3_max
value: 29.14309593167405
- type: nauc_mrr_at_3_std
value: 4.63189493416609
- type: nauc_mrr_at_5_diff1
value: 33.970622089384825
- type: nauc_mrr_at_5_max
value: 28.6239836688986
- type: nauc_mrr_at_5_std
value: 5.122010745650993
- type: nauc_ndcg_at_1000_diff1
value: 25.030181517448163
- type: nauc_ndcg_at_1000_max
value: 24.25419053775242
- type: nauc_ndcg_at_1000_std
value: 9.178235317241148
- type: nauc_ndcg_at_100_diff1
value: 26.546832760443966
- type: nauc_ndcg_at_100_max
value: 24.42201784253177
- type: nauc_ndcg_at_100_std
value: 7.9899910907634375
- type: nauc_ndcg_at_10_diff1
value: 29.856179532797423
- type: nauc_ndcg_at_10_max
value: 25.424197578846012
- type: nauc_ndcg_at_10_std
value: 5.1638300059562035
- type: nauc_ndcg_at_1_diff1
value: 43.888408590108355
- type: nauc_ndcg_at_1_max
value: 32.903967259484176
- type: nauc_ndcg_at_1_std
value: 3.514629542175588
- type: nauc_ndcg_at_20_diff1
value: 28.387788168718874
- type: nauc_ndcg_at_20_max
value: 24.54850515588615
- type: nauc_ndcg_at_20_std
value: 5.896669986261477
- type: nauc_ndcg_at_3_diff1
value: 34.072630397644424
- type: nauc_ndcg_at_3_max
value: 28.28910465749962
- type: nauc_ndcg_at_3_std
value: 4.108392335721374
- type: nauc_ndcg_at_5_diff1
value: 32.01123351290829
- type: nauc_ndcg_at_5_max
value: 27.245024254467303
- type: nauc_ndcg_at_5_std
value: 4.721870277645733
- type: nauc_precision_at_1000_diff1
value: 10.47217681263907
- type: nauc_precision_at_1000_max
value: 20.919793131324727
- type: nauc_precision_at_1000_std
value: 14.804007062294563
- type: nauc_precision_at_100_diff1
value: 16.685502515637722
- type: nauc_precision_at_100_max
value: 23.37373409901207
- type: nauc_precision_at_100_std
value: 13.953311698132442
- type: nauc_precision_at_10_diff1
value: 22.478790016325785
- type: nauc_precision_at_10_max
value: 23.607477242235102
- type: nauc_precision_at_10_std
value: 7.794068171304157
- type: nauc_precision_at_1_diff1
value: 43.888408590108355
- type: nauc_precision_at_1_max
value: 32.903967259484176
- type: nauc_precision_at_1_std
value: 3.514629542175588
- type: nauc_precision_at_20_diff1
value: 19.959179713421722
- type: nauc_precision_at_20_max
value: 21.738126842321893
- type: nauc_precision_at_20_std
value: 9.007914166096132
- type: nauc_precision_at_3_diff1
value: 29.984253127282134
- type: nauc_precision_at_3_max
value: 28.271022607772796
- type: nauc_precision_at_3_std
value: 5.620451575052563
- type: nauc_precision_at_5_diff1
value: 26.198401324939464
- type: nauc_precision_at_5_max
value: 26.593956126902786
- type: nauc_precision_at_5_std
value: 6.684705108310583
- type: nauc_recall_at_1000_diff1
value: 9.812234445343657
- type: nauc_recall_at_1000_max
value: 17.800710147129053
- type: nauc_recall_at_1000_std
value: 15.826278320231745
- type: nauc_recall_at_100_diff1
value: 14.586175748060896
- type: nauc_recall_at_100_max
value: 18.340956025066333
- type: nauc_recall_at_100_std
value: 12.791161727474043
- type: nauc_recall_at_10_diff1
value: 21.286255365948538
- type: nauc_recall_at_10_max
value: 20.04866550317387
- type: nauc_recall_at_10_std
value: 5.645106302785361
- type: nauc_recall_at_1_diff1
value: 45.07110295953283
- type: nauc_recall_at_1_max
value: 31.183919870403624
- type: nauc_recall_at_1_std
value: 3.2596636083232524
- type: nauc_recall_at_20_diff1
value: 18.757519729175094
- type: nauc_recall_at_20_max
value: 18.59809411356838
- type: nauc_recall_at_20_std
value: 7.482712453171494
- type: nauc_recall_at_3_diff1
value: 29.350550830882405
- type: nauc_recall_at_3_max
value: 26.26284543188125
- type: nauc_recall_at_3_std
value: 4.284032658092434
- type: nauc_recall_at_5_diff1
value: 25.247444183841345
- type: nauc_recall_at_5_max
value: 23.639030774195213
- type: nauc_recall_at_5_std
value: 5.05748857090612
- type: ndcg_at_1
value: 7.054
- type: ndcg_at_10
value: 9.754999999999999
- type: ndcg_at_100
value: 12.252
- type: ndcg_at_1000
value: 14.658999999999999
- type: ndcg_at_20
value: 10.508000000000001
- type: ndcg_at_3
value: 8.265
- type: ndcg_at_5
value: 8.929
- type: precision_at_1
value: 7.054
- type: precision_at_10
value: 1.807
- type: precision_at_100
value: 0.368
- type: precision_at_1000
value: 0.06899999999999999
- type: precision_at_20
value: 1.1199999999999999
- type: precision_at_3
value: 3.9690000000000003
- type: precision_at_5
value: 2.863
- type: recall_at_1
value: 5.489999999999999
- type: recall_at_10
value: 13.422
- type: recall_at_100
value: 24.962999999999997
- type: recall_at_1000
value: 42.725
- type: recall_at_20
value: 16.259
- type: recall_at_3
value: 9.155000000000001
- type: recall_at_5
value: 10.923
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackUnixRetrieval (default)
revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53
split: test
type: mteb/cqadupstack-unix
metrics:
- type: main_score
value: 16.884
- type: map_at_1
value: 11.259
- type: map_at_10
value: 14.371999999999998
- type: map_at_100
value: 14.921999999999999
- type: map_at_1000
value: 15.012
- type: map_at_20
value: 14.643
- type: map_at_3
value: 13.196
- type: map_at_5
value: 13.786000000000001
- type: mrr_at_1
value: 13.619402985074627
- type: mrr_at_10
value: 17.155739161336175
- type: mrr_at_100
value: 17.682382182436477
- type: mrr_at_1000
value: 17.762865075369113
- type: mrr_at_20
value: 17.394179616617638
- type: mrr_at_3
value: 15.951492537313436
- type: mrr_at_5
value: 16.497201492537318
- type: nauc_map_at_1000_diff1
value: 47.4265740975564
- type: nauc_map_at_1000_max
value: 28.882262726128438
- type: nauc_map_at_1000_std
value: 8.733456805684261
- type: nauc_map_at_100_diff1
value: 47.47182414534892
- type: nauc_map_at_100_max
value: 28.85824710228484
- type: nauc_map_at_100_std
value: 8.689373453465027
- type: nauc_map_at_10_diff1
value: 48.02651594284678
- type: nauc_map_at_10_max
value: 29.238822235344035
- type: nauc_map_at_10_std
value: 8.33007800978345
- type: nauc_map_at_1_diff1
value: 56.39452680423106
- type: nauc_map_at_1_max
value: 32.60008414160042
- type: nauc_map_at_1_std
value: 6.843961503288069
- type: nauc_map_at_20_diff1
value: 47.63901968476526
- type: nauc_map_at_20_max
value: 29.025324617088327
- type: nauc_map_at_20_std
value: 8.643210479120588
- type: nauc_map_at_3_diff1
value: 49.40628498975407
- type: nauc_map_at_3_max
value: 30.22948877331367
- type: nauc_map_at_3_std
value: 7.289154264399903
- type: nauc_map_at_5_diff1
value: 48.664130342694136
- type: nauc_map_at_5_max
value: 30.14327671294244
- type: nauc_map_at_5_std
value: 7.939333631753251
- type: nauc_mrr_at_1000_diff1
value: 44.58799837398294
- type: nauc_mrr_at_1000_max
value: 31.03541915705859
- type: nauc_mrr_at_1000_std
value: 10.403824515337941
- type: nauc_mrr_at_100_diff1
value: 44.601824537567715
- type: nauc_mrr_at_100_max
value: 31.02756566133194
- type: nauc_mrr_at_100_std
value: 10.374041246429492
- type: nauc_mrr_at_10_diff1
value: 45.08809081749144
- type: nauc_mrr_at_10_max
value: 31.57615351364963
- type: nauc_mrr_at_10_std
value: 10.29441865771061
- type: nauc_mrr_at_1_diff1
value: 53.78193049233505
- type: nauc_mrr_at_1_max
value: 35.795787308983364
- type: nauc_mrr_at_1_std
value: 9.700924818901061
- type: nauc_mrr_at_20_diff1
value: 44.74335182043816
- type: nauc_mrr_at_20_max
value: 31.18129900426782
- type: nauc_mrr_at_20_std
value: 10.385325054118825
- type: nauc_mrr_at_3_diff1
value: 46.73779708259278
- type: nauc_mrr_at_3_max
value: 32.65075209697959
- type: nauc_mrr_at_3_std
value: 9.728066031213869
- type: nauc_mrr_at_5_diff1
value: 45.92982408736637
- type: nauc_mrr_at_5_max
value: 32.467526279204826
- type: nauc_mrr_at_5_std
value: 9.989919602029717
- type: nauc_ndcg_at_1000_diff1
value: 40.92066479403982
- type: nauc_ndcg_at_1000_max
value: 26.324838581358712
- type: nauc_ndcg_at_1000_std
value: 11.523782722688093
- type: nauc_ndcg_at_100_diff1
value: 41.69901831802912
- type: nauc_ndcg_at_100_max
value: 26.05948550508969
- type: nauc_ndcg_at_100_std
value: 10.741879131890466
- type: nauc_ndcg_at_10_diff1
value: 43.984470289795006
- type: nauc_ndcg_at_10_max
value: 27.712165270383217
- type: nauc_ndcg_at_10_std
value: 9.664252780617716
- type: nauc_ndcg_at_1_diff1
value: 53.78193049233505
- type: nauc_ndcg_at_1_max
value: 35.795787308983364
- type: nauc_ndcg_at_1_std
value: 9.700924818901061
- type: nauc_ndcg_at_20_diff1
value: 42.87969088645589
- type: nauc_ndcg_at_20_max
value: 26.93508319676996
- type: nauc_ndcg_at_20_std
value: 10.383528785973736
- type: nauc_ndcg_at_3_diff1
value: 46.50711903290246
- type: nauc_ndcg_at_3_max
value: 30.119861670148136
- type: nauc_ndcg_at_3_std
value: 8.209698597192652
- type: nauc_ndcg_at_5_diff1
value: 45.5276661506903
- type: nauc_ndcg_at_5_max
value: 29.727216155363013
- type: nauc_ndcg_at_5_std
value: 8.969137019208551
- type: nauc_precision_at_1000_diff1
value: 13.186344514919291
- type: nauc_precision_at_1000_max
value: 14.081180493706894
- type: nauc_precision_at_1000_std
value: 13.331957277782028
- type: nauc_precision_at_100_diff1
value: 25.836947568988094
- type: nauc_precision_at_100_max
value: 19.399450264723857
- type: nauc_precision_at_100_std
value: 15.996979763079173
- type: nauc_precision_at_10_diff1
value: 31.611911937904136
- type: nauc_precision_at_10_max
value: 23.67106809118961
- type: nauc_precision_at_10_std
value: 12.494002491494403
- type: nauc_precision_at_1_diff1
value: 53.78193049233505
- type: nauc_precision_at_1_max
value: 35.795787308983364
- type: nauc_precision_at_1_std
value: 9.700924818901061
- type: nauc_precision_at_20_diff1
value: 28.52666886145722
- type: nauc_precision_at_20_max
value: 21.954240311035203
- type: nauc_precision_at_20_std
value: 14.844645388086807
- type: nauc_precision_at_3_diff1
value: 38.45498467923997
- type: nauc_precision_at_3_max
value: 29.266449529306882
- type: nauc_precision_at_3_std
value: 9.049210381929473
- type: nauc_precision_at_5_diff1
value: 36.09730656980118
- type: nauc_precision_at_5_max
value: 28.837127135797243
- type: nauc_precision_at_5_std
value: 11.158339114522931
- type: nauc_recall_at_1000_diff1
value: 21.260887713456125
- type: nauc_recall_at_1000_max
value: 16.113129212962036
- type: nauc_recall_at_1000_std
value: 18.480136835190926
- type: nauc_recall_at_100_diff1
value: 27.104482564680143
- type: nauc_recall_at_100_max
value: 15.992106261015381
- type: nauc_recall_at_100_std
value: 13.84189240491372
- type: nauc_recall_at_10_diff1
value: 35.07971219401454
- type: nauc_recall_at_10_max
value: 21.285398091407597
- type: nauc_recall_at_10_std
value: 11.2371939944325
- type: nauc_recall_at_1_diff1
value: 56.39452680423106
- type: nauc_recall_at_1_max
value: 32.60008414160042
- type: nauc_recall_at_1_std
value: 6.843961503288069
- type: nauc_recall_at_20_diff1
value: 32.39512106898805
- type: nauc_recall_at_20_max
value: 19.218626368924355
- type: nauc_recall_at_20_std
value: 12.883976865810729
- type: nauc_recall_at_3_diff1
value: 42.44181844531972
- type: nauc_recall_at_3_max
value: 26.878784537566723
- type: nauc_recall_at_3_std
value: 8.021682738108238
- type: nauc_recall_at_5_diff1
value: 39.71281577688504
- type: nauc_recall_at_5_max
value: 26.741868241320095
- type: nauc_recall_at_5_std
value: 9.776821004059626
- type: ndcg_at_1
value: 13.619
- type: ndcg_at_10
value: 16.884
- type: ndcg_at_100
value: 19.919999999999998
- type: ndcg_at_1000
value: 22.61
- type: ndcg_at_20
value: 17.802
- type: ndcg_at_3
value: 14.601
- type: ndcg_at_5
value: 15.47
- type: precision_at_1
value: 13.619
- type: precision_at_10
value: 2.8080000000000003
- type: precision_at_100
value: 0.485
- type: precision_at_1000
value: 0.08099999999999999
- type: precision_at_20
value: 1.66
- type: precision_at_3
value: 6.468
- type: precision_at_5
value: 4.496
- type: recall_at_1
value: 11.259
- type: recall_at_10
value: 22.148
- type: recall_at_100
value: 36.338
- type: recall_at_1000
value: 56.37
- type: recall_at_20
value: 25.444
- type: recall_at_3
value: 15.601
- type: recall_at_5
value: 17.904999999999998
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackWebmastersRetrieval (default)
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
split: test
type: mteb/cqadupstack-webmasters
metrics:
- type: main_score
value: 18.986
- type: map_at_1
value: 11.219
- type: map_at_10
value: 15.572
- type: map_at_100
value: 16.496
- type: map_at_1000
value: 16.666
- type: map_at_20
value: 16.073999999999998
- type: map_at_3
value: 14.173
- type: map_at_5
value: 14.915000000000001
- type: mrr_at_1
value: 14.82213438735178
- type: mrr_at_10
value: 19.52365267582659
- type: mrr_at_100
value: 20.370290185635753
- type: mrr_at_1000
value: 20.467043542503724
- type: mrr_at_20
value: 20.0766545965337
- type: mrr_at_3
value: 18.21475625823452
- type: mrr_at_5
value: 18.945981554677203
- type: nauc_map_at_1000_diff1
value: 42.231943470301474
- type: nauc_map_at_1000_max
value: 26.47159454229298
- type: nauc_map_at_1000_std
value: 8.142899408562116
- type: nauc_map_at_100_diff1
value: 42.20734027834296
- type: nauc_map_at_100_max
value: 26.482392045352114
- type: nauc_map_at_100_std
value: 7.869302970334234
- type: nauc_map_at_10_diff1
value: 43.04836148095647
- type: nauc_map_at_10_max
value: 26.854456008820886
- type: nauc_map_at_10_std
value: 7.199117428761973
- type: nauc_map_at_1_diff1
value: 52.69584045825562
- type: nauc_map_at_1_max
value: 32.26169513753074
- type: nauc_map_at_1_std
value: 6.952498233745584
- type: nauc_map_at_20_diff1
value: 42.41625410983439
- type: nauc_map_at_20_max
value: 26.907750306130733
- type: nauc_map_at_20_std
value: 7.478967739706924
- type: nauc_map_at_3_diff1
value: 44.785788923058384
- type: nauc_map_at_3_max
value: 27.412957229850438
- type: nauc_map_at_3_std
value: 6.907258583517531
- type: nauc_map_at_5_diff1
value: 43.634053742171005
- type: nauc_map_at_5_max
value: 27.311414645244174
- type: nauc_map_at_5_std
value: 6.782368796408486
- type: nauc_mrr_at_1000_diff1
value: 40.121034147067355
- type: nauc_mrr_at_1000_max
value: 26.418816188019484
- type: nauc_mrr_at_1000_std
value: 11.036789931313589
- type: nauc_mrr_at_100_diff1
value: 40.09038771859193
- type: nauc_mrr_at_100_max
value: 26.35109915559335
- type: nauc_mrr_at_100_std
value: 11.004694419173386
- type: nauc_mrr_at_10_diff1
value: 40.70815905748883
- type: nauc_mrr_at_10_max
value: 26.39730116006313
- type: nauc_mrr_at_10_std
value: 10.795296410891202
- type: nauc_mrr_at_1_diff1
value: 49.49023740663914
- type: nauc_mrr_at_1_max
value: 32.80752877856241
- type: nauc_mrr_at_1_std
value: 9.182609293548452
- type: nauc_mrr_at_20_diff1
value: 40.09097766117321
- type: nauc_mrr_at_20_max
value: 26.543696500831608
- type: nauc_mrr_at_20_std
value: 11.045110550071236
- type: nauc_mrr_at_3_diff1
value: 42.547772290792786
- type: nauc_mrr_at_3_max
value: 27.248503683439974
- type: nauc_mrr_at_3_std
value: 11.12811144130018
- type: nauc_mrr_at_5_diff1
value: 41.182672458130945
- type: nauc_mrr_at_5_max
value: 27.204022967551346
- type: nauc_mrr_at_5_std
value: 10.736058227235059
- type: nauc_ndcg_at_1000_diff1
value: 38.283155226012525
- type: nauc_ndcg_at_1000_max
value: 23.952454186870728
- type: nauc_ndcg_at_1000_std
value: 11.202190633221258
- type: nauc_ndcg_at_100_diff1
value: 37.28326924063582
- type: nauc_ndcg_at_100_max
value: 23.059861557232345
- type: nauc_ndcg_at_100_std
value: 9.94550524440808
- type: nauc_ndcg_at_10_diff1
value: 39.63812221599438
- type: nauc_ndcg_at_10_max
value: 24.35015593369919
- type: nauc_ndcg_at_10_std
value: 9.315660164781054
- type: nauc_ndcg_at_1_diff1
value: 49.49023740663914
- type: nauc_ndcg_at_1_max
value: 32.80752877856241
- type: nauc_ndcg_at_1_std
value: 9.182609293548452
- type: nauc_ndcg_at_20_diff1
value: 37.63726489914318
- type: nauc_ndcg_at_20_max
value: 24.728684570593007
- type: nauc_ndcg_at_20_std
value: 9.986169134250208
- type: nauc_ndcg_at_3_diff1
value: 41.86142781421585
- type: nauc_ndcg_at_3_max
value: 25.373436332199645
- type: nauc_ndcg_at_3_std
value: 9.66682128586139
- type: nauc_ndcg_at_5_diff1
value: 40.642745287564594
- type: nauc_ndcg_at_5_max
value: 25.56873621658099
- type: nauc_ndcg_at_5_std
value: 9.25538178041856
- type: nauc_precision_at_1000_diff1
value: 11.480722649998393
- type: nauc_precision_at_1000_max
value: 1.8213948061833445
- type: nauc_precision_at_1000_std
value: 29.23515602956654
- type: nauc_precision_at_100_diff1
value: 14.18816101118032
- type: nauc_precision_at_100_max
value: 2.440318670740079
- type: nauc_precision_at_100_std
value: 29.24020499259622
- type: nauc_precision_at_10_diff1
value: 27.712287052106255
- type: nauc_precision_at_10_max
value: 16.786789482138776
- type: nauc_precision_at_10_std
value: 14.310510991471832
- type: nauc_precision_at_1_diff1
value: 49.49023740663914
- type: nauc_precision_at_1_max
value: 32.80752877856241
- type: nauc_precision_at_1_std
value: 9.182609293548452
- type: nauc_precision_at_20_diff1
value: 20.46872198920085
- type: nauc_precision_at_20_max
value: 14.825240542929851
- type: nauc_precision_at_20_std
value: 20.953665146043296
- type: nauc_precision_at_3_diff1
value: 36.03554983971536
- type: nauc_precision_at_3_max
value: 21.854122073954194
- type: nauc_precision_at_3_std
value: 13.04509621136731
- type: nauc_precision_at_5_diff1
value: 32.79763412951098
- type: nauc_precision_at_5_max
value: 21.11796990161242
- type: nauc_precision_at_5_std
value: 13.431327120495338
- type: nauc_recall_at_1000_diff1
value: 30.09802696990947
- type: nauc_recall_at_1000_max
value: 13.40584644567289
- type: nauc_recall_at_1000_std
value: 16.521370765894975
- type: nauc_recall_at_100_diff1
value: 26.309114191114602
- type: nauc_recall_at_100_max
value: 13.350873360428366
- type: nauc_recall_at_100_std
value: 11.078547445094047
- type: nauc_recall_at_10_diff1
value: 31.32014394352729
- type: nauc_recall_at_10_max
value: 18.345182060137695
- type: nauc_recall_at_10_std
value: 9.128692650287276
- type: nauc_recall_at_1_diff1
value: 52.69584045825562
- type: nauc_recall_at_1_max
value: 32.26169513753074
- type: nauc_recall_at_1_std
value: 6.952498233745584
- type: nauc_recall_at_20_diff1
value: 25.40389262415684
- type: nauc_recall_at_20_max
value: 19.21175870928344
- type: nauc_recall_at_20_std
value: 10.924171074066592
- type: nauc_recall_at_3_diff1
value: 38.07498529415478
- type: nauc_recall_at_3_max
value: 21.675031784523334
- type: nauc_recall_at_3_std
value: 7.885136540556627
- type: nauc_recall_at_5_diff1
value: 33.03739602855325
- type: nauc_recall_at_5_max
value: 20.891017025098222
- type: nauc_recall_at_5_std
value: 7.259719761129051
- type: ndcg_at_1
value: 14.822
- type: ndcg_at_10
value: 18.986
- type: ndcg_at_100
value: 22.996
- type: ndcg_at_1000
value: 26.569
- type: ndcg_at_20
value: 20.62
- type: ndcg_at_3
value: 16.778000000000002
- type: ndcg_at_5
value: 17.742
- type: precision_at_1
value: 14.822
- type: precision_at_10
value: 3.755
- type: precision_at_100
value: 0.8540000000000001
- type: precision_at_1000
value: 0.163
- type: precision_at_20
value: 2.4899999999999998
- type: precision_at_3
value: 8.235000000000001
- type: precision_at_5
value: 5.968
- type: recall_at_1
value: 11.219
- type: recall_at_10
value: 24.784
- type: recall_at_100
value: 43.143
- type: recall_at_1000
value: 68.416
- type: recall_at_20
value: 31.266
- type: recall_at_3
value: 17.607999999999997
- type: recall_at_5
value: 20.468
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackWordpressRetrieval (default)
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
split: test
type: mteb/cqadupstack-wordpress
metrics:
- type: main_score
value: 14.105
- type: map_at_1
value: 9.766
- type: map_at_10
value: 12.35
- type: map_at_100
value: 12.794
- type: map_at_1000
value: 12.876000000000001
- type: map_at_20
value: 12.548
- type: map_at_3
value: 11.583
- type: map_at_5
value: 11.855
- type: mrr_at_1
value: 10.35120147874307
- type: mrr_at_10
value: 13.323137634597895
- type: mrr_at_100
value: 13.8122389813538
- type: mrr_at_1000
value: 13.891191650266954
- type: mrr_at_20
value: 13.550088548700803
- type: mrr_at_3
value: 12.41528034504005
- type: mrr_at_5
value: 12.74799753542822
- type: nauc_map_at_1000_diff1
value: 30.214009272387493
- type: nauc_map_at_1000_max
value: 27.100911874185957
- type: nauc_map_at_1000_std
value: 4.556062715371813
- type: nauc_map_at_100_diff1
value: 30.283972909659536
- type: nauc_map_at_100_max
value: 27.101751795355376
- type: nauc_map_at_100_std
value: 4.530095632746722
- type: nauc_map_at_10_diff1
value: 30.703580851962275
- type: nauc_map_at_10_max
value: 27.45889128777842
- type: nauc_map_at_10_std
value: 4.056332236709348
- type: nauc_map_at_1_diff1
value: 38.44336021108366
- type: nauc_map_at_1_max
value: 31.341289082946698
- type: nauc_map_at_1_std
value: 5.249357458733503
- type: nauc_map_at_20_diff1
value: 30.50519884637743
- type: nauc_map_at_20_max
value: 27.340643104548395
- type: nauc_map_at_20_std
value: 4.165692308941953
- type: nauc_map_at_3_diff1
value: 32.38602261885505
- type: nauc_map_at_3_max
value: 28.903602549949543
- type: nauc_map_at_3_std
value: 3.5402281277974756
- type: nauc_map_at_5_diff1
value: 32.2685825283353
- type: nauc_map_at_5_max
value: 28.485087249150176
- type: nauc_map_at_5_std
value: 3.8418506057303445
- type: nauc_mrr_at_1000_diff1
value: 30.308168307291954
- type: nauc_mrr_at_1000_max
value: 26.895198553568438
- type: nauc_mrr_at_1000_std
value: 6.332711766194871
- type: nauc_mrr_at_100_diff1
value: 30.366219069831494
- type: nauc_mrr_at_100_max
value: 26.88024956005868
- type: nauc_mrr_at_100_std
value: 6.328345475093812
- type: nauc_mrr_at_10_diff1
value: 30.60181659497291
- type: nauc_mrr_at_10_max
value: 27.33947661988829
- type: nauc_mrr_at_10_std
value: 5.98212349517898
- type: nauc_mrr_at_1_diff1
value: 38.01665824488639
- type: nauc_mrr_at_1_max
value: 31.273295508014538
- type: nauc_mrr_at_1_std
value: 7.49596621052432
- type: nauc_mrr_at_20_diff1
value: 30.504642171833616
- type: nauc_mrr_at_20_max
value: 27.093254296264142
- type: nauc_mrr_at_20_std
value: 6.011940896215445
- type: nauc_mrr_at_3_diff1
value: 32.30298334779263
- type: nauc_mrr_at_3_max
value: 28.46795259170204
- type: nauc_mrr_at_3_std
value: 5.233276939737523
- type: nauc_mrr_at_5_diff1
value: 32.317520734292316
- type: nauc_mrr_at_5_max
value: 28.31645764893187
- type: nauc_mrr_at_5_std
value: 5.514394216402804
- type: nauc_ndcg_at_1000_diff1
value: 25.46804692303833
- type: nauc_ndcg_at_1000_max
value: 24.577578434016004
- type: nauc_ndcg_at_1000_std
value: 8.08099372903191
- type: nauc_ndcg_at_100_diff1
value: 25.7728600426837
- type: nauc_ndcg_at_100_max
value: 23.852719795214735
- type: nauc_ndcg_at_100_std
value: 7.271020641236757
- type: nauc_ndcg_at_10_diff1
value: 27.787864887098827
- type: nauc_ndcg_at_10_max
value: 25.82070997315848
- type: nauc_ndcg_at_10_std
value: 4.84958725429997
- type: nauc_ndcg_at_1_diff1
value: 38.01665824488639
- type: nauc_ndcg_at_1_max
value: 31.273295508014538
- type: nauc_ndcg_at_1_std
value: 7.49596621052432
- type: nauc_ndcg_at_20_diff1
value: 27.23687052702463
- type: nauc_ndcg_at_20_max
value: 25.3030643349024
- type: nauc_ndcg_at_20_std
value: 5.128184329356223
- type: nauc_ndcg_at_3_diff1
value: 30.94323024403614
- type: nauc_ndcg_at_3_max
value: 28.112791463025488
- type: nauc_ndcg_at_3_std
value: 3.4748257092667845
- type: nauc_ndcg_at_5_diff1
value: 30.979886062267525
- type: nauc_ndcg_at_5_max
value: 27.832062407091833
- type: nauc_ndcg_at_5_std
value: 4.066523891816962
- type: nauc_precision_at_1000_diff1
value: 13.717212581088436
- type: nauc_precision_at_1000_max
value: 14.726337919465527
- type: nauc_precision_at_1000_std
value: 19.286677279311952
- type: nauc_precision_at_100_diff1
value: 13.83440364507339
- type: nauc_precision_at_100_max
value: 13.983610901499812
- type: nauc_precision_at_100_std
value: 17.767107323199852
- type: nauc_precision_at_10_diff1
value: 18.989269379083463
- type: nauc_precision_at_10_max
value: 20.291510121396815
- type: nauc_precision_at_10_std
value: 8.518048232551553
- type: nauc_precision_at_1_diff1
value: 38.01665824488639
- type: nauc_precision_at_1_max
value: 31.273295508014538
- type: nauc_precision_at_1_std
value: 7.49596621052432
- type: nauc_precision_at_20_diff1
value: 18.381866045394073
- type: nauc_precision_at_20_max
value: 18.90966326296592
- type: nauc_precision_at_20_std
value: 9.141677018751377
- type: nauc_precision_at_3_diff1
value: 26.100613624838605
- type: nauc_precision_at_3_max
value: 24.76218487581011
- type: nauc_precision_at_3_std
value: 2.4322989886641495
- type: nauc_precision_at_5_diff1
value: 26.83172966704407
- type: nauc_precision_at_5_max
value: 24.090343452479146
- type: nauc_precision_at_5_std
value: 4.535854021501322
- type: nauc_recall_at_1000_diff1
value: 13.245456056842464
- type: nauc_recall_at_1000_max
value: 19.61498051994092
- type: nauc_recall_at_1000_std
value: 17.188990206491262
- type: nauc_recall_at_100_diff1
value: 14.025440613222711
- type: nauc_recall_at_100_max
value: 15.06663046965985
- type: nauc_recall_at_100_std
value: 12.610345211569749
- type: nauc_recall_at_10_diff1
value: 21.102550210495654
- type: nauc_recall_at_10_max
value: 21.76066577972798
- type: nauc_recall_at_10_std
value: 5.1852219341177115
- type: nauc_recall_at_1_diff1
value: 38.44336021108366
- type: nauc_recall_at_1_max
value: 31.341289082946698
- type: nauc_recall_at_1_std
value: 5.249357458733503
- type: nauc_recall_at_20_diff1
value: 19.281075192679307
- type: nauc_recall_at_20_max
value: 20.050580691482935
- type: nauc_recall_at_20_std
value: 5.836669306240979
- type: nauc_recall_at_3_diff1
value: 27.334543456325626
- type: nauc_recall_at_3_max
value: 26.711101790009558
- type: nauc_recall_at_3_std
value: 2.3329176939418037
- type: nauc_recall_at_5_diff1
value: 27.75488164284888
- type: nauc_recall_at_5_max
value: 26.285171746330576
- type: nauc_recall_at_5_std
value: 3.361376753158064
- type: ndcg_at_1
value: 10.351
- type: ndcg_at_10
value: 14.105
- type: ndcg_at_100
value: 16.765
- type: ndcg_at_1000
value: 19.220000000000002
- type: ndcg_at_20
value: 14.82
- type: ndcg_at_3
value: 12.398000000000001
- type: ndcg_at_5
value: 12.879999999999999
- type: precision_at_1
value: 10.351
- type: precision_at_10
value: 2.144
- type: precision_at_100
value: 0.373
- type: precision_at_1000
value: 0.062
- type: precision_at_20
value: 1.238
- type: precision_at_3
value: 5.114
- type: precision_at_5
value: 3.401
- type: recall_at_1
value: 9.766
- type: recall_at_10
value: 18.595
- type: recall_at_100
value: 31.669999999999998
- type: recall_at_1000
value: 50.659
- type: recall_at_20
value: 21.248
- type: recall_at_3
value: 13.876
- type: recall_at_5
value: 15.015
task:
type: Retrieval
- dataset:
config: default
name: MTEB CUADAffiliateLicenseLicenseeLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 73.73737373737373
- type: ap
value: 65.8818399825594
- type: ap_weighted
value: 65.8818399825594
- type: f1
value: 72.61993404956918
- type: f1_weighted
value: 72.61993404956918
- type: main_score
value: 73.73737373737373
task:
type: Classification
- dataset:
config: default
name: MTEB CUADAffiliateLicenseLicensorLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 79.54545454545453
- type: ap
value: 73.12252964426878
- type: ap_weighted
value: 73.12252964426878
- type: f1
value: 79.53488372093022
- type: f1_weighted
value: 79.53488372093024
- type: main_score
value: 79.54545454545453
task:
type: Classification
- dataset:
config: default
name: MTEB CUADAntiAssignmentLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 70.64846416382251
- type: ap
value: 63.215973012261415
- type: ap_weighted
value: 63.215973012261415
- type: f1
value: 68.89855743269304
- type: f1_weighted
value: 68.89855743269304
- type: main_score
value: 70.64846416382251
task:
type: Classification
- dataset:
config: default
name: MTEB CUADAuditRightsLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 60.44407894736842
- type: ap
value: 57.470171721677076
- type: ap_weighted
value: 57.470171721677076
- type: f1
value: 57.63732113071247
- type: f1_weighted
value: 57.63732113071247
- type: main_score
value: 60.44407894736842
task:
type: Classification
- dataset:
config: default
name: MTEB CUADCapOnLiabilityLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 49.518459069020864
- type: ap
value: 49.761431703402096
- type: ap_weighted
value: 49.761431703402096
- type: f1
value: 49.48302433823829
- type: f1_weighted
value: 49.48302433823827
- type: main_score
value: 49.518459069020864
task:
type: Classification
- dataset:
config: default
name: MTEB CUADChangeOfControlLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 71.875
- type: ap
value: 64.42982456140352
- type: ap_weighted
value: 64.42982456140352
- type: f1
value: 70.87723707120934
- type: f1_weighted
value: 70.8772370712093
- type: main_score
value: 71.875
task:
type: Classification
- dataset:
config: default
name: MTEB CUADCompetitiveRestrictionExceptionLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 53.181818181818194
- type: ap
value: 51.65110565110565
- type: ap_weighted
value: 51.65110565110565
- type: f1
value: 47.02513150204559
- type: f1_weighted
value: 47.025131502045596
- type: main_score
value: 53.181818181818194
task:
type: Classification
- dataset:
config: default
name: MTEB CUADCovenantNotToSueLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 67.53246753246754
- type: ap
value: 60.65974025974026
- type: ap_weighted
value: 60.65974025974026
- type: f1
value: 64.03885671586028
- type: f1_weighted
value: 64.03885671586026
- type: main_score
value: 67.53246753246754
task:
type: Classification
- dataset:
config: default
name: MTEB CUADEffectiveDateLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 56.35593220338983
- type: ap
value: 53.54749704375246
- type: ap_weighted
value: 53.54749704375246
- type: f1
value: 56.26090868196132
- type: f1_weighted
value: 56.26090868196131
- type: main_score
value: 56.35593220338983
task:
type: Classification
- dataset:
config: default
name: MTEB CUADExclusivityLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 61.154855643044606
- type: ap
value: 56.35333840225783
- type: ap_weighted
value: 56.35333840225783
- type: f1
value: 57.26109628910987
- type: f1_weighted
value: 57.26109628910987
- type: main_score
value: 61.154855643044606
task:
type: Classification
- dataset:
config: default
name: MTEB CUADExpirationDateLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 80.82191780821917
- type: ap
value: 77.03374913905259
- type: ap_weighted
value: 77.03374913905259
- type: f1
value: 80.66062530224343
- type: f1_weighted
value: 80.66062530224343
- type: main_score
value: 80.82191780821917
task:
type: Classification
- dataset:
config: default
name: MTEB CUADGoverningLawLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 92.12328767123289
- type: ap
value: 88.44810149857499
- type: ap_weighted
value: 88.44810149857499
- type: f1
value: 92.12245616092896
- type: f1_weighted
value: 92.12245616092899
- type: main_score
value: 92.12328767123289
task:
type: Classification
- dataset:
config: default
name: MTEB CUADIPOwnershipAssignmentLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 64.0625
- type: ap
value: 59.78260869565217
- type: ap_weighted
value: 59.78260869565217
- type: f1
value: 63.33748443337483
- type: f1_weighted
value: 63.33748443337485
- type: main_score
value: 64.0625
task:
type: Classification
- dataset:
config: default
name: MTEB CUADInsuranceLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 80.3883495145631
- type: ap
value: 76.65387764650838
- type: ap_weighted
value: 76.65387764650838
- type: f1
value: 80.20173184889143
- type: f1_weighted
value: 80.20173184889143
- type: main_score
value: 80.3883495145631
task:
type: Classification
- dataset:
config: default
name: MTEB CUADIrrevocableOrPerpetualLicenseLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 78.21428571428572
- type: ap
value: 70.19711163153788
- type: ap_weighted
value: 70.19711163153788
- type: f1
value: 77.68807722955938
- type: f1_weighted
value: 77.6880772295594
- type: main_score
value: 78.21428571428572
task:
type: Classification
- dataset:
config: default
name: MTEB CUADJointIPOwnershipLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 85.9375
- type: ap
value: 79.55607476635514
- type: ap_weighted
value: 79.55607476635514
- type: f1
value: 85.89119015866969
- type: f1_weighted
value: 85.89119015866969
- type: main_score
value: 85.9375
task:
type: Classification
- dataset:
config: default
name: MTEB CUADLicenseGrantLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 72.56446991404013
- type: ap
value: 65.06701026209069
- type: ap_weighted
value: 65.06701026209069
- type: f1
value: 71.72168495320604
- type: f1_weighted
value: 71.72168495320604
- type: main_score
value: 72.56446991404013
task:
type: Classification
- dataset:
config: default
name: MTEB CUADLiquidatedDamagesLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 80.45454545454544
- type: ap
value: 73.2605583392985
- type: ap_weighted
value: 73.2605583392985
- type: f1
value: 80.33713703726801
- type: f1_weighted
value: 80.33713703726798
- type: main_score
value: 80.45454545454544
task:
type: Classification
- dataset:
config: default
name: MTEB CUADMinimumCommitmentLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 75.51813471502591
- type: ap
value: 68.84511159342107
- type: ap_weighted
value: 68.84511159342107
- type: f1
value: 75.48815213647933
- type: f1_weighted
value: 75.48815213647931
- type: main_score
value: 75.51813471502591
task:
type: Classification
- dataset:
config: default
name: MTEB CUADMostFavoredNationLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 73.4375
- type: ap
value: 65.80668604651162
- type: ap_weighted
value: 65.80668604651162
- type: f1
value: 72.62893081761007
- type: f1_weighted
value: 72.62893081761007
- type: main_score
value: 73.4375
task:
type: Classification
- dataset:
config: default
name: MTEB CUADNoSolicitOfCustomersLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 82.14285714285714
- type: ap
value: 73.68421052631578
- type: ap_weighted
value: 73.68421052631578
- type: f1
value: 81.55467720685114
- type: f1_weighted
value: 81.55467720685111
- type: main_score
value: 82.14285714285714
task:
type: Classification
- dataset:
config: default
name: MTEB CUADNoSolicitOfEmployeesLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 88.02816901408453
- type: ap
value: 81.23742454728371
- type: ap_weighted
value: 81.23742454728371
- type: f1
value: 87.92698174543636
- type: f1_weighted
value: 87.92698174543636
- type: main_score
value: 88.02816901408453
task:
type: Classification
- dataset:
config: default
name: MTEB CUADNonCompeteLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 53.84615384615385
- type: ap
value: 52.05651491365778
- type: ap_weighted
value: 52.05651491365778
- type: f1
value: 53.70967410723452
- type: f1_weighted
value: 53.70967410723452
- type: main_score
value: 53.84615384615385
task:
type: Classification
- dataset:
config: default
name: MTEB CUADNonDisparagementLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 82.0
- type: ap
value: 73.75757575757575
- type: ap_weighted
value: 73.75757575757575
- type: f1
value: 81.5270935960591
- type: f1_weighted
value: 81.5270935960591
- type: main_score
value: 82.0
task:
type: Classification
- dataset:
config: default
name: MTEB CUADNonTransferableLicenseLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 72.69372693726936
- type: ap
value: 68.36025144171039
- type: ap_weighted
value: 68.36025144171039
- type: f1
value: 72.20320188509251
- type: f1_weighted
value: 72.20320188509251
- type: main_score
value: 72.69372693726936
task:
type: Classification
- dataset:
config: default
name: MTEB CUADNoticePeriodToTerminateRenewalLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 81.53153153153154
- type: ap
value: 73.22254687119553
- type: ap_weighted
value: 73.22254687119553
- type: f1
value: 81.003861003861
- type: f1_weighted
value: 81.003861003861
- type: main_score
value: 81.53153153153154
task:
type: Classification
- dataset:
config: default
name: MTEB CUADPostTerminationServicesLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 59.52970297029702
- type: ap
value: 55.494262149873045
- type: ap_weighted
value: 55.494262149873045
- type: f1
value: 58.91289033889372
- type: f1_weighted
value: 58.91289033889372
- type: main_score
value: 59.52970297029702
task:
type: Classification
- dataset:
config: default
name: MTEB CUADPriceRestrictionsLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 86.95652173913044
- type: ap
value: 80.11272141706925
- type: ap_weighted
value: 80.11272141706925
- type: f1
value: 86.85714285714286
- type: f1_weighted
value: 86.85714285714286
- type: main_score
value: 86.95652173913044
task:
type: Classification
- dataset:
config: default
name: MTEB CUADRenewalTermLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 81.86528497409327
- type: ap
value: 74.56574832804549
- type: ap_weighted
value: 74.56574832804549
- type: f1
value: 81.72348484848484
- type: f1_weighted
value: 81.72348484848484
- type: main_score
value: 81.86528497409327
task:
type: Classification
- dataset:
config: default
name: MTEB CUADRevenueProfitSharingLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 78.9405684754522
- type: ap
value: 75.88346617170725
- type: ap_weighted
value: 75.88346617170725
- type: f1
value: 78.5609048595758
- type: f1_weighted
value: 78.5609048595758
- type: main_score
value: 78.9405684754522
task:
type: Classification
- dataset:
config: default
name: MTEB CUADRofrRofoRofnLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 67.53623188405797
- type: ap
value: 61.059567408520365
- type: ap_weighted
value: 61.059567408520365
- type: f1
value: 66.55819428096656
- type: f1_weighted
value: 66.55819428096656
- type: main_score
value: 67.53623188405797
task:
type: Classification
- dataset:
config: default
name: MTEB CUADSourceCodeEscrowLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 79.66101694915253
- type: ap
value: 73.06967984934086
- type: ap_weighted
value: 73.06967984934086
- type: f1
value: 79.63761863675583
- type: f1_weighted
value: 79.63761863675583
- type: main_score
value: 79.66101694915253
task:
type: Classification
- dataset:
config: default
name: MTEB CUADTerminationForConvenienceLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 82.55813953488372
- type: ap
value: 76.9289284938057
- type: ap_weighted
value: 76.9289284938057
- type: f1
value: 82.5580452030568
- type: f1_weighted
value: 82.55804520305684
- type: main_score
value: 82.55813953488372
task:
type: Classification
- dataset:
config: default
name: MTEB CUADThirdPartyBeneficiaryLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 86.76470588235293
- type: ap
value: 82.30837789661318
- type: ap_weighted
value: 82.30837789661318
- type: f1
value: 86.76184295911746
- type: f1_weighted
value: 86.76184295911744
- type: main_score
value: 86.76470588235293
task:
type: Classification
- dataset:
config: default
name: MTEB CUADUncappedLiabilityLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 78.91156462585033
- type: ap
value: 70.63036269784295
- type: ap_weighted
value: 70.63036269784295
- type: f1
value: 78.23054507237377
- type: f1_weighted
value: 78.23054507237376
- type: main_score
value: 78.91156462585033
task:
type: Classification
- dataset:
config: default
name: MTEB CUADUnlimitedAllYouCanEatLicenseLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 75.0
- type: ap
value: 67.5
- type: ap_weighted
value: 67.5
- type: f1
value: 74.60317460317461
- type: f1_weighted
value: 74.60317460317461
- type: main_score
value: 75.0
task:
type: Classification
- dataset:
config: default
name: MTEB CUADVolumeRestrictionLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 68.32298136645963
- type: ap
value: 67.47730530339226
- type: ap_weighted
value: 67.47730530339226
- type: f1
value: 65.23267138078504
- type: f1_weighted
value: 65.23267138078504
- type: main_score
value: 68.32298136645963
task:
type: Classification
- dataset:
config: default
name: MTEB CUADWarrantyDurationLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 77.18749999999999
- type: ap
value: 70.84930981595093
- type: ap_weighted
value: 70.84930981595093
- type: f1
value: 77.18549481888057
- type: f1_weighted
value: 77.18549481888057
- type: main_score
value: 77.18749999999999
task:
type: Classification
- dataset:
config: default
name: MTEB CanadaTaxCourtOutcomesLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 45.90163934426229
- type: f1
value: 41.86755057433674
- type: f1_weighted
value: 52.49140373560517
- type: main_score
value: 45.90163934426229
task:
type: Classification
- dataset:
config: default
name: MTEB ClimateFEVER (default)
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
split: test
type: mteb/climate-fever
metrics:
- type: main_score
value: 5.558
- type: map_at_1
value: 2.099
- type: map_at_10
value: 3.6790000000000003
- type: map_at_100
value: 4.021
- type: map_at_1000
value: 4.083
- type: map_at_20
value: 3.843
- type: map_at_3
value: 3.107
- type: map_at_5
value: 3.398
- type: mrr_at_1
value: 4.364820846905538
- type: mrr_at_10
value: 7.478723954293985
- type: mrr_at_100
value: 8.041420875649584
- type: mrr_at_1000
value: 8.120754871238086
- type: mrr_at_20
value: 7.760020669319687
- type: mrr_at_3
value: 6.438653637350702
- type: mrr_at_5
value: 7.028230184581975
- type: nauc_map_at_1000_diff1
value: 26.989583880363355
- type: nauc_map_at_1000_max
value: 19.651932768180743
- type: nauc_map_at_1000_std
value: 28.682949493303113
- type: nauc_map_at_100_diff1
value: 27.123176019982058
- type: nauc_map_at_100_max
value: 19.598769909181605
- type: nauc_map_at_100_std
value: 28.431702256094276
- type: nauc_map_at_10_diff1
value: 28.090105463174243
- type: nauc_map_at_10_max
value: 19.316825624764327
- type: nauc_map_at_10_std
value: 27.879940536760657
- type: nauc_map_at_1_diff1
value: 38.86635884960338
- type: nauc_map_at_1_max
value: 23.66935741341746
- type: nauc_map_at_1_std
value: 25.594810836643088
- type: nauc_map_at_20_diff1
value: 27.932097656688153
- type: nauc_map_at_20_max
value: 19.705436224378094
- type: nauc_map_at_20_std
value: 28.005161889024915
- type: nauc_map_at_3_diff1
value: 31.343508506514787
- type: nauc_map_at_3_max
value: 17.617676175693653
- type: nauc_map_at_3_std
value: 27.372138781240235
- type: nauc_map_at_5_diff1
value: 29.21950281006726
- type: nauc_map_at_5_max
value: 18.039174755804527
- type: nauc_map_at_5_std
value: 26.278075304640147
- type: nauc_mrr_at_1000_diff1
value: 21.017635057347793
- type: nauc_mrr_at_1000_max
value: 20.84007387790555
- type: nauc_mrr_at_1000_std
value: 24.684523933084744
- type: nauc_mrr_at_100_diff1
value: 21.051698171004
- type: nauc_mrr_at_100_max
value: 20.79459868740917
- type: nauc_mrr_at_100_std
value: 24.62077347403019
- type: nauc_mrr_at_10_diff1
value: 21.926692626233184
- type: nauc_mrr_at_10_max
value: 20.868215747512338
- type: nauc_mrr_at_10_std
value: 24.10229968572614
- type: nauc_mrr_at_1_diff1
value: 32.12007148649377
- type: nauc_mrr_at_1_max
value: 25.428643110489634
- type: nauc_mrr_at_1_std
value: 19.946229629460547
- type: nauc_mrr_at_20_diff1
value: 21.617935715645125
- type: nauc_mrr_at_20_max
value: 21.046484288936377
- type: nauc_mrr_at_20_std
value: 24.297367370651244
- type: nauc_mrr_at_3_diff1
value: 24.094623370861303
- type: nauc_mrr_at_3_max
value: 19.713811945549196
- type: nauc_mrr_at_3_std
value: 23.568839477173757
- type: nauc_mrr_at_5_diff1
value: 22.3010395396166
- type: nauc_mrr_at_5_max
value: 20.569180907488864
- type: nauc_mrr_at_5_std
value: 23.15568498862624
- type: nauc_ndcg_at_1000_diff1
value: 17.73440786298746
- type: nauc_ndcg_at_1000_max
value: 21.164734898511266
- type: nauc_ndcg_at_1000_std
value: 32.20409116224434
- type: nauc_ndcg_at_100_diff1
value: 19.491657641927414
- type: nauc_ndcg_at_100_max
value: 19.73425182329514
- type: nauc_ndcg_at_100_std
value: 29.633697891721162
- type: nauc_ndcg_at_10_diff1
value: 23.236666416810397
- type: nauc_ndcg_at_10_max
value: 19.859686062177957
- type: nauc_ndcg_at_10_std
value: 27.607123060751103
- type: nauc_ndcg_at_1_diff1
value: 32.12007148649377
- type: nauc_ndcg_at_1_max
value: 25.428643110489634
- type: nauc_ndcg_at_1_std
value: 19.946229629460547
- type: nauc_ndcg_at_20_diff1
value: 22.766492789770794
- type: nauc_ndcg_at_20_max
value: 20.68653243447615
- type: nauc_ndcg_at_20_std
value: 27.80598558578259
- type: nauc_ndcg_at_3_diff1
value: 26.430176145767764
- type: nauc_ndcg_at_3_max
value: 17.178786585572514
- type: nauc_ndcg_at_3_std
value: 26.551392559385945
- type: nauc_ndcg_at_5_diff1
value: 24.359838503352492
- type: nauc_ndcg_at_5_max
value: 18.139249994062958
- type: nauc_ndcg_at_5_std
value: 25.04579441208386
- type: nauc_precision_at_1000_diff1
value: 3.5941753705590855
- type: nauc_precision_at_1000_max
value: 23.295418071068074
- type: nauc_precision_at_1000_std
value: 37.823737794558035
- type: nauc_precision_at_100_diff1
value: 7.711362755764835
- type: nauc_precision_at_100_max
value: 21.000892665907962
- type: nauc_precision_at_100_std
value: 35.56596455340648
- type: nauc_precision_at_10_diff1
value: 14.603402002580449
- type: nauc_precision_at_10_max
value: 22.112935744796918
- type: nauc_precision_at_10_std
value: 30.665912790934176
- type: nauc_precision_at_1_diff1
value: 32.12007148649377
- type: nauc_precision_at_1_max
value: 25.428643110489634
- type: nauc_precision_at_1_std
value: 19.946229629460547
- type: nauc_precision_at_20_diff1
value: 14.716417574100266
- type: nauc_precision_at_20_max
value: 23.926389785704096
- type: nauc_precision_at_20_std
value: 30.69168946837732
- type: nauc_precision_at_3_diff1
value: 18.67632522519008
- type: nauc_precision_at_3_max
value: 15.461714107477059
- type: nauc_precision_at_3_std
value: 24.408621037612654
- type: nauc_precision_at_5_diff1
value: 14.433484685750017
- type: nauc_precision_at_5_max
value: 18.682282289432337
- type: nauc_precision_at_5_std
value: 24.03615092175192
- type: nauc_recall_at_1000_diff1
value: 7.5569286948470955
- type: nauc_recall_at_1000_max
value: 18.988365246129565
- type: nauc_recall_at_1000_std
value: 32.73921563811838
- type: nauc_recall_at_100_diff1
value: 12.11778715469688
- type: nauc_recall_at_100_max
value: 16.608390547005357
- type: nauc_recall_at_100_std
value: 29.88269190630321
- type: nauc_recall_at_10_diff1
value: 20.008263704255814
- type: nauc_recall_at_10_max
value: 19.07669508851797
- type: nauc_recall_at_10_std
value: 28.95827325426037
- type: nauc_recall_at_1_diff1
value: 38.86635884960338
- type: nauc_recall_at_1_max
value: 23.66935741341746
- type: nauc_recall_at_1_std
value: 25.594810836643088
- type: nauc_recall_at_20_diff1
value: 19.54693652826011
- type: nauc_recall_at_20_max
value: 20.582517703572815
- type: nauc_recall_at_20_std
value: 28.52204311008764
- type: nauc_recall_at_3_diff1
value: 25.95757457673112
- type: nauc_recall_at_3_max
value: 13.802011828871594
- type: nauc_recall_at_3_std
value: 28.160988060479163
- type: nauc_recall_at_5_diff1
value: 21.718874199874673
- type: nauc_recall_at_5_max
value: 15.812170162395233
- type: nauc_recall_at_5_std
value: 24.970427791223297
- type: ndcg_at_1
value: 4.365
- type: ndcg_at_10
value: 5.558
- type: ndcg_at_100
value: 7.637
- type: ndcg_at_1000
value: 9.700000000000001
- type: ndcg_at_20
value: 6.215
- type: ndcg_at_3
value: 4.314
- type: ndcg_at_5
value: 4.795
- type: precision_at_1
value: 4.365
- type: precision_at_10
value: 1.6740000000000002
- type: precision_at_100
value: 0.384
- type: precision_at_1000
value: 0.076
- type: precision_at_20
value: 1.111
- type: precision_at_3
value: 3.084
- type: precision_at_5
value: 2.423
- type: recall_at_1
value: 2.099
- type: recall_at_10
value: 7.371999999999999
- type: recall_at_100
value: 14.976999999999999
- type: recall_at_1000
value: 27.328000000000003
- type: recall_at_20
value: 9.288
- type: recall_at_3
value: 4.299
- type: recall_at_5
value: 5.509
task:
type: Retrieval
- dataset:
config: default
name: MTEB ContractNLIConfidentialityOfAgreementLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 64.63414634146342
- type: ap
value: 59.62772785622593
- type: ap_weighted
value: 59.62772785622593
- type: f1
value: 64.58674609084142
- type: f1_weighted
value: 64.58674609084142
- type: main_score
value: 64.63414634146342
task:
type: Classification
- dataset:
config: default
name: MTEB ContractNLIExplicitIdentificationLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 56.88073394495412
- type: ap
value: 21.457096600107935
- type: ap_weighted
value: 21.457096600107935
- type: f1
value: 50.91501389288109
- type: f1_weighted
value: 61.74750556638211
- type: main_score
value: 56.88073394495412
task:
type: Classification
- dataset:
config: default
name: MTEB ContractNLIInclusionOfVerballyConveyedInformationLegalBenchClassification
(default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 60.431654676258994
- type: ap
value: 55.25139990309542
- type: ap_weighted
value: 55.25139990309542
- type: f1
value: 60.4234611999793
- type: f1_weighted
value: 60.435751414398844
- type: main_score
value: 60.431654676258994
task:
type: Classification
- dataset:
config: default
name: MTEB ContractNLILimitedUseLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 73.07692307692307
- type: ap
value: 63.954526895988565
- type: ap_weighted
value: 63.954526895988565
- type: f1
value: 73.01454916133815
- type: f1_weighted
value: 73.10187264315704
- type: main_score
value: 73.07692307692307
task:
type: Classification
- dataset:
config: default
name: MTEB ContractNLINoLicensingLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 82.09876543209876
- type: ap
value: 75.19529587058324
- type: ap_weighted
value: 75.19529587058324
- type: f1
value: 82.08169647965215
- type: f1_weighted
value: 82.0748688986735
- type: main_score
value: 82.09876543209876
task:
type: Classification
- dataset:
config: default
name: MTEB ContractNLINoticeOnCompelledDisclosureLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 78.87323943661971
- type: ap
value: 72.12365099689045
- type: ap_weighted
value: 72.12365099689045
- type: f1
value: 78.83545310015897
- type: f1_weighted
value: 78.83545310015897
- type: main_score
value: 78.87323943661971
task:
type: Classification
- dataset:
config: default
name: MTEB ContractNLIPermissibleAcquirementOfSimilarInformationLegalBenchClassification
(default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 72.47191011235954
- type: ap
value: 64.74719101123597
- type: ap_weighted
value: 64.74719101123597
- type: f1
value: 71.08377813877931
- type: f1_weighted
value: 71.08377813877931
- type: main_score
value: 72.47191011235954
task:
type: Classification
- dataset:
config: default
name: MTEB ContractNLIPermissibleCopyLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 41.379310344827594
- type: ap
value: 19.168356997971607
- type: ap_weighted
value: 19.168356997971607
- type: f1
value: 38.75776397515528
- type: f1_weighted
value: 46.18547868922682
- type: main_score
value: 41.379310344827594
task:
type: Classification
- dataset:
config: default
name: MTEB ContractNLIPermissibleDevelopmentOfSimilarInformationLegalBenchClassification
(default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 71.3235294117647
- type: ap
value: 65.14279624893436
- type: ap_weighted
value: 65.14279624893436
- type: f1
value: 71.3219789132198
- type: f1_weighted
value: 71.3219789132198
- type: main_score
value: 71.3235294117647
task:
type: Classification
- dataset:
config: default
name: MTEB ContractNLIPermissiblePostAgreementPossessionLegalBenchClassification
(default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 39.63963963963964
- type: ap
value: 25.290389847351868
- type: ap_weighted
value: 25.290389847351868
- type: f1
value: 39.56115400243804
- type: f1_weighted
value: 40.64033151396011
- type: main_score
value: 39.63963963963964
task:
type: Classification
- dataset:
config: default
name: MTEB ContractNLIReturnOfConfidentialInformationLegalBenchClassification
(default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 71.21212121212122
- type: ap
value: 63.13978196600149
- type: ap_weighted
value: 63.13978196600149
- type: f1
value: 70.88460645460877
- type: f1_weighted
value: 70.7910308096052
- type: main_score
value: 71.21212121212122
task:
type: Classification
- dataset:
config: default
name: MTEB ContractNLISharingWithEmployeesLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 73.52941176470588
- type: ap
value: 66.24576478752499
- type: ap_weighted
value: 66.24576478752499
- type: f1
value: 71.13098607494621
- type: f1_weighted
value: 71.42467085328414
- type: main_score
value: 73.52941176470588
task:
type: Classification
- dataset:
config: default
name: MTEB ContractNLISharingWithThirdPartiesLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 68.88888888888889
- type: ap
value: 51.569719636083924
- type: ap_weighted
value: 51.569719636083924
- type: f1
value: 66.28762541806019
- type: f1_weighted
value: 68.26458565589
- type: main_score
value: 68.88888888888889
task:
type: Classification
- dataset:
config: default
name: MTEB ContractNLISurvivalOfObligationsLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 49.044585987261144
- type: ap
value: 47.085151843488305
- type: ap_weighted
value: 47.085151843488305
- type: f1
value: 48.28722002635046
- type: f1_weighted
value: 47.92846772907698
- type: main_score
value: 49.044585987261144
task:
type: Classification
- dataset:
config: default
name: MTEB CorporateLobbyingLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 70.40816326530613
- type: ap
value: 29.59183673469388
- type: ap_weighted
value: 29.59183673469388
- type: f1
value: 41.31736526946107
- type: f1_weighted
value: 58.181595991690074
- type: main_score
value: 70.40816326530613
task:
type: Classification
- dataset:
config: default
name: MTEB CyrillicTurkicLangClassification (default)
revision: e42d330f33d65b7b72dfd408883daf1661f06f18
split: test
type: tatiana-merz/cyrillic_turkic_langs
metrics:
- type: accuracy
value: 61.19140625
- type: f1
value: 59.377085898563365
- type: f1_weighted
value: 59.385881195883925
- type: main_score
value: 61.19140625
task:
type: Classification
- dataset:
config: default
name: MTEB DBPedia (default)
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
split: dev
type: mteb/dbpedia
metrics:
- type: main_score
value: 7.161
- type: map_at_1
value: 0.599
- type: map_at_10
value: 2.243
- type: map_at_100
value: 3.1189999999999998
- type: map_at_1000
value: 3.488
- type: map_at_20
value: 2.522
- type: map_at_3
value: 1.397
- type: map_at_5
value: 1.951
- type: mrr_at_1
value: 8.955223880597014
- type: mrr_at_10
value: 18.287728026533994
- type: mrr_at_100
value: 18.978113584928742
- type: mrr_at_1000
value: 19.053758841865573
- type: mrr_at_20
value: 18.61199952617863
- type: mrr_at_3
value: 14.676616915422885
- type: mrr_at_5
value: 17.06467661691542
- type: nauc_map_at_1000_diff1
value: -2.930033724497058
- type: nauc_map_at_1000_max
value: 3.5995430754716904
- type: nauc_map_at_1000_std
value: 5.61203479120595
- type: nauc_map_at_100_diff1
value: -5.4531441891668795
- type: nauc_map_at_100_max
value: -0.0055832626529105185
- type: nauc_map_at_100_std
value: 3.439773391163607
- type: nauc_map_at_10_diff1
value: -14.3319757103363
- type: nauc_map_at_10_max
value: -9.021024411612359
- type: nauc_map_at_10_std
value: 1.0275253768638628
- type: nauc_map_at_1_diff1
value: 22.607506151253776
- type: nauc_map_at_1_max
value: 10.921408762597743
- type: nauc_map_at_1_std
value: -2.0177080867009054
- type: nauc_map_at_20_diff1
value: -11.794157692538237
- type: nauc_map_at_20_max
value: -6.44484538876576
- type: nauc_map_at_20_std
value: 1.039851694368717
- type: nauc_map_at_3_diff1
value: -7.469347804676409
- type: nauc_map_at_3_max
value: -5.393936026725367
- type: nauc_map_at_3_std
value: 9.280689460783249
- type: nauc_map_at_5_diff1
value: -15.955321054747321
- type: nauc_map_at_5_max
value: -9.855092671604572
- type: nauc_map_at_5_std
value: 0.06180279408320787
- type: nauc_mrr_at_1000_diff1
value: -2.821396337906413
- type: nauc_mrr_at_1000_max
value: 5.972877383405757
- type: nauc_mrr_at_1000_std
value: -1.6896049835004336
- type: nauc_mrr_at_100_diff1
value: -2.8632536639982105
- type: nauc_mrr_at_100_max
value: 5.973020236396294
- type: nauc_mrr_at_100_std
value: -1.809958349128643
- type: nauc_mrr_at_10_diff1
value: -4.515463799529893
- type: nauc_mrr_at_10_max
value: 5.030384515417533
- type: nauc_mrr_at_10_std
value: -1.547480529694615
- type: nauc_mrr_at_1_diff1
value: 8.719512377821816
- type: nauc_mrr_at_1_max
value: 16.272382792823382
- type: nauc_mrr_at_1_std
value: -3.187491782487964
- type: nauc_mrr_at_20_diff1
value: -2.908929872190089
- type: nauc_mrr_at_20_max
value: 6.58409584409903
- type: nauc_mrr_at_20_std
value: -1.1174417761572792
- type: nauc_mrr_at_3_diff1
value: -1.6595580931793985
- type: nauc_mrr_at_3_max
value: 9.640215787928428
- type: nauc_mrr_at_3_std
value: 2.889288978742377
- type: nauc_mrr_at_5_diff1
value: -6.89298539225687
- type: nauc_mrr_at_5_max
value: 6.578043390443974
- type: nauc_mrr_at_5_std
value: -0.6581933130437475
- type: nauc_ndcg_at_1000_diff1
value: 3.75625342513744
- type: nauc_ndcg_at_1000_max
value: 6.952585708583143
- type: nauc_ndcg_at_1000_std
value: 5.400684775811628
- type: nauc_ndcg_at_100_diff1
value: -2.242186789473446
- type: nauc_ndcg_at_100_max
value: 1.7125259047701242
- type: nauc_ndcg_at_100_std
value: -0.6824733710981048
- type: nauc_ndcg_at_10_diff1
value: -11.969827974466098
- type: nauc_ndcg_at_10_max
value: -4.424965429405649
- type: nauc_ndcg_at_10_std
value: 0.03592313276976773
- type: nauc_ndcg_at_1_diff1
value: -4.197220327746547
- type: nauc_ndcg_at_1_max
value: 9.247135683163954
- type: nauc_ndcg_at_1_std
value: -6.671985136155276
- type: nauc_ndcg_at_20_diff1
value: -8.358422632396593
- type: nauc_ndcg_at_20_max
value: -1.0551974757194074
- type: nauc_ndcg_at_20_std
value: 2.0508581550409524
- type: nauc_ndcg_at_3_diff1
value: -7.53212458402589
- type: nauc_ndcg_at_3_max
value: 3.6347588818172336
- type: nauc_ndcg_at_3_std
value: 5.073680163820697
- type: nauc_ndcg_at_5_diff1
value: -17.183713921651613
- type: nauc_ndcg_at_5_max
value: -2.598662858319381
- type: nauc_ndcg_at_5_std
value: -0.4734708395726036
- type: nauc_precision_at_1000_diff1
value: 22.034829237918075
- type: nauc_precision_at_1000_max
value: 29.133045600628414
- type: nauc_precision_at_1000_std
value: 22.48207630228867
- type: nauc_precision_at_100_diff1
value: 22.17246050117164
- type: nauc_precision_at_100_max
value: 25.497860199414003
- type: nauc_precision_at_100_std
value: 14.10941839109608
- type: nauc_precision_at_10_diff1
value: -2.3976462009254527
- type: nauc_precision_at_10_max
value: 3.2185747947259737
- type: nauc_precision_at_10_std
value: 1.1160090019272848
- type: nauc_precision_at_1_diff1
value: 8.719512377821816
- type: nauc_precision_at_1_max
value: 16.272382792823382
- type: nauc_precision_at_1_std
value: -3.187491782487964
- type: nauc_precision_at_20_diff1
value: 8.125877087406765
- type: nauc_precision_at_20_max
value: 14.004634012058606
- type: nauc_precision_at_20_std
value: 6.076987698320296
- type: nauc_precision_at_3_diff1
value: -5.415944490965941
- type: nauc_precision_at_3_max
value: 6.0110244505222
- type: nauc_precision_at_3_std
value: 6.0205421596952675
- type: nauc_precision_at_5_diff1
value: -19.55829195099795
- type: nauc_precision_at_5_max
value: -2.3847548504000993
- type: nauc_precision_at_5_std
value: -4.296125770063572
- type: nauc_recall_at_1000_diff1
value: 5.793923275597914
- type: nauc_recall_at_1000_max
value: 2.365078190964481
- type: nauc_recall_at_1000_std
value: 3.5546888704254744
- type: nauc_recall_at_100_diff1
value: 1.652314810086157
- type: nauc_recall_at_100_max
value: 1.2466358966197024
- type: nauc_recall_at_100_std
value: -5.516640557428562
- type: nauc_recall_at_10_diff1
value: -18.83385802183443
- type: nauc_recall_at_10_max
value: -15.04302952000884
- type: nauc_recall_at_10_std
value: -0.9615025531726922
- type: nauc_recall_at_1_diff1
value: 22.607506151253776
- type: nauc_recall_at_1_max
value: 10.921408762597743
- type: nauc_recall_at_1_std
value: -2.0177080867009054
- type: nauc_recall_at_20_diff1
value: -8.960549697900921
- type: nauc_recall_at_20_max
value: -6.8364201397227164
- type: nauc_recall_at_20_std
value: -1.2091707122721411
- type: nauc_recall_at_3_diff1
value: -17.196135512311084
- type: nauc_recall_at_3_max
value: -10.816815002699384
- type: nauc_recall_at_3_std
value: 12.535755202753904
- type: nauc_recall_at_5_diff1
value: -23.856486271404066
- type: nauc_recall_at_5_max
value: -13.129773406696268
- type: nauc_recall_at_5_std
value: -2.885196394596191
- type: ndcg_at_1
value: 6.715999999999999
- type: ndcg_at_10
value: 7.161
- type: ndcg_at_100
value: 9.506
- type: ndcg_at_1000
value: 14.194
- type: ndcg_at_20
value: 6.969
- type: ndcg_at_3
value: 7.285
- type: ndcg_at_5
value: 7.436
- type: precision_at_1
value: 8.955
- type: precision_at_10
value: 6.866
- type: precision_at_100
value: 2.343
- type: precision_at_1000
value: 0.557
- type: precision_at_20
value: 5.0
- type: precision_at_3
value: 9.453
- type: precision_at_5
value: 8.955
- type: recall_at_1
value: 0.599
- type: recall_at_10
value: 5.234
- type: recall_at_100
value: 14.610999999999999
- type: recall_at_1000
value: 31.723000000000003
- type: recall_at_20
value: 6.797000000000001
- type: recall_at_3
value: 2.1239999999999997
- type: recall_at_5
value: 3.836
task:
type: Retrieval
- dataset:
config: default
name: MTEB DBPedia (default)
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
split: test
type: mteb/dbpedia
metrics:
- type: main_score
value: 9.612
- type: map_at_1
value: 1.5150000000000001
- type: map_at_10
value: 3.324
- type: map_at_100
value: 4.593
- type: map_at_1000
value: 4.942
- type: map_at_20
value: 3.775
- type: map_at_3
value: 2.349
- type: map_at_5
value: 2.83
- type: mrr_at_1
value: 17.75
- type: mrr_at_10
value: 25.455257936507948
- type: mrr_at_100
value: 26.384386588195795
- type: mrr_at_1000
value: 26.43428730177263
- type: mrr_at_20
value: 26.012663071147983
- type: mrr_at_3
value: 22.916666666666668
- type: mrr_at_5
value: 24.42916666666667
- type: nauc_map_at_1000_diff1
value: 22.13041079857
- type: nauc_map_at_1000_max
value: 30.847169046279717
- type: nauc_map_at_1000_std
value: 26.662372161640164
- type: nauc_map_at_100_diff1
value: 22.33437365695696
- type: nauc_map_at_100_max
value: 30.631982988659413
- type: nauc_map_at_100_std
value: 24.343041349757826
- type: nauc_map_at_10_diff1
value: 24.027517719649303
- type: nauc_map_at_10_max
value: 25.07712884251914
- type: nauc_map_at_10_std
value: 13.947979384184976
- type: nauc_map_at_1_diff1
value: 36.83267850021598
- type: nauc_map_at_1_max
value: 19.169430946850284
- type: nauc_map_at_1_std
value: 9.884774862276792
- type: nauc_map_at_20_diff1
value: 23.514668795309415
- type: nauc_map_at_20_max
value: 27.504950445908978
- type: nauc_map_at_20_std
value: 17.094975030047124
- type: nauc_map_at_3_diff1
value: 26.34278610573698
- type: nauc_map_at_3_max
value: 20.845843284715972
- type: nauc_map_at_3_std
value: 7.67049397964597
- type: nauc_map_at_5_diff1
value: 25.7750795640811
- type: nauc_map_at_5_max
value: 22.947480091712098
- type: nauc_map_at_5_std
value: 11.721230195408548
- type: nauc_mrr_at_1000_diff1
value: 22.232372488450842
- type: nauc_mrr_at_1000_max
value: 27.572890316358283
- type: nauc_mrr_at_1000_std
value: 16.214637981707586
- type: nauc_mrr_at_100_diff1
value: 22.236444609236038
- type: nauc_mrr_at_100_max
value: 27.58760243571819
- type: nauc_mrr_at_100_std
value: 16.244413870712897
- type: nauc_mrr_at_10_diff1
value: 22.225463768969977
- type: nauc_mrr_at_10_max
value: 28.085279372515014
- type: nauc_mrr_at_10_std
value: 16.63553736106648
- type: nauc_mrr_at_1_diff1
value: 29.84035077607877
- type: nauc_mrr_at_1_max
value: 29.694489641199347
- type: nauc_mrr_at_1_std
value: 13.521637546163495
- type: nauc_mrr_at_20_diff1
value: 22.04153237789325
- type: nauc_mrr_at_20_max
value: 27.694203519607907
- type: nauc_mrr_at_20_std
value: 16.41753082494305
- type: nauc_mrr_at_3_diff1
value: 23.699732601185406
- type: nauc_mrr_at_3_max
value: 28.552272889924087
- type: nauc_mrr_at_3_std
value: 15.054097838038286
- type: nauc_mrr_at_5_diff1
value: 23.127326455282443
- type: nauc_mrr_at_5_max
value: 28.769272111978832
- type: nauc_mrr_at_5_std
value: 16.113310297737975
- type: nauc_ndcg_at_1000_diff1
value: 19.30064409197478
- type: nauc_ndcg_at_1000_max
value: 28.102160223624878
- type: nauc_ndcg_at_1000_std
value: 30.203518553202162
- type: nauc_ndcg_at_100_diff1
value: 18.61374183566408
- type: nauc_ndcg_at_100_max
value: 26.626236693773404
- type: nauc_ndcg_at_100_std
value: 25.742758699186076
- type: nauc_ndcg_at_10_diff1
value: 22.519496459830016
- type: nauc_ndcg_at_10_max
value: 29.403797316052678
- type: nauc_ndcg_at_10_std
value: 20.893386965358616
- type: nauc_ndcg_at_1_diff1
value: 32.866635298438084
- type: nauc_ndcg_at_1_max
value: 26.59719751655438
- type: nauc_ndcg_at_1_std
value: 11.114394574061539
- type: nauc_ndcg_at_20_diff1
value: 21.157000991633115
- type: nauc_ndcg_at_20_max
value: 27.740565719664534
- type: nauc_ndcg_at_20_std
value: 21.639809971682443
- type: nauc_ndcg_at_3_diff1
value: 25.11861929994868
- type: nauc_ndcg_at_3_max
value: 30.05796948174576
- type: nauc_ndcg_at_3_std
value: 15.558218990994382
- type: nauc_ndcg_at_5_diff1
value: 23.56633730677446
- type: nauc_ndcg_at_5_max
value: 29.407157319632233
- type: nauc_ndcg_at_5_std
value: 18.567271816504054
- type: nauc_precision_at_1000_diff1
value: 15.34548548807785
- type: nauc_precision_at_1000_max
value: 10.572226641262324
- type: nauc_precision_at_1000_std
value: 29.1034314360236
- type: nauc_precision_at_100_diff1
value: 15.716430228733962
- type: nauc_precision_at_100_max
value: 29.095076486854232
- type: nauc_precision_at_100_std
value: 38.5066690028862
- type: nauc_precision_at_10_diff1
value: 19.68952528017596
- type: nauc_precision_at_10_max
value: 36.890169328577436
- type: nauc_precision_at_10_std
value: 30.965796095297055
- type: nauc_precision_at_1_diff1
value: 29.84035077607877
- type: nauc_precision_at_1_max
value: 29.694489641199347
- type: nauc_precision_at_1_std
value: 13.521637546163495
- type: nauc_precision_at_20_diff1
value: 18.030808015274253
- type: nauc_precision_at_20_max
value: 37.61603054850129
- type: nauc_precision_at_20_std
value: 34.160861586371816
- type: nauc_precision_at_3_diff1
value: 20.899695298609572
- type: nauc_precision_at_3_max
value: 35.736648108449906
- type: nauc_precision_at_3_std
value: 21.012939343933635
- type: nauc_precision_at_5_diff1
value: 20.038574686656855
- type: nauc_precision_at_5_max
value: 37.244225604024464
- type: nauc_precision_at_5_std
value: 27.105877764557317
- type: nauc_recall_at_1000_diff1
value: 7.621037010770166
- type: nauc_recall_at_1000_max
value: 14.556069262959875
- type: nauc_recall_at_1000_std
value: 24.912834855259458
- type: nauc_recall_at_100_diff1
value: 5.640854515267624
- type: nauc_recall_at_100_max
value: 12.319243091931583
- type: nauc_recall_at_100_std
value: 18.20593364111766
- type: nauc_recall_at_10_diff1
value: 9.625612977495116
- type: nauc_recall_at_10_max
value: 17.05920473206263
- type: nauc_recall_at_10_std
value: 10.7221437835498
- type: nauc_recall_at_1_diff1
value: 36.83267850021598
- type: nauc_recall_at_1_max
value: 19.169430946850284
- type: nauc_recall_at_1_std
value: 9.884774862276792
- type: nauc_recall_at_20_diff1
value: 8.05059067573258
- type: nauc_recall_at_20_max
value: 15.8154139120262
- type: nauc_recall_at_20_std
value: 12.679202204644218
- type: nauc_recall_at_3_diff1
value: 16.446191987706968
- type: nauc_recall_at_3_max
value: 16.891019665567892
- type: nauc_recall_at_3_std
value: 5.902427268316366
- type: nauc_recall_at_5_diff1
value: 16.441740431697145
- type: nauc_recall_at_5_max
value: 18.339945932093187
- type: nauc_recall_at_5_std
value: 11.244004704766795
- type: ndcg_at_1
value: 13.0
- type: ndcg_at_10
value: 9.612
- type: ndcg_at_100
value: 11.403
- type: ndcg_at_1000
value: 15.142
- type: ndcg_at_20
value: 9.419
- type: ndcg_at_3
value: 10.821
- type: ndcg_at_5
value: 10.462
- type: precision_at_1
value: 17.75
- type: precision_at_10
value: 9.15
- type: precision_at_100
value: 3.0
- type: precision_at_1000
value: 0.716
- type: precision_at_20
value: 6.763
- type: precision_at_3
value: 13.417000000000002
- type: precision_at_5
value: 12.35
- type: recall_at_1
value: 1.5150000000000001
- type: recall_at_10
value: 5.858
- type: recall_at_100
value: 15.643
- type: recall_at_1000
value: 28.51
- type: recall_at_20
value: 8.25
- type: recall_at_3
value: 2.995
- type: recall_at_5
value: 4.117
task:
type: Retrieval
- dataset:
config: default
name: MTEB DBpediaClassification (default)
revision: 9abd46cf7fc8b4c64290f26993c540b92aa145ac
split: test
type: fancyzhx/dbpedia_14
metrics:
- type: accuracy
value: 79.6484375
- type: f1
value: 78.34279956840108
- type: f1_weighted
value: 78.35088313144212
- type: main_score
value: 79.6484375
task:
type: Classification
- dataset:
config: default
name: MTEB DefinitionClassificationLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 84.51757666417352
- type: ap
value: 80.76707736262222
- type: ap_weighted
value: 80.76707736262222
- type: f1
value: 84.51702233000746
- type: f1_weighted
value: 84.52014045969152
- type: main_score
value: 84.51757666417352
task:
type: Classification
- dataset:
config: default
name: MTEB Diversity1LegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 76.33333333333334
- type: ap
value: 23.666666666666668
- type: ap_weighted
value: 23.666666666666668
- type: f1
value: 43.28922495274102
- type: f1_weighted
value: 66.08821676118463
- type: main_score
value: 76.33333333333334
task:
type: Classification
- dataset:
config: default
name: MTEB Diversity2LegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 74.66666666666669
- type: ap
value: 25.333333333333336
- type: ap_weighted
value: 25.333333333333336
- type: f1
value: 42.74809160305343
- type: f1_weighted
value: 63.83715012722646
- type: main_score
value: 74.66666666666669
task:
type: Classification
- dataset:
config: default
name: MTEB Diversity3LegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 58.666666666666664
- type: ap
value: 58.666666666666664
- type: ap_weighted
value: 58.666666666666664
- type: f1
value: 36.97478991596639
- type: f1_weighted
value: 43.383753501400555
- type: main_score
value: 58.666666666666664
task:
type: Classification
- dataset:
config: default
name: MTEB Diversity4LegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 53.333333333333336
- type: ap
value: 53.333333333333336
- type: ap_weighted
value: 53.333333333333336
- type: f1
value: 34.782608695652165
- type: f1_weighted
value: 37.10144927536233
- type: main_score
value: 53.333333333333336
task:
type: Classification
- dataset:
config: default
name: MTEB Diversity5LegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 57.333333333333336
- type: ap
value: 57.333333333333336
- type: ap_weighted
value: 57.333333333333336
- type: f1
value: 36.440677966101696
- type: f1_weighted
value: 41.78531073446328
- type: main_score
value: 57.333333333333336
task:
type: Classification
- dataset:
config: default
name: MTEB Diversity6LegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 55.33333333333334
- type: ap
value: 55.335312709510575
- type: ap_weighted
value: 55.335312709510575
- type: f1
value: 53.72075888745626
- type: f1_weighted
value: 54.239086387916736
- type: main_score
value: 55.33333333333334
task:
type: Classification
- dataset:
config: default
name: MTEB EmotionClassification (default)
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
split: test
type: mteb/emotion
metrics:
- type: accuracy
value: 29.500000000000004
- type: f1
value: 25.366180985174143
- type: f1_weighted
value: 31.616367697127934
- type: main_score
value: 29.500000000000004
task:
type: Classification
- dataset:
config: default
name: MTEB EmotionClassification (default)
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
split: validation
type: mteb/emotion
metrics:
- type: accuracy
value: 29.59
- type: f1
value: 25.66115067003055
- type: f1_weighted
value: 31.610928656113497
- type: main_score
value: 29.59
task:
type: Classification
- dataset:
config: default
name: MTEB FaithDial (default)
revision: 7a414e80725eac766f2602676dc8b39f80b061e4
split: test
type: McGill-NLP/FaithDial
metrics:
- type: main_score
value: 13.203999999999999
- type: map_at_1
value: 4.603
- type: map_at_10
value: 9.689
- type: map_at_100
value: 10.934000000000001
- type: map_at_1000
value: 11.06
- type: map_at_20
value: 10.282
- type: map_at_3
value: 7.46
- type: map_at_5
value: 8.601
- type: mrr_at_1
value: 3.9177277179236047
- type: mrr_at_10
value: 9.372463970896874
- type: mrr_at_100
value: 10.603150618822562
- type: mrr_at_1000
value: 10.7286670506961
- type: mrr_at_20
value: 9.954996988904508
- type: mrr_at_3
value: 7.190662748938949
- type: mrr_at_5
value: 8.24844923277832
- type: nauc_map_at_1000_diff1
value: 5.307634687499811
- type: nauc_map_at_1000_max
value: 2.3021513473591937
- type: nauc_map_at_1000_std
value: -17.73170584094867
- type: nauc_map_at_100_diff1
value: 5.297350465897308
- type: nauc_map_at_100_max
value: 2.346907480087932
- type: nauc_map_at_100_std
value: -17.732933045818474
- type: nauc_map_at_10_diff1
value: 6.045977877604437
- type: nauc_map_at_10_max
value: 1.8368181824684384
- type: nauc_map_at_10_std
value: -19.787304492799954
- type: nauc_map_at_1_diff1
value: 1.3052717698444036
- type: nauc_map_at_1_max
value: -4.135496842891768
- type: nauc_map_at_1_std
value: -19.25157996189646
- type: nauc_map_at_20_diff1
value: 5.761740069816983
- type: nauc_map_at_20_max
value: 2.2984777745182807
- type: nauc_map_at_20_std
value: -18.75124467493425
- type: nauc_map_at_3_diff1
value: 6.651930299284997
- type: nauc_map_at_3_max
value: -0.3272549806355308
- type: nauc_map_at_3_std
value: -21.098596102590484
- type: nauc_map_at_5_diff1
value: 6.967992538819455
- type: nauc_map_at_5_max
value: 0.5435787268710469
- type: nauc_map_at_5_std
value: -20.283953347398604
- type: nauc_mrr_at_1000_diff1
value: 6.740910238395446
- type: nauc_mrr_at_1000_max
value: 2.260193924794291
- type: nauc_mrr_at_1000_std
value: -16.012044193795997
- type: nauc_mrr_at_100_diff1
value: 6.722495330136685
- type: nauc_mrr_at_100_max
value: 2.303043406886841
- type: nauc_mrr_at_100_std
value: -16.020952265971687
- type: nauc_mrr_at_10_diff1
value: 7.499027953700563
- type: nauc_mrr_at_10_max
value: 1.7369780903909435
- type: nauc_mrr_at_10_std
value: -17.773058332780796
- type: nauc_mrr_at_1_diff1
value: 7.479923371906451
- type: nauc_mrr_at_1_max
value: -6.618146247607683
- type: nauc_mrr_at_1_std
value: -17.69446400002114
- type: nauc_mrr_at_20_diff1
value: 7.167945669605475
- type: nauc_mrr_at_20_max
value: 2.272029597435147
- type: nauc_mrr_at_20_std
value: -17.15567528957464
- type: nauc_mrr_at_3_diff1
value: 8.689535713040886
- type: nauc_mrr_at_3_max
value: -0.503459138449647
- type: nauc_mrr_at_3_std
value: -18.50457781869527
- type: nauc_mrr_at_5_diff1
value: 8.688882139587488
- type: nauc_mrr_at_5_max
value: 0.6822164815544203
- type: nauc_mrr_at_5_std
value: -18.323678647634363
- type: nauc_ndcg_at_1000_diff1
value: 3.895349559751926
- type: nauc_ndcg_at_1000_max
value: 4.497321779831305
- type: nauc_ndcg_at_1000_std
value: -11.297185296929218
- type: nauc_ndcg_at_100_diff1
value: 2.8704577253134365
- type: nauc_ndcg_at_100_max
value: 5.389954929442454
- type: nauc_ndcg_at_100_std
value: -10.400630555415756
- type: nauc_ndcg_at_10_diff1
value: 6.092068255087623
- type: nauc_ndcg_at_10_max
value: 4.227250873974054
- type: nauc_ndcg_at_10_std
value: -19.171869390880573
- type: nauc_ndcg_at_1_diff1
value: 1.3052717698444036
- type: nauc_ndcg_at_1_max
value: -4.135496842891768
- type: nauc_ndcg_at_1_std
value: -19.25157996189646
- type: nauc_ndcg_at_20_diff1
value: 5.40179215063042
- type: nauc_ndcg_at_20_max
value: 5.316262069583032
- type: nauc_ndcg_at_20_std
value: -16.253163982932534
- type: nauc_ndcg_at_3_diff1
value: 7.419223521385511
- type: nauc_ndcg_at_3_max
value: 0.5830467018062534
- type: nauc_ndcg_at_3_std
value: -21.398247993882336
- type: nauc_ndcg_at_5_diff1
value: 7.871015584820952
- type: nauc_ndcg_at_5_max
value: 1.911179358773651
- type: nauc_ndcg_at_5_std
value: -20.05509945356285
- type: nauc_precision_at_1000_diff1
value: -0.844755882557819
- type: nauc_precision_at_1000_max
value: 9.219453102597015
- type: nauc_precision_at_1000_std
value: 29.23861313970078
- type: nauc_precision_at_100_diff1
value: -3.7470853890619606
- type: nauc_precision_at_100_max
value: 10.533862037156355
- type: nauc_precision_at_100_std
value: 8.252086567057157
- type: nauc_precision_at_10_diff1
value: 5.901773888339623
- type: nauc_precision_at_10_max
value: 8.111412609207008
- type: nauc_precision_at_10_std
value: -18.07076007909741
- type: nauc_precision_at_1_diff1
value: 1.3052717698444036
- type: nauc_precision_at_1_max
value: -4.135496842891768
- type: nauc_precision_at_1_std
value: -19.25157996189646
- type: nauc_precision_at_20_diff1
value: 4.510193698541817
- type: nauc_precision_at_20_max
value: 10.055538647436114
- type: nauc_precision_at_20_std
value: -11.60139299594993
- type: nauc_precision_at_3_diff1
value: 8.853244226690453
- type: nauc_precision_at_3_max
value: 2.3906768293455305
- type: nauc_precision_at_3_std
value: -21.96838812494048
- type: nauc_precision_at_5_diff1
value: 9.38307261489558
- type: nauc_precision_at_5_max
value: 4.352929382840095
- type: nauc_precision_at_5_std
value: -19.535985352739786
- type: nauc_recall_at_1000_diff1
value: -0.8447558825574738
- type: nauc_recall_at_1000_max
value: 9.219453102597296
- type: nauc_recall_at_1000_std
value: 29.23861313970089
- type: nauc_recall_at_100_diff1
value: -3.747085389061965
- type: nauc_recall_at_100_max
value: 10.533862037156396
- type: nauc_recall_at_100_std
value: 8.252086567057194
- type: nauc_recall_at_10_diff1
value: 5.901773888339621
- type: nauc_recall_at_10_max
value: 8.111412609207008
- type: nauc_recall_at_10_std
value: -18.07076007909743
- type: nauc_recall_at_1_diff1
value: 1.3052717698444036
- type: nauc_recall_at_1_max
value: -4.135496842891768
- type: nauc_recall_at_1_std
value: -19.25157996189646
- type: nauc_recall_at_20_diff1
value: 4.510193698541801
- type: nauc_recall_at_20_max
value: 10.055538647436121
- type: nauc_recall_at_20_std
value: -11.601392995949936
- type: nauc_recall_at_3_diff1
value: 8.853244226690453
- type: nauc_recall_at_3_max
value: 2.390676829345526
- type: nauc_recall_at_3_std
value: -21.96838812494048
- type: nauc_recall_at_5_diff1
value: 9.383072614895593
- type: nauc_recall_at_5_max
value: 4.352929382840121
- type: nauc_recall_at_5_std
value: -19.535985352739782
- type: ndcg_at_1
value: 4.603
- type: ndcg_at_10
value: 13.203999999999999
- type: ndcg_at_100
value: 20.254
- type: ndcg_at_1000
value: 23.923
- type: ndcg_at_20
value: 15.354000000000001
- type: ndcg_at_3
value: 8.469
- type: ndcg_at_5
value: 10.536
- type: precision_at_1
value: 4.603
- type: precision_at_10
value: 2.478
- type: precision_at_100
value: 0.6
- type: precision_at_1000
value: 0.09
- type: precision_at_20
value: 1.6629999999999998
- type: precision_at_3
value: 3.803
- type: precision_at_5
value: 3.2910000000000004
- type: recall_at_1
value: 4.603
- type: recall_at_10
value: 24.779999999999998
- type: recall_at_100
value: 60.039
- type: recall_at_1000
value: 89.667
- type: recall_at_20
value: 33.251999999999995
- type: recall_at_3
value: 11.41
- type: recall_at_5
value: 16.454
task:
type: Retrieval
- dataset:
config: default
name: MTEB FeedbackQARetrieval (default)
revision: 1ee1cd0
split: test
type: lt2c/fqa
metrics:
- type: main_score
value: 19.026
- type: map_at_1
value: 19.026
- type: map_at_10
value: 26.287
- type: map_at_100
value: 27.294
- type: map_at_1000
value: 27.381
- type: map_at_20
value: 26.823999999999998
- type: map_at_3
value: 24.18
- type: map_at_5
value: 25.365
- type: mrr_at_1
value: 19.026104417670684
- type: mrr_at_10
value: 26.287052973799952
- type: mrr_at_100
value: 27.29426430169323
- type: mrr_at_1000
value: 27.380630702740504
- type: mrr_at_20
value: 26.824443943374348
- type: mrr_at_3
value: 24.1800535475234
- type: mrr_at_5
value: 25.364792503346674
- type: nauc_map_at_1000_diff1
value: 40.81899763873748
- type: nauc_map_at_1000_max
value: 11.253631614437268
- type: nauc_map_at_1000_std
value: 1.5897060898020656
- type: nauc_map_at_100_diff1
value: 40.78701343792848
- type: nauc_map_at_100_max
value: 11.27294926630661
- type: nauc_map_at_100_std
value: 1.6118772584552687
- type: nauc_map_at_10_diff1
value: 41.075611489073324
- type: nauc_map_at_10_max
value: 11.521202364241029
- type: nauc_map_at_10_std
value: 1.2931734299571058
- type: nauc_map_at_1_diff1
value: 48.17546169609799
- type: nauc_map_at_1_max
value: 13.494189949598375
- type: nauc_map_at_1_std
value: 0.07263746580580938
- type: nauc_map_at_20_diff1
value: 40.841882938863435
- type: nauc_map_at_20_max
value: 11.418649006248861
- type: nauc_map_at_20_std
value: 1.4175148500460242
- type: nauc_map_at_3_diff1
value: 42.213517992662815
- type: nauc_map_at_3_max
value: 12.808728940816176
- type: nauc_map_at_3_std
value: 1.0861600000182654
- type: nauc_map_at_5_diff1
value: 41.6309141720988
- type: nauc_map_at_5_max
value: 11.996308489388992
- type: nauc_map_at_5_std
value: 1.2641645150076395
- type: nauc_mrr_at_1000_diff1
value: 40.81899763873748
- type: nauc_mrr_at_1000_max
value: 11.253631614437268
- type: nauc_mrr_at_1000_std
value: 1.5897060898020656
- type: nauc_mrr_at_100_diff1
value: 40.78701343792848
- type: nauc_mrr_at_100_max
value: 11.27294926630661
- type: nauc_mrr_at_100_std
value: 1.6118772584552687
- type: nauc_mrr_at_10_diff1
value: 41.075611489073324
- type: nauc_mrr_at_10_max
value: 11.521202364241029
- type: nauc_mrr_at_10_std
value: 1.2931734299571058
- type: nauc_mrr_at_1_diff1
value: 48.17546169609799
- type: nauc_mrr_at_1_max
value: 13.494189949598375
- type: nauc_mrr_at_1_std
value: 0.07263746580580938
- type: nauc_mrr_at_20_diff1
value: 40.841882938863435
- type: nauc_mrr_at_20_max
value: 11.418649006248861
- type: nauc_mrr_at_20_std
value: 1.4175148500460242
- type: nauc_mrr_at_3_diff1
value: 42.213517992662815
- type: nauc_mrr_at_3_max
value: 12.808728940816176
- type: nauc_mrr_at_3_std
value: 1.0861600000182654
- type: nauc_mrr_at_5_diff1
value: 41.6309141720988
- type: nauc_mrr_at_5_max
value: 11.996308489388992
- type: nauc_mrr_at_5_std
value: 1.2641645150076395
- type: nauc_ndcg_at_1000_diff1
value: 37.7525819268389
- type: nauc_ndcg_at_1000_max
value: 8.537400436184365
- type: nauc_ndcg_at_1000_std
value: 2.9622195950411925
- type: nauc_ndcg_at_100_diff1
value: 36.787603237032975
- type: nauc_ndcg_at_100_max
value: 8.608543884213873
- type: nauc_ndcg_at_100_std
value: 3.8384319334640695
- type: nauc_ndcg_at_10_diff1
value: 38.17646042200737
- type: nauc_ndcg_at_10_max
value: 10.09464701041161
- type: nauc_ndcg_at_10_std
value: 1.82746325273071
- type: nauc_ndcg_at_1_diff1
value: 48.17546169609799
- type: nauc_ndcg_at_1_max
value: 13.494189949598375
- type: nauc_ndcg_at_1_std
value: 0.07263746580580938
- type: nauc_ndcg_at_20_diff1
value: 37.27227964097512
- type: nauc_ndcg_at_20_max
value: 9.739171990515723
- type: nauc_ndcg_at_20_std
value: 2.3086094833252115
- type: nauc_ndcg_at_3_diff1
value: 40.37281782985726
- type: nauc_ndcg_at_3_max
value: 12.624015391541455
- type: nauc_ndcg_at_3_std
value: 1.407593942089084
- type: nauc_ndcg_at_5_diff1
value: 39.35750963645447
- type: nauc_ndcg_at_5_max
value: 11.236243459280038
- type: nauc_ndcg_at_5_std
value: 1.722451235770262
- type: nauc_precision_at_1000_diff1
value: 12.726040453874319
- type: nauc_precision_at_1000_max
value: -30.085818447743566
- type: nauc_precision_at_1000_std
value: 15.649828948529738
- type: nauc_precision_at_100_diff1
value: 20.374750836627285
- type: nauc_precision_at_100_max
value: -4.315521193959148
- type: nauc_precision_at_100_std
value: 15.928528368224907
- type: nauc_precision_at_10_diff1
value: 30.394845120941987
- type: nauc_precision_at_10_max
value: 5.92964609786744
- type: nauc_precision_at_10_std
value: 3.297191207595148
- type: nauc_precision_at_1_diff1
value: 48.17546169609799
- type: nauc_precision_at_1_max
value: 13.494189949598375
- type: nauc_precision_at_1_std
value: 0.07263746580580938
- type: nauc_precision_at_20_diff1
value: 26.72269495712158
- type: nauc_precision_at_20_max
value: 4.521447508378409
- type: nauc_precision_at_20_std
value: 5.180527682236829
- type: nauc_precision_at_3_diff1
value: 35.59077406479908
- type: nauc_precision_at_3_max
value: 12.151097771811763
- type: nauc_precision_at_3_std
value: 2.24486462426719
- type: nauc_precision_at_5_diff1
value: 33.428016378866076
- type: nauc_precision_at_5_max
value: 9.15731660897423
- type: nauc_precision_at_5_std
value: 2.9353909916486294
- type: nauc_recall_at_1000_diff1
value: 12.726040453874369
- type: nauc_recall_at_1000_max
value: -30.085818447743364
- type: nauc_recall_at_1000_std
value: 15.649828948529635
- type: nauc_recall_at_100_diff1
value: 20.374750836627264
- type: nauc_recall_at_100_max
value: -4.315521193959231
- type: nauc_recall_at_100_std
value: 15.928528368224876
- type: nauc_recall_at_10_diff1
value: 30.394845120942005
- type: nauc_recall_at_10_max
value: 5.929646097867471
- type: nauc_recall_at_10_std
value: 3.297191207595157
- type: nauc_recall_at_1_diff1
value: 48.17546169609799
- type: nauc_recall_at_1_max
value: 13.494189949598375
- type: nauc_recall_at_1_std
value: 0.07263746580580938
- type: nauc_recall_at_20_diff1
value: 26.722694957121647
- type: nauc_recall_at_20_max
value: 4.521447508378419
- type: nauc_recall_at_20_std
value: 5.1805276822368524
- type: nauc_recall_at_3_diff1
value: 35.59077406479911
- type: nauc_recall_at_3_max
value: 12.151097771811772
- type: nauc_recall_at_3_std
value: 2.2448646242671857
- type: nauc_recall_at_5_diff1
value: 33.42801637886615
- type: nauc_recall_at_5_max
value: 9.15731660897428
- type: nauc_recall_at_5_std
value: 2.9353909916486782
- type: ndcg_at_1
value: 19.026
- type: ndcg_at_10
value: 30.245
- type: ndcg_at_100
value: 35.716
- type: ndcg_at_1000
value: 38.421
- type: ndcg_at_20
value: 32.242
- type: ndcg_at_3
value: 25.884
- type: ndcg_at_5
value: 28.016999999999996
- type: precision_at_1
value: 19.026
- type: precision_at_10
value: 4.287
- type: precision_at_100
value: 0.697
- type: precision_at_1000
value: 0.092
- type: precision_at_20
value: 2.543
- type: precision_at_3
value: 10.274
- type: precision_at_5
value: 7.199
- type: recall_at_1
value: 19.026
- type: recall_at_10
value: 42.870999999999995
- type: recall_at_100
value: 69.729
- type: recall_at_1000
value: 91.968
- type: recall_at_20
value: 50.853
- type: recall_at_3
value: 30.823
- type: recall_at_5
value: 35.994
task:
type: Retrieval
- dataset:
config: default
name: MTEB FinancialPhrasebankClassification (default)
revision: 1484d06fe7af23030c7c977b12556108d1f67039
split: train
type: takala/financial_phrasebank
metrics:
- type: accuracy
value: 67.97703180212015
- type: f1
value: 57.55594804795911
- type: f1_weighted
value: 68.01782223640284
- type: main_score
value: 67.97703180212015
task:
type: Classification
- dataset:
config: default
name: MTEB FrenkEnClassification (default)
revision: 52483dba0ff23291271ee9249839865e3c3e7e50
split: test
type: classla/FRENK-hate-en
metrics:
- type: accuracy
value: 55.289004780530206
- type: ap
value: 41.78925787378802
- type: ap_weighted
value: 41.78925787378802
- type: f1
value: 54.04961911556596
- type: f1_weighted
value: 54.99825667370393
- type: main_score
value: 55.289004780530206
task:
type: Classification
- dataset:
config: default
name: MTEB FunctionOfDecisionSectionLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 16.621253405994548
- type: f1
value: 15.693085823082844
- type: f1_weighted
value: 15.880480382757908
- type: main_score
value: 16.621253405994548
task:
type: Classification
- dataset:
config: default
name: MTEB GPUSpeedTask (default)
revision: '1.0'
split: test
    type: GPUSpeedTask
metrics:
- type: avg_words_per_sec
value: 7186456.843601672
- type: main_score
value: 7186456.843601672
- type: num_gpus
value: 300
- type: physical_cores
value: 3600
- type: time_mean
value: 5.055342401776995
- type: time_std
value: 1.0630782067852145
- type: total_cores
value: 7200
task:
type: Speed
- dataset:
config: default
name: MTEB GeoreviewClassification (default)
revision: 3765c0d1de6b7d264bc459433c45e5a75513839c
split: test
type: ai-forever/georeview-classification
metrics:
- type: accuracy
value: 41.3623046875
- type: f1
value: 39.78804299557415
- type: f1_weighted
value: 39.787468620260825
- type: main_score
value: 41.3623046875
task:
type: Classification
- dataset:
config: default
name: MTEB GeoreviewClusteringP2P (default)
revision: 97a313c8fc85b47f13f33e7e9a95c1ad888c7fec
split: test
type: ai-forever/georeview-clustering-p2p
metrics:
- type: main_score
value: 59.713474431847416
- type: v_measure
value: 59.713474431847416
- type: v_measure_std
value: 1.1676689250848244
task:
type: Clustering
- dataset:
config: default
name: MTEB HeadlineClassification (default)
revision: 2fe05ee6b5832cda29f2ef7aaad7b7fe6a3609eb
split: test
type: ai-forever/headline-classification
metrics:
- type: accuracy
value: 68.9013671875
- type: f1
value: 68.80041842725984
- type: f1_weighted
value: 68.80034868754102
- type: main_score
value: 68.9013671875
task:
type: Classification
- dataset:
config: default
name: MTEB ImdbClassification (default)
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
split: test
type: mteb/imdb
metrics:
- type: accuracy
value: 58.35799999999999
- type: ap
value: 55.16102855038145
- type: ap_weighted
value: 55.16102855038145
- type: f1
value: 57.51452465161078
- type: f1_weighted
value: 57.514524651610785
- type: main_score
value: 58.35799999999999
task:
type: Classification
- dataset:
config: default
name: MTEB InappropriatenessClassification (default)
revision: 601651fdc45ef243751676e62dd7a19f491c0285
split: test
type: ai-forever/inappropriateness-classification
metrics:
- type: accuracy
value: 59.11132812499999
- type: ap
value: 55.4713646939923
- type: ap_weighted
value: 55.4713646939923
- type: f1
value: 58.8968409989092
- type: f1_weighted
value: 58.8968409989092
- type: main_score
value: 59.11132812499999
task:
type: Classification
- dataset:
config: default
name: MTEB InsurancePolicyInterpretationLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 20.30075187969925
- type: f1
value: 11.25
- type: f1_weighted
value: 6.851503759398496
- type: main_score
value: 20.30075187969925
task:
type: Classification
- dataset:
config: default
name: MTEB InternationalCitizenshipQuestionsLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 60.107421875
- type: ap
value: 46.4447988877498
- type: ap_weighted
value: 46.4447988877498
- type: f1
value: 56.153528268151675
- type: f1_weighted
value: 58.210838762771935
- type: main_score
value: 60.107421875
task:
type: Classification
- dataset:
config: default
name: MTEB JCrewBlockerLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 79.62962962962962
- type: ap
value: 86.55394524959743
- type: ap_weighted
value: 86.55394524959743
- type: f1
value: 61.60310277957336
- type: f1_weighted
value: 79.14242620124973
- type: main_score
value: 79.62962962962962
task:
type: Classification
- dataset:
config: default
name: MTEB KinopoiskClassification (default)
revision: 5911f26666ac11af46cb9c6849d0dc80a378af24
split: test
type: ai-forever/kinopoisk-sentiment-classification
metrics:
- type: accuracy
value: 50.46666666666666
- type: f1
value: 49.1239356856144
- type: f1_weighted
value: 49.123935685614384
- type: main_score
value: 50.46666666666666
task:
type: Classification
- dataset:
config: default
name: MTEB LearnedHandsBenefitsLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 66.66666666666667
- type: ap
value: 61.11111111111111
- type: ap_weighted
value: 61.11111111111111
- type: f1
value: 66.66666666666667
- type: f1_weighted
value: 66.66666666666667
- type: main_score
value: 66.66666666666667
task:
type: Classification
- dataset:
config: default
name: MTEB LearnedHandsBusinessLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 70.11494252873564
- type: ap
value: 68.24378508420207
- type: ap_weighted
value: 68.24378508420207
- type: f1
value: 68.07339449541284
- type: f1_weighted
value: 68.07339449541284
- type: main_score
value: 70.11494252873564
task:
type: Classification
- dataset:
config: default
name: MTEB LearnedHandsConsumerLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 58.143322475570045
- type: ap
value: 54.72001493806926
- type: ap_weighted
value: 54.72001493806926
- type: f1
value: 58.13788145283024
- type: f1_weighted
value: 58.13788145283024
- type: main_score
value: 58.143322475570045
task:
type: Classification
- dataset:
config: default
name: MTEB LearnedHandsCourtsLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 60.41666666666667
- type: ap
value: 56.07638888888889
- type: ap_weighted
value: 56.07638888888889
- type: f1
value: 59.78835978835979
- type: f1_weighted
value: 59.78835978835979
- type: main_score
value: 60.41666666666667
task:
type: Classification
- dataset:
config: default
name: MTEB LearnedHandsCrimeLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 70.63953488372093
- type: ap
value: 65.3728949478749
- type: ap_weighted
value: 65.3728949478749
- type: f1
value: 70.45754079263989
- type: f1_weighted
value: 70.45754079263989
- type: main_score
value: 70.63953488372093
task:
type: Classification
- dataset:
config: default
name: MTEB LearnedHandsDivorceLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 62.66666666666667
- type: ap
value: 57.45794392523364
- type: ap_weighted
value: 57.45794392523364
- type: f1
value: 60.886571056062586
- type: f1_weighted
value: 60.886571056062586
- type: main_score
value: 62.66666666666667
task:
type: Classification
- dataset:
config: default
name: MTEB LearnedHandsDomesticViolenceLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 68.39080459770115
- type: ap
value: 62.26053639846742
- type: ap_weighted
value: 62.26053639846742
- type: f1
value: 68.30601092896174
- type: f1_weighted
value: 68.30601092896174
- type: main_score
value: 68.39080459770115
task:
type: Classification
- dataset:
config: default
name: MTEB LearnedHandsEducationLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 69.64285714285714
- type: ap
value: 62.222222222222214
- type: ap_weighted
value: 62.222222222222214
- type: f1
value: 66.56129258868984
- type: f1_weighted
value: 66.56129258868984
- type: main_score
value: 69.64285714285714
task:
type: Classification
- dataset:
config: default
name: MTEB LearnedHandsEmploymentLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 63.521126760563384
- type: ap
value: 58.7392648574373
- type: ap_weighted
value: 58.7392648574373
- type: f1
value: 63.4682967433563
- type: f1_weighted
value: 63.4682967433563
- type: main_score
value: 63.521126760563384
task:
type: Classification
- dataset:
config: default
name: MTEB LearnedHandsEstatesLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 70.78651685393258
- type: ap
value: 64.05564472980203
- type: ap_weighted
value: 64.05564472980203
- type: f1
value: 70.54855542828051
- type: f1_weighted
value: 70.54855542828051
- type: main_score
value: 70.78651685393258
task:
type: Classification
- dataset:
config: default
name: MTEB LearnedHandsFamilyLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 75.48828125
- type: ap
value: 68.42998798076924
- type: ap_weighted
value: 68.42998798076924
- type: f1
value: 75.3630731744256
- type: f1_weighted
value: 75.3630731744256
- type: main_score
value: 75.48828125
task:
type: Classification
- dataset:
config: default
name: MTEB LearnedHandsHealthLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 64.60176991150443
- type: ap
value: 58.96246566981995
- type: ap_weighted
value: 58.96246566981995
- type: f1
value: 63.877567329976834
- type: f1_weighted
value: 63.877567329976834
- type: main_score
value: 64.60176991150443
task:
type: Classification
- dataset:
config: default
name: MTEB LearnedHandsHousingLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 48.73046875
- type: ap
value: 49.376600701618464
- type: ap_weighted
value: 49.376600701618464
- type: f1
value: 46.38903847304493
- type: f1_weighted
value: 46.38903847304493
- type: main_score
value: 48.73046875
task:
type: Classification
- dataset:
config: default
name: MTEB LearnedHandsImmigrationLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 83.5820895522388
- type: ap
value: 77.43325625394155
- type: ap_weighted
value: 77.43325625394155
- type: f1
value: 83.5674470457079
- type: f1_weighted
value: 83.5674470457079
- type: main_score
value: 83.5820895522388
task:
type: Classification
- dataset:
config: default
name: MTEB LearnedHandsTortsLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 63.19444444444444
- type: ap
value: 58.41384863123993
- type: ap_weighted
value: 58.41384863123993
- type: f1
value: 63.17846287451151
- type: f1_weighted
value: 63.17846287451151
- type: main_score
value: 63.19444444444444
task:
type: Classification
- dataset:
config: default
name: MTEB LearnedHandsTrafficLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 69.7841726618705
- type: ap
value: 62.353917770760766
- type: ap_weighted
value: 62.353917770760766
- type: f1
value: 66.90476190476191
- type: f1_weighted
value: 66.90476190476191
- type: main_score
value: 69.7841726618705
task:
type: Classification
- dataset:
config: default
name: MTEB LegalReasoningCausalityLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 56.36363636363636
- type: ap
value: 64.75724991854024
- type: ap_weighted
value: 64.75724991854024
- type: f1
value: 52.85714285714286
- type: f1_weighted
value: 51.220779220779214
- type: main_score
value: 56.36363636363636
task:
type: Classification
- dataset:
config: default
name: MTEB MAUDLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 27.607421875
- type: f1
value: 14.84669450435061
- type: f1_weighted
value: 28.881436838109853
- type: main_score
value: 27.607421875
task:
type: Classification
- dataset:
config: zh-CN
name: MTEB MassiveIntentClassification (zh-CN)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 5.208473436449227
- type: f1
value: 3.062867346742466
- type: f1_weighted
value: 3.5821384620305414
- type: main_score
value: 5.208473436449227
task:
type: Classification
- dataset:
config: ko
name: MTEB MassiveIntentClassification (ko)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 2.5319435104236723
- type: f1
value: 0.5994050487142139
- type: f1_weighted
value: 1.0538452549913138
- type: main_score
value: 2.5319435104236723
task:
type: Classification
- dataset:
config: hi
name: MTEB MassiveIntentClassification (hi)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 2.558843308675185
- type: f1
value: 1.258311921873436
- type: f1_weighted
value: 1.4083594758704836
- type: main_score
value: 2.558843308675185
task:
type: Classification
- dataset:
config: kn
name: MTEB MassiveIntentClassification (kn)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 2.0645595158036314
- type: f1
value: 1.2240987569096886
- type: f1_weighted
value: 1.0817495786784068
- type: main_score
value: 2.0645595158036314
task:
type: Classification
- dataset:
config: ka
name: MTEB MassiveIntentClassification (ka)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 2.6395427034297243
- type: f1
value: 0.7660068670322584
- type: f1_weighted
value: 0.7729737527960681
- type: main_score
value: 2.6395427034297243
task:
type: Classification
- dataset:
config: am
name: MTEB MassiveIntentClassification (am)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 2.276395427034297
- type: f1
value: 0.7755708386766476
- type: f1_weighted
value: 0.9189927682322296
- type: main_score
value: 2.276395427034297
task:
type: Classification
- dataset:
config: my
name: MTEB MassiveIntentClassification (my)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 3.9576328177538667
- type: f1
value: 1.0681259563998668
- type: f1_weighted
value: 1.5818553042962555
- type: main_score
value: 3.9576328177538667
task:
type: Classification
- dataset:
config: el
name: MTEB MassiveIntentClassification (el)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 9.663752521856086
- type: f1
value: 4.860476294706458
- type: f1_weighted
value: 6.8590598543643395
- type: main_score
value: 9.663752521856086
task:
type: Classification
- dataset:
config: lv
name: MTEB MassiveIntentClassification (lv)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 22.32347007397445
- type: f1
value: 20.939653553666744
- type: f1_weighted
value: 20.899939110877806
- type: main_score
value: 22.32347007397445
task:
type: Classification
- dataset:
config: ml
name: MTEB MassiveIntentClassification (ml)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 2.390719569603228
- type: f1
value: 0.46817075523593493
- type: f1_weighted
value: 0.8438228708667787
- type: main_score
value: 2.390719569603228
task:
type: Classification
- dataset:
config: mn
name: MTEB MassiveIntentClassification (mn)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 28.994620040349695
- type: f1
value: 27.571069823401256
- type: f1_weighted
value: 27.263930155378503
- type: main_score
value: 28.994620040349695
task:
type: Classification
- dataset:
config: ur
name: MTEB MassiveIntentClassification (ur)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 2.4478816408876933
- type: f1
value: 1.497656725806116
- type: f1_weighted
value: 1.5398763678691354
- type: main_score
value: 2.4478816408876933
task:
type: Classification
- dataset:
config: fa
name: MTEB MassiveIntentClassification (fa)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 3.3355749831876267
- type: f1
value: 0.6816922655284716
- type: f1_weighted
value: 1.0887948480367862
- type: main_score
value: 3.3355749831876267
task:
type: Classification
- dataset:
config: ro
name: MTEB MassiveIntentClassification (ro)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 31.72494956287828
- type: f1
value: 29.577749786404826
- type: f1_weighted
value: 29.551193355600514
- type: main_score
value: 31.72494956287828
task:
type: Classification
- dataset:
config: is
name: MTEB MassiveIntentClassification (is)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 24.845326160053798
- type: f1
value: 22.11363990784136
- type: f1_weighted
value: 23.65026728412048
- type: main_score
value: 24.845326160053798
task:
type: Classification
- dataset:
config: en
name: MTEB MassiveIntentClassification (en)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 50.164761264290526
- type: f1
value: 47.85763581891828
- type: f1_weighted
value: 48.98444884040328
- type: main_score
value: 50.164761264290526
task:
type: Classification
- dataset:
config: hu
name: MTEB MassiveIntentClassification (hu)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 25.524546065904502
- type: f1
value: 23.753046097467873
- type: f1_weighted
value: 23.826312126027823
- type: main_score
value: 25.524546065904502
task:
type: Classification
- dataset:
config: fr
name: MTEB MassiveIntentClassification (fr)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 31.50638870208473
- type: f1
value: 31.370642915213388
- type: f1_weighted
value: 30.505546915456012
- type: main_score
value: 31.50638870208473
task:
type: Classification
- dataset:
config: th
name: MTEB MassiveIntentClassification (th)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 3.739071956960323
- type: f1
value: 1.411228354273586
- type: f1_weighted
value: 1.216275118762689
- type: main_score
value: 3.739071956960323
task:
type: Classification
- dataset:
config: de
name: MTEB MassiveIntentClassification (de)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 32.1049092131809
- type: f1
value: 29.794603179718106
- type: f1_weighted
value: 30.137050786689766
- type: main_score
value: 32.1049092131809
task:
type: Classification
- dataset:
config: tr
name: MTEB MassiveIntentClassification (tr)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 27.562205783456626
- type: f1
value: 25.683266426146687
- type: f1_weighted
value: 25.803636686733057
- type: main_score
value: 27.562205783456626
task:
type: Classification
- dataset:
config: pt
name: MTEB MassiveIntentClassification (pt)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 34.347679892400805
- type: f1
value: 31.465774161046767
- type: f1_weighted
value: 31.735356981669327
- type: main_score
value: 34.347679892400805
task:
type: Classification
- dataset:
config: sq
name: MTEB MassiveIntentClassification (sq)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 32.38063214525891
- type: f1
value: 29.53168994128031
- type: f1_weighted
value: 30.112896935570273
- type: main_score
value: 32.38063214525891
task:
type: Classification
- dataset:
config: zh-TW
name: MTEB MassiveIntentClassification (zh-TW)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 6.809011432414256
- type: f1
value: 5.205218706422693
- type: f1_weighted
value: 5.178287349465675
- type: main_score
value: 6.809011432414256
task:
type: Classification
- dataset:
config: hy
name: MTEB MassiveIntentClassification (hy)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 2.723604572965703
- type: f1
value: 0.6429150866665544
- type: f1_weighted
value: 0.9113227866994432
- type: main_score
value: 2.723604572965703
task:
type: Classification
- dataset:
config: da
name: MTEB MassiveIntentClassification (da)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 33.95427034297243
- type: f1
value: 32.204428726904936
- type: f1_weighted
value: 32.47064251083498
- type: main_score
value: 33.95427034297243
task:
type: Classification
- dataset:
config: af
name: MTEB MassiveIntentClassification (af)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 30.403496973772697
- type: f1
value: 27.814640020382342
- type: f1_weighted
value: 29.552471475522786
- type: main_score
value: 30.403496973772697
task:
type: Classification
- dataset:
config: ar
name: MTEB MassiveIntentClassification (ar)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 3.796234028244788
- type: f1
value: 2.4115955159178712
- type: f1_weighted
value: 2.9705530799117428
- type: main_score
value: 3.796234028244788
task:
type: Classification
- dataset:
config: jv
name: MTEB MassiveIntentClassification (jv)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 28.533960995292528
- type: f1
value: 26.21221777741412
- type: f1_weighted
value: 27.072811075990217
- type: main_score
value: 28.533960995292528
task:
type: Classification
- dataset:
config: te
name: MTEB MassiveIntentClassification (te)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 2.2125084061869535
- type: f1
value: 1.0173733514352028
- type: f1_weighted
value: 1.316987953476142
- type: main_score
value: 2.2125084061869535
task:
type: Classification
- dataset:
config: tl
name: MTEB MassiveIntentClassification (tl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 32.017484868863484
- type: f1
value: 29.32295890060929
- type: f1_weighted
value: 29.657369574195414
- type: main_score
value: 32.017484868863484
task:
type: Classification
- dataset:
config: sw
name: MTEB MassiveIntentClassification (sw)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 27.790854068594484
- type: f1
value: 26.66461334490106
- type: f1_weighted
value: 26.3309301465354
- type: main_score
value: 27.790854068594484
task:
type: Classification
- dataset:
config: ja
name: MTEB MassiveIntentClassification (ja)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 5.611970410221924
- type: f1
value: 3.949675565526302
- type: f1_weighted
value: 3.8008532811790516
- type: main_score
value: 5.611970410221924
task:
type: Classification
- dataset:
config: ms
name: MTEB MassiveIntentClassification (ms)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 28.940820443846675
- type: f1
value: 26.913943613442726
- type: f1_weighted
value: 27.58112937211184
- type: main_score
value: 28.940820443846675
task:
type: Classification
- dataset:
config: nb
name: MTEB MassiveIntentClassification (nb)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 32.29993275050437
- type: f1
value: 30.38953729738546
- type: f1_weighted
value: 30.973971090234315
- type: main_score
value: 32.29993275050437
task:
type: Classification
- dataset:
config: fi
name: MTEB MassiveIntentClassification (fi)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 31.13315400134499
- type: f1
value: 28.151659309577315
- type: f1_weighted
value: 28.919992380957805
- type: main_score
value: 31.13315400134499
task:
type: Classification
- dataset:
config: id
name: MTEB MassiveIntentClassification (id)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 33.56422326832549
- type: f1
value: 32.13999124730796
- type: f1_weighted
value: 31.821742347727334
- type: main_score
value: 33.56422326832549
task:
type: Classification
- dataset:
config: cy
name: MTEB MassiveIntentClassification (cy)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 31.68123739071957
- type: f1
value: 28.08132049625695
- type: f1_weighted
value: 30.136632177167293
- type: main_score
value: 31.68123739071957
task:
type: Classification
- dataset:
config: sl
name: MTEB MassiveIntentClassification (sl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 31.388702084734366
- type: f1
value: 30.06510634561652
- type: f1_weighted
value: 29.575793355168027
- type: main_score
value: 31.388702084734366
task:
type: Classification
- dataset:
config: es
name: MTEB MassiveIntentClassification (es)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 31.032279757901815
- type: f1
value: 30.20555955874916
- type: f1_weighted
value: 28.87618616461917
- type: main_score
value: 31.032279757901815
task:
type: Classification
- dataset:
config: bn
name: MTEB MassiveIntentClassification (bn)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 3.0766644250168125
- type: f1
value: 1.1659097449170488
- type: f1_weighted
value: 1.6261385516847686
- type: main_score
value: 3.0766644250168125
task:
type: Classification
- dataset:
config: sv
name: MTEB MassiveIntentClassification (sv)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 30.22864828513786
- type: f1
value: 29.514038012557155
- type: f1_weighted
value: 28.79006788550934
- type: main_score
value: 30.22864828513786
task:
type: Classification
- dataset:
config: ru
name: MTEB MassiveIntentClassification (ru)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 57.97915265635507
- type: f1
value: 56.5014953445001
- type: f1_weighted
value: 56.64147015986123
- type: main_score
value: 57.97915265635507
task:
type: Classification
- dataset:
config: az
name: MTEB MassiveIntentClassification (az)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 23.577673167451245
- type: f1
value: 23.44310534002699
- type: f1_weighted
value: 22.73388843513862
- type: main_score
value: 23.577673167451245
task:
type: Classification
- dataset:
config: it
name: MTEB MassiveIntentClassification (it)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 35.24209818426362
- type: f1
value: 34.17643389765681
- type: f1_weighted
value: 31.88705168526876
- type: main_score
value: 35.24209818426362
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 26.815736381977135
- type: f1
value: 23.59490629738082
- type: f1_weighted
value: 24.824019034766742
- type: main_score
value: 26.815736381977135
task:
type: Classification
- dataset:
config: vi
name: MTEB MassiveIntentClassification (vi)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 23.71889710827169
- type: f1
value: 20.9474996841838
- type: f1_weighted
value: 21.8696712485011
- type: main_score
value: 23.71889710827169
task:
type: Classification
- dataset:
config: ta
name: MTEB MassiveIntentClassification (ta)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 1.4996637525218561
- type: f1
value: 0.3621176226135693
- type: f1_weighted
value: 0.40253328041710507
- type: main_score
value: 1.4996637525218561
task:
type: Classification
- dataset:
config: he
name: MTEB MassiveIntentClassification (he)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 2.2461331540013454
- type: f1
value: 0.590566331230622
- type: f1_weighted
value: 0.6162176049666722
- type: main_score
value: 2.2461331540013454
task:
type: Classification
- dataset:
config: nl
name: MTEB MassiveIntentClassification (nl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 32.43779421654338
- type: f1
value: 29.65516413448003
- type: f1_weighted
value: 30.056107103546008
- type: main_score
value: 32.43779421654338
task:
type: Classification
- dataset:
config: km
name: MTEB MassiveIntentClassification (km)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 5.137861466039005
- type: f1
value: 1.5034651435201778
- type: f1_weighted
value: 1.8580225168667703
- type: main_score
value: 5.137861466039005
task:
type: Classification
- dataset:
config: zh-CN
name: MTEB MassiveIntentClassification (zh-CN)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 5.15002459419577
- type: f1
value: 3.2849878732080238
- type: f1_weighted
value: 3.171516129361724
- type: main_score
value: 5.15002459419577
task:
type: Classification
- dataset:
config: ko
name: MTEB MassiveIntentClassification (ko)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 2.3610427939006393
- type: f1
value: 0.6344240632132025
- type: f1_weighted
value: 0.8741011326135733
- type: main_score
value: 2.3610427939006393
task:
type: Classification
- dataset:
config: hi
name: MTEB MassiveIntentClassification (hi)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 2.4299065420560746
- type: f1
value: 1.1990062972384772
- type: f1_weighted
value: 1.2846405130538945
- type: main_score
value: 2.4299065420560746
task:
type: Classification
- dataset:
config: kn
name: MTEB MassiveIntentClassification (kn)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 2.100344318740777
- type: f1
value: 1.0691096895187684
- type: f1_weighted
value: 1.0245515267986838
- type: main_score
value: 2.100344318740777
task:
type: Classification
- dataset:
config: ka
name: MTEB MassiveIntentClassification (ka)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 2.144613871126414
- type: f1
value: 0.38751721719666626
- type: f1_weighted
value: 0.5494302003085859
- type: main_score
value: 2.144613871126414
task:
type: Classification
- dataset:
config: am
name: MTEB MassiveIntentClassification (am)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 2.1347761928184945
- type: f1
value: 0.7186972868374003
- type: f1_weighted
value: 0.8692320111678621
- type: main_score
value: 2.1347761928184945
task:
type: Classification
- dataset:
config: my
name: MTEB MassiveIntentClassification (my)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 3.9744220363994094
- type: f1
value: 1.320159702083562
- type: f1_weighted
value: 1.6615339662178419
- type: main_score
value: 3.9744220363994094
task:
type: Classification
- dataset:
config: el
name: MTEB MassiveIntentClassification (el)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 8.740777176586326
- type: f1
value: 4.625508580628892
- type: f1_weighted
value: 5.910937912610004
- type: main_score
value: 8.740777176586326
task:
type: Classification
- dataset:
config: lv
name: MTEB MassiveIntentClassification (lv)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 22.056074766355138
- type: f1
value: 20.067449871163735
- type: f1_weighted
value: 20.679581641637213
- type: main_score
value: 22.056074766355138
task:
type: Classification
- dataset:
config: ml
name: MTEB MassiveIntentClassification (ml)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 2.287260206591244
- type: f1
value: 0.5144479181790914
- type: f1_weighted
value: 0.7532382956194585
- type: main_score
value: 2.287260206591244
task:
type: Classification
- dataset:
config: mn
name: MTEB MassiveIntentClassification (mn)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 28.514510575504183
- type: f1
value: 27.670683007330656
- type: f1_weighted
value: 26.797727875405965
- type: main_score
value: 28.514510575504183
task:
type: Classification
- dataset:
config: ur
name: MTEB MassiveIntentClassification (ur)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 2.5528775209050663
- type: f1
value: 1.5528439347982526
- type: f1_weighted
value: 1.59863069765228
- type: main_score
value: 2.5528775209050663
task:
type: Classification
- dataset:
config: fa
name: MTEB MassiveIntentClassification (fa)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 3.1578947368421053
- type: f1
value: 0.612147286970534
- type: f1_weighted
value: 0.9311100758788083
- type: main_score
value: 3.1578947368421053
task:
type: Classification
- dataset:
config: ro
name: MTEB MassiveIntentClassification (ro)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 30.472208558780135
- type: f1
value: 28.570236227937524
- type: f1_weighted
value: 29.26182782217857
- type: main_score
value: 30.472208558780135
task:
type: Classification
- dataset:
config: is
name: MTEB MassiveIntentClassification (is)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 24.12690605017216
- type: f1
value: 21.730073248467978
- type: f1_weighted
value: 23.3232094260056
- type: main_score
value: 24.12690605017216
task:
type: Classification
- dataset:
config: en
name: MTEB MassiveIntentClassification (en)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 50.6837186424004
- type: f1
value: 46.24633043195857
- type: f1_weighted
value: 49.89222156091109
- type: main_score
value: 50.6837186424004
task:
type: Classification
- dataset:
config: hu
name: MTEB MassiveIntentClassification (hu)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 24.869650762420065
- type: f1
value: 22.646829281311646
- type: f1_weighted
value: 23.75607068147335
- type: main_score
value: 24.869650762420065
task:
type: Classification
- dataset:
config: fr
name: MTEB MassiveIntentClassification (fr)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 30.83620265617314
- type: f1
value: 30.12388095110573
- type: f1_weighted
value: 29.755084946082466
- type: main_score
value: 30.83620265617314
task:
type: Classification
- dataset:
config: th
name: MTEB MassiveIntentClassification (th)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 3.7924249877029017
- type: f1
value: 1.3490081402255192
- type: f1_weighted
value: 1.1964792923823864
- type: main_score
value: 3.7924249877029017
task:
type: Classification
- dataset:
config: de
name: MTEB MassiveIntentClassification (de)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 30.85095917363502
- type: f1
value: 28.76898470499743
- type: f1_weighted
value: 29.742721084026552
- type: main_score
value: 30.85095917363502
task:
type: Classification
- dataset:
config: tr
name: MTEB MassiveIntentClassification (tr)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 26.22233152975898
- type: f1
value: 24.13532374526957
- type: f1_weighted
value: 24.801681753477833
- type: main_score
value: 26.22233152975898
task:
type: Classification
- dataset:
config: pt
name: MTEB MassiveIntentClassification (pt)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 33.85145105755042
- type: f1
value: 30.993852084910046
- type: f1_weighted
value: 31.47706557692265
- type: main_score
value: 33.85145105755042
task:
type: Classification
- dataset:
config: sq
name: MTEB MassiveIntentClassification (sq)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 31.69699950811608
- type: f1
value: 28.43551777754717
- type: f1_weighted
value: 29.35991647173387
- type: main_score
value: 31.69699950811608
task:
type: Classification
- dataset:
config: zh-TW
name: MTEB MassiveIntentClassification (zh-TW)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 6.296114117068371
- type: f1
value: 4.469538815411268
- type: f1_weighted
value: 4.470912934534107
- type: main_score
value: 6.296114117068371
task:
type: Classification
- dataset:
config: hy
name: MTEB MassiveIntentClassification (hy)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 2.6660108214461387
- type: f1
value: 0.7095128645283928
- type: f1_weighted
value: 0.900359447084975
- type: main_score
value: 2.6660108214461387
task:
type: Classification
- dataset:
config: da
name: MTEB MassiveIntentClassification (da)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 32.24790949335957
- type: f1
value: 30.09602016401104
- type: f1_weighted
value: 31.27365296679004
- type: main_score
value: 32.24790949335957
task:
type: Classification
- dataset:
config: af
name: MTEB MassiveIntentClassification (af)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 29.85243482538121
- type: f1
value: 27.02898547703625
- type: f1_weighted
value: 29.19825733648402
- type: main_score
value: 29.85243482538121
task:
type: Classification
- dataset:
config: ar
name: MTEB MassiveIntentClassification (ar)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 3.413674372848008
- type: f1
value: 2.3814730307183596
- type: f1_weighted
value: 2.758592436005351
- type: main_score
value: 3.413674372848008
task:
type: Classification
- dataset:
config: jv
name: MTEB MassiveIntentClassification (jv)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 27.59960649286769
- type: f1
value: 25.169829835887036
- type: f1_weighted
value: 26.378021821617065
- type: main_score
value: 27.59960649286769
task:
type: Classification
- dataset:
config: te
name: MTEB MassiveIntentClassification (te)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 2.0363994097393014
- type: f1
value: 0.7934004289138196
- type: f1_weighted
value: 1.1834679007875544
- type: main_score
value: 2.0363994097393014
task:
type: Classification
- dataset:
config: tl
name: MTEB MassiveIntentClassification (tl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 31.43630103295622
- type: f1
value: 28.28710817943075
- type: f1_weighted
value: 29.47693147061905
- type: main_score
value: 31.43630103295622
task:
type: Classification
- dataset:
config: sw
name: MTEB MassiveIntentClassification (sw)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 27.515986227250366
- type: f1
value: 25.65654395144761
- type: f1_weighted
value: 26.414094210360055
- type: main_score
value: 27.515986227250366
task:
type: Classification
- dataset:
config: ja
name: MTEB MassiveIntentClassification (ja)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 5.986227250368913
- type: f1
value: 3.9449730568824433
- type: f1_weighted
value: 3.8102259721047833
- type: main_score
value: 5.986227250368913
task:
type: Classification
- dataset:
config: ms
name: MTEB MassiveIntentClassification (ms)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 28.155435317265127
- type: f1
value: 25.708172487585202
- type: f1_weighted
value: 27.024916707588677
- type: main_score
value: 28.155435317265127
task:
type: Classification
- dataset:
config: nb
name: MTEB MassiveIntentClassification (nb)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 31.485489424495817
- type: f1
value: 29.47639008406045
- type: f1_weighted
value: 30.377692398014027
- type: main_score
value: 31.485489424495817
task:
type: Classification
- dataset:
config: fi
name: MTEB MassiveIntentClassification (fi)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 30.403344810624695
- type: f1
value: 26.82843832763937
- type: f1_weighted
value: 28.11110907470959
- type: main_score
value: 30.403344810624695
task:
type: Classification
- dataset:
config: id
name: MTEB MassiveIntentClassification (id)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 32.70044269552386
- type: f1
value: 30.910774335551594
- type: f1_weighted
value: 31.371749140831422
- type: main_score
value: 32.70044269552386
task:
type: Classification
- dataset:
config: cy
name: MTEB MassiveIntentClassification (cy)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 29.429414658140686
- type: f1
value: 25.594886516936256
- type: f1_weighted
value: 28.392261199556877
- type: main_score
value: 29.429414658140686
task:
type: Classification
- dataset:
config: sl
name: MTEB MassiveIntentClassification (sl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 29.636005902606982
- type: f1
value: 28.287023938527234
- type: f1_weighted
value: 27.924913519954554
- type: main_score
value: 29.636005902606982
task:
type: Classification
- dataset:
config: es
name: MTEB MassiveIntentClassification (es)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 30.63453025086079
- type: f1
value: 29.5921601385162
- type: f1_weighted
value: 28.58410607526952
- type: main_score
value: 30.63453025086079
task:
type: Classification
- dataset:
config: bn
name: MTEB MassiveIntentClassification (bn)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 2.867683226758485
- type: f1
value: 1.0374630680286294
- type: f1_weighted
value: 1.3261691151267023
- type: main_score
value: 2.867683226758485
task:
type: Classification
- dataset:
config: sv
name: MTEB MassiveIntentClassification (sv)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 29.754058042302017
- type: f1
value: 27.921243093926957
- type: f1_weighted
value: 28.600526975101815
- type: main_score
value: 29.754058042302017
task:
type: Classification
- dataset:
config: ru
name: MTEB MassiveIntentClassification (ru)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 58.06197737333989
- type: f1
value: 53.92404816772661
- type: f1_weighted
value: 56.72057857737771
- type: main_score
value: 58.06197737333989
task:
type: Classification
- dataset:
config: az
name: MTEB MassiveIntentClassification (az)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 22.725036891293655
- type: f1
value: 22.05764593465915
- type: f1_weighted
value: 22.36326529771844
- type: main_score
value: 22.725036891293655
task:
type: Classification
- dataset:
config: it
name: MTEB MassiveIntentClassification (it)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 34.57943925233645
- type: f1
value: 33.54269802516337
- type: f1_weighted
value: 31.59380780190696
- type: main_score
value: 34.57943925233645
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 26.050172159370387
- type: f1
value: 23.37018289487783
- type: f1_weighted
value: 24.52891801190779
- type: main_score
value: 26.050172159370387
task:
type: Classification
- dataset:
config: vi
name: MTEB MassiveIntentClassification (vi)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 23.10378750614855
- type: f1
value: 19.634766811442688
- type: f1_weighted
value: 21.39922163237278
- type: main_score
value: 23.10378750614855
task:
type: Classification
- dataset:
config: ta
name: MTEB MassiveIntentClassification (ta)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 1.382193802262666
- type: f1
value: 0.2962201919122291
- type: f1_weighted
value: 0.36568543738308745
- type: main_score
value: 1.382193802262666
task:
type: Classification
- dataset:
config: he
name: MTEB MassiveIntentClassification (he)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 2.0560747663551404
- type: f1
value: 0.4742414282381403
- type: f1_weighted
value: 0.5861893507001308
- type: main_score
value: 2.0560747663551404
task:
type: Classification
- dataset:
config: nl
name: MTEB MassiveIntentClassification (nl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 30.5115592720118
- type: f1
value: 27.61045064110582
- type: f1_weighted
value: 28.987990654116114
- type: main_score
value: 30.5115592720118
task:
type: Classification
- dataset:
config: km
name: MTEB MassiveIntentClassification (km)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 4.377766847024103
- type: f1
value: 1.2676703377671132
- type: f1_weighted
value: 1.426174554035529
- type: main_score
value: 4.377766847024103
task:
type: Classification
- dataset:
config: zh-CN
name: MTEB MassiveScenarioClassification (zh-CN)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 10.601882985877605
- type: f1
value: 6.8689500634035365
- type: f1_weighted
value: 8.260029142337519
- type: main_score
value: 10.601882985877605
task:
type: Classification
- dataset:
config: ko
name: MTEB MassiveScenarioClassification (ko)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 5.62542030934768
- type: f1
value: 1.9399090161521315
- type: f1_weighted
value: 1.7790298099358886
- type: main_score
value: 5.62542030934768
task:
type: Classification
- dataset:
config: hi
name: MTEB MassiveScenarioClassification (hi)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 7.407531943510423
- type: f1
value: 3.622072056826428
- type: f1_weighted
value: 3.444172662951229
- type: main_score
value: 7.407531943510423
task:
type: Classification
- dataset:
config: kn
name: MTEB MassiveScenarioClassification (kn)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 7.602555480833894
- type: f1
value: 3.9001734711485803
- type: f1_weighted
value: 3.4912256692008397
- type: main_score
value: 7.602555480833894
task:
type: Classification
- dataset:
config: ka
name: MTEB MassiveScenarioClassification (ka)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 7.010759919300605
- type: f1
value: 2.1485666974093878
- type: f1_weighted
value: 2.3739456428263477
- type: main_score
value: 7.010759919300605
task:
type: Classification
- dataset:
config: am
name: MTEB MassiveScenarioClassification (am)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 7.679892400806995
- type: f1
value: 2.728187383195907
- type: f1_weighted
value: 3.0454310752856353
- type: main_score
value: 7.679892400806995
task:
type: Classification
- dataset:
config: my
name: MTEB MassiveScenarioClassification (my)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 10.729657027572292
- type: f1
value: 4.138439669406968
- type: f1_weighted
value: 4.843092536146883
- type: main_score
value: 10.729657027572292
task:
type: Classification
- dataset:
config: el
name: MTEB MassiveScenarioClassification (el)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 17.952252858103563
- type: f1
value: 12.418135741505608
- type: f1_weighted
value: 15.228054842385186
- type: main_score
value: 17.952252858103563
task:
type: Classification
- dataset:
config: lv
name: MTEB MassiveScenarioClassification (lv)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 29.29388029589779
- type: f1
value: 25.95638727776611
- type: f1_weighted
value: 27.82646328315652
- type: main_score
value: 29.29388029589779
task:
type: Classification
- dataset:
config: ml
name: MTEB MassiveScenarioClassification (ml)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 6.923335574983189
- type: f1
value: 2.2338102382542795
- type: f1_weighted
value: 2.837475945704109
- type: main_score
value: 6.923335574983189
task:
type: Classification
- dataset:
config: mn
name: MTEB MassiveScenarioClassification (mn)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 33.70208473436449
- type: f1
value: 31.451013524608147
- type: f1_weighted
value: 33.4571016718763
- type: main_score
value: 33.70208473436449
task:
type: Classification
- dataset:
config: ur
name: MTEB MassiveScenarioClassification (ur)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 8.530598520511097
- type: f1
value: 3.993356806346034
- type: f1_weighted
value: 4.275297414153249
- type: main_score
value: 8.530598520511097
task:
type: Classification
- dataset:
config: fa
name: MTEB MassiveScenarioClassification (fa)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 6.6240753194351045
- type: f1
value: 2.559179690443991
- type: f1_weighted
value: 2.8775036329690353
- type: main_score
value: 6.6240753194351045
task:
type: Classification
- dataset:
config: ro
name: MTEB MassiveScenarioClassification (ro)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 40.01681237390719
- type: f1
value: 36.15548220887307
- type: f1_weighted
value: 38.91143847106075
- type: main_score
value: 40.01681237390719
task:
type: Classification
- dataset:
config: is
name: MTEB MassiveScenarioClassification (is)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 33.10356422326833
- type: f1
value: 29.87073203020746
- type: f1_weighted
value: 32.736926298821786
- type: main_score
value: 33.10356422326833
task:
type: Classification
- dataset:
config: en
name: MTEB MassiveScenarioClassification (en)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 61.291190316072644
- type: f1
value: 58.09487277036398
- type: f1_weighted
value: 60.52223749579593
- type: main_score
value: 61.291190316072644
task:
type: Classification
- dataset:
config: hu
name: MTEB MassiveScenarioClassification (hu)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 36.40551445864156
- type: f1
value: 32.12815170334265
- type: f1_weighted
value: 35.421611675898745
- type: main_score
value: 36.40551445864156
task:
type: Classification
- dataset:
config: fr
name: MTEB MassiveScenarioClassification (fr)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 42.90181573638198
- type: f1
value: 39.00450485042174
- type: f1_weighted
value: 41.74577968212385
- type: main_score
value: 42.90181573638198
task:
type: Classification
- dataset:
config: th
name: MTEB MassiveScenarioClassification (th)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 8.261600537995966
- type: f1
value: 3.8946817615361597
- type: f1_weighted
value: 3.7437491646031926
- type: main_score
value: 8.261600537995966
task:
type: Classification
- dataset:
config: de
name: MTEB MassiveScenarioClassification (de)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 42.07128446536651
- type: f1
value: 38.28996078984755
- type: f1_weighted
value: 41.04738811504033
- type: main_score
value: 42.07128446536651
task:
type: Classification
- dataset:
config: tr
name: MTEB MassiveScenarioClassification (tr)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 34.845326160053794
- type: f1
value: 32.52170618407094
- type: f1_weighted
value: 33.35658510579412
- type: main_score
value: 34.845326160053794
task:
type: Classification
- dataset:
config: pt
name: MTEB MassiveScenarioClassification (pt)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 40.78681909885676
- type: f1
value: 37.33575502776686
- type: f1_weighted
value: 38.66002021299529
- type: main_score
value: 40.78681909885676
task:
type: Classification
- dataset:
config: sq
name: MTEB MassiveScenarioClassification (sq)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 42.65635507733692
- type: f1
value: 38.53947437411434
- type: f1_weighted
value: 41.52520693995739
- type: main_score
value: 42.65635507733692
task:
type: Classification
- dataset:
config: zh-TW
name: MTEB MassiveScenarioClassification (zh-TW)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 11.926698049764628
- type: f1
value: 8.724194514820493
- type: f1_weighted
value: 10.266244979280504
- type: main_score
value: 11.926698049764628
task:
type: Classification
- dataset:
config: hy
name: MTEB MassiveScenarioClassification (hy)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 8.779421654337593
- type: f1
value: 3.47659510611439
- type: f1_weighted
value: 4.092370736159162
- type: main_score
value: 8.779421654337593
task:
type: Classification
- dataset:
config: da
name: MTEB MassiveScenarioClassification (da)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 43.6852723604573
- type: f1
value: 39.338012150585094
- type: f1_weighted
value: 43.3756140521009
- type: main_score
value: 43.6852723604573
task:
type: Classification
- dataset:
config: af
name: MTEB MassiveScenarioClassification (af)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 40.83725622057835
- type: f1
value: 36.67993326074695
- type: f1_weighted
value: 40.73536387442413
- type: main_score
value: 40.83725622057835
task:
type: Classification
- dataset:
config: ar
name: MTEB MassiveScenarioClassification (ar)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 11.859448554135843
- type: f1
value: 6.502577103628851
- type: f1_weighted
value: 9.922384035467028
- type: main_score
value: 11.859448554135843
task:
type: Classification
- dataset:
config: jv
name: MTEB MassiveScenarioClassification (jv)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 37.22932078009414
- type: f1
value: 34.37198836784653
- type: f1_weighted
value: 36.41682430619207
- type: main_score
value: 37.22932078009414
task:
type: Classification
- dataset:
config: te
name: MTEB MassiveScenarioClassification (te)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 6.909885675857431
- type: f1
value: 2.659712889039866
- type: f1_weighted
value: 3.315252295282912
- type: main_score
value: 6.909885675857431
task:
type: Classification
- dataset:
config: tl
name: MTEB MassiveScenarioClassification (tl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 38.157363819771355
- type: f1
value: 33.871383306341926
- type: f1_weighted
value: 37.16844466757229
- type: main_score
value: 38.157363819771355
task:
type: Classification
- dataset:
config: sw
name: MTEB MassiveScenarioClassification (sw)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 35.65904505716207
- type: f1
value: 32.95848641686319
- type: f1_weighted
value: 33.46347965861419
- type: main_score
value: 35.65904505716207
task:
type: Classification
- dataset:
config: ja
name: MTEB MassiveScenarioClassification (ja)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 10.601882985877605
- type: f1
value: 8.05499004226519
- type: f1_weighted
value: 8.12291817923475
- type: main_score
value: 10.601882985877605
task:
type: Classification
- dataset:
config: ms
name: MTEB MassiveScenarioClassification (ms)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 38.97108271687962
- type: f1
value: 34.19920488698337
- type: f1_weighted
value: 37.406365439450006
- type: main_score
value: 38.97108271687962
task:
type: Classification
- dataset:
config: nb
name: MTEB MassiveScenarioClassification (nb)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 39.04505716207128
- type: f1
value: 35.380977049887605
- type: f1_weighted
value: 38.79082603370826
- type: main_score
value: 39.04505716207128
task:
type: Classification
- dataset:
config: fi
name: MTEB MassiveScenarioClassification (fi)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 35.18829858776059
- type: f1
value: 30.972699263943966
- type: f1_weighted
value: 34.66929745941575
- type: main_score
value: 35.18829858776059
task:
type: Classification
- dataset:
config: id
name: MTEB MassiveScenarioClassification (id)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 39.53934095494284
- type: f1
value: 37.19939485401421
- type: f1_weighted
value: 38.163540271879384
- type: main_score
value: 39.53934095494284
task:
type: Classification
- dataset:
config: cy
name: MTEB MassiveScenarioClassification (cy)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 39.85205110961668
- type: f1
value: 34.567211938088086
- type: f1_weighted
value: 38.93137139872493
- type: main_score
value: 39.85205110961668
task:
type: Classification
- dataset:
config: sl
name: MTEB MassiveScenarioClassification (sl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 35.978480161398785
- type: f1
value: 33.70493150778863
- type: f1_weighted
value: 34.89613180942136
- type: main_score
value: 35.978480161398785
task:
type: Classification
- dataset:
config: es
name: MTEB MassiveScenarioClassification (es)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 37.12508406186954
- type: f1
value: 34.14887874344704
- type: f1_weighted
value: 35.491336292250615
- type: main_score
value: 37.12508406186954
task:
type: Classification
- dataset:
config: bn
name: MTEB MassiveScenarioClassification (bn)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 8.846671149966376
- type: f1
value: 3.772079613264656
- type: f1_weighted
value: 4.569880079881123
- type: main_score
value: 8.846671149966376
task:
type: Classification
- dataset:
config: sv
name: MTEB MassiveScenarioClassification (sv)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 36.11970410221924
- type: f1
value: 33.64741825888341
- type: f1_weighted
value: 36.04738800166304
- type: main_score
value: 36.11970410221924
task:
type: Classification
- dataset:
config: ru
name: MTEB MassiveScenarioClassification (ru)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 62.89509078681911
- type: f1
value: 62.296937620668366
- type: f1_weighted
value: 61.50844245234364
- type: main_score
value: 62.89509078681911
task:
type: Classification
- dataset:
config: az
name: MTEB MassiveScenarioClassification (az)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 30.31607262945528
- type: f1
value: 27.373913596444382
- type: f1_weighted
value: 29.154743431705356
- type: main_score
value: 30.31607262945528
task:
type: Classification
- dataset:
config: it
name: MTEB MassiveScenarioClassification (it)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 42.68997982515131
- type: f1
value: 39.34921574451304
- type: f1_weighted
value: 41.39971354124732
- type: main_score
value: 42.68997982515131
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 31.62071284465367
- type: f1
value: 27.53427875798914
- type: f1_weighted
value: 30.442690748521006
- type: main_score
value: 31.62071284465367
task:
type: Classification
- dataset:
config: vi
name: MTEB MassiveScenarioClassification (vi)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 31.889710827168795
- type: f1
value: 29.1527074423781
- type: f1_weighted
value: 29.84128781391531
- type: main_score
value: 31.889710827168795
task:
type: Classification
- dataset:
config: ta
name: MTEB MassiveScenarioClassification (ta)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 7.007397444519166
- type: f1
value: 1.763256752893296
- type: f1_weighted
value: 2.3996756522652913
- type: main_score
value: 7.007397444519166
task:
type: Classification
- dataset:
config: he
name: MTEB MassiveScenarioClassification (he)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 7.612642905178212
- type: f1
value: 2.0115132382174585
- type: f1_weighted
value: 2.8178938596974503
- type: main_score
value: 7.612642905178212
task:
type: Classification
- dataset:
config: nl
name: MTEB MassiveScenarioClassification (nl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 40.93813046402152
- type: f1
value: 35.475977992563635
- type: f1_weighted
value: 40.249098836834044
- type: main_score
value: 40.93813046402152
task:
type: Classification
- dataset:
config: km
name: MTEB MassiveScenarioClassification (km)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 8.510423671822462
- type: f1
value: 2.77822187113745
- type: f1_weighted
value: 3.488782507211019
- type: main_score
value: 8.510423671822462
task:
type: Classification
- dataset:
config: zh-CN
name: MTEB MassiveScenarioClassification (zh-CN)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 10.560747663551401
- type: f1
value: 7.321692095226571
- type: f1_weighted
value: 8.136926309421098
- type: main_score
value: 10.560747663551401
task:
type: Classification
- dataset:
config: ko
name: MTEB MassiveScenarioClassification (ko)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 5.622233152975899
- type: f1
value: 1.7454943918873769
- type: f1_weighted
value: 1.5544580080510706
- type: main_score
value: 5.622233152975899
task:
type: Classification
- dataset:
config: hi
name: MTEB MassiveScenarioClassification (hi)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 7.50614854894245
- type: f1
value: 3.671558894965337
- type: f1_weighted
value: 3.6075123924941224
- type: main_score
value: 7.50614854894245
task:
type: Classification
- dataset:
config: kn
name: MTEB MassiveScenarioClassification (kn)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 8.047220855878013
- type: f1
value: 4.199596683728984
- type: f1_weighted
value: 3.705979981207572
- type: main_score
value: 8.047220855878013
task:
type: Classification
- dataset:
config: ka
name: MTEB MassiveScenarioClassification (ka)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 6.591244466305953
- type: f1
value: 1.9804826267181144
- type: f1_weighted
value: 2.1652032753558714
- type: main_score
value: 6.591244466305953
task:
type: Classification
- dataset:
config: am
name: MTEB MassiveScenarioClassification (am)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 7.511067388096411
- type: f1
value: 2.641163180255864
- type: f1_weighted
value: 3.03599461945174
- type: main_score
value: 7.511067388096411
task:
type: Classification
- dataset:
config: my
name: MTEB MassiveScenarioClassification (my)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 11.234628627643877
- type: f1
value: 4.53829675095688
- type: f1_weighted
value: 5.119828126415879
- type: main_score
value: 11.234628627643877
task:
type: Classification
- dataset:
config: el
name: MTEB MassiveScenarioClassification (el)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 16.438760452533202
- type: f1
value: 12.026293516540374
- type: f1_weighted
value: 13.40697491103347
- type: main_score
value: 16.438760452533202
task:
type: Classification
- dataset:
config: lv
name: MTEB MassiveScenarioClassification (lv)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 28.470241023118547
- type: f1
value: 26.06308403577423
- type: f1_weighted
value: 26.913188635640108
- type: main_score
value: 28.470241023118547
task:
type: Classification
- dataset:
config: ml
name: MTEB MassiveScenarioClassification (ml)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 7.34874569601574
- type: f1
value: 2.163368202700301
- type: f1_weighted
value: 2.9794749471502735
- type: main_score
value: 7.34874569601574
task:
type: Classification
- dataset:
config: mn
name: MTEB MassiveScenarioClassification (mn)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 33.482538121003444
- type: f1
value: 31.74224548475336
- type: f1_weighted
value: 32.974792871093996
- type: main_score
value: 33.482538121003444
task:
type: Classification
- dataset:
config: ur
name: MTEB MassiveScenarioClassification (ur)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 8.735858337432365
- type: f1
value: 4.387957216974412
- type: f1_weighted
value: 4.487011850573568
- type: main_score
value: 8.735858337432365
task:
type: Classification
- dataset:
config: fa
name: MTEB MassiveScenarioClassification (fa)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 6.8027545499262185
- type: f1
value: 2.724940339247371
- type: f1_weighted
value: 2.9191909608862248
- type: main_score
value: 6.8027545499262185
task:
type: Classification
- dataset:
config: ro
name: MTEB MassiveScenarioClassification (ro)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 39.77865223807182
- type: f1
value: 36.713842977439086
- type: f1_weighted
value: 38.411147363742614
- type: main_score
value: 39.77865223807182
task:
type: Classification
- dataset:
config: is
name: MTEB MassiveScenarioClassification (is)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 32.611903590752576
- type: f1
value: 30.478777350564933
- type: f1_weighted
value: 32.33376716992967
- type: main_score
value: 32.611903590752576
task:
type: Classification
- dataset:
config: en
name: MTEB MassiveScenarioClassification (en)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 60.81652729955731
- type: f1
value: 57.85686645797947
- type: f1_weighted
value: 59.96336225413508
- type: main_score
value: 60.81652729955731
task:
type: Classification
- dataset:
config: hu
name: MTEB MassiveScenarioClassification (hu)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 35.041810132808656
- type: f1
value: 32.32895536298411
- type: f1_weighted
value: 34.08983039599136
- type: main_score
value: 35.041810132808656
task:
type: Classification
- dataset:
config: fr
name: MTEB MassiveScenarioClassification (fr)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 42.4151500245942
- type: f1
value: 39.716877977971514
- type: f1_weighted
value: 40.98904556640093
- type: main_score
value: 42.4151500245942
task:
type: Classification
- dataset:
config: th
name: MTEB MassiveScenarioClassification (th)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 8.253812100344318
- type: f1
value: 4.2941598559113645
- type: f1_weighted
value: 3.7137986151126743
- type: main_score
value: 8.253812100344318
task:
type: Classification
- dataset:
config: de
name: MTEB MassiveScenarioClassification (de)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 40.65912444663059
- type: f1
value: 37.90162745459205
- type: f1_weighted
value: 39.942707376839756
- type: main_score
value: 40.65912444663059
task:
type: Classification
- dataset:
config: tr
name: MTEB MassiveScenarioClassification (tr)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 33.85145105755042
- type: f1
value: 32.41363211826809
- type: f1_weighted
value: 32.696811929693745
- type: main_score
value: 33.85145105755042
task:
type: Classification
- dataset:
config: pt
name: MTEB MassiveScenarioClassification (pt)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 40.22626660108214
- type: f1
value: 37.84448697275546
- type: f1_weighted
value: 37.82059370217246
- type: main_score
value: 40.22626660108214
task:
type: Classification
- dataset:
config: sq
name: MTEB MassiveScenarioClassification (sq)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 42.06591244466306
- type: f1
value: 38.76214747335659
- type: f1_weighted
value: 40.65484003509404
- type: main_score
value: 42.06591244466306
task:
type: Classification
- dataset:
config: zh-TW
name: MTEB MassiveScenarioClassification (zh-TW)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 11.682242990654206
- type: f1
value: 8.850699907144218
- type: f1_weighted
value: 9.655517346069553
- type: main_score
value: 11.682242990654206
task:
type: Classification
- dataset:
config: hy
name: MTEB MassiveScenarioClassification (hy)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 8.52926709296606
- type: f1
value: 3.4189589714301167
- type: f1_weighted
value: 3.894511154092698
- type: main_score
value: 8.52926709296606
task:
type: Classification
- dataset:
config: da
name: MTEB MassiveScenarioClassification (da)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 41.14117068371864
- type: f1
value: 38.08063702754415
- type: f1_weighted
value: 40.65305294882936
- type: main_score
value: 41.14117068371864
task:
type: Classification
- dataset:
config: af
name: MTEB MassiveScenarioClassification (af)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 39.3654697491392
- type: f1
value: 36.43369907401146
- type: f1_weighted
value: 39.09920883835431
- type: main_score
value: 39.3654697491392
task:
type: Classification
- dataset:
config: ar
name: MTEB MassiveScenarioClassification (ar)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 11.362518445646828
- type: f1
value: 6.2728348209099565
- type: f1_weighted
value: 8.903159425462325
- type: main_score
value: 11.362518445646828
task:
type: Classification
- dataset:
config: jv
name: MTEB MassiveScenarioClassification (jv)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 36.246925725528776
- type: f1
value: 34.242775177193415
- type: f1_weighted
value: 34.90531238831363
- type: main_score
value: 36.246925725528776
task:
type: Classification
- dataset:
config: te
name: MTEB MassiveScenarioClassification (te)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 6.861780619773734
- type: f1
value: 2.7017710457799873
- type: f1_weighted
value: 3.1681349264113137
- type: main_score
value: 6.861780619773734
task:
type: Classification
- dataset:
config: tl
name: MTEB MassiveScenarioClassification (tl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 38.17019183472701
- type: f1
value: 34.777811838185485
- type: f1_weighted
value: 36.90042555420213
- type: main_score
value: 38.17019183472701
task:
type: Classification
- dataset:
config: sw
name: MTEB MassiveScenarioClassification (sw)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 35.32710280373832
- type: f1
value: 33.32826385073952
- type: f1_weighted
value: 33.388725291289916
- type: main_score
value: 35.32710280373832
task:
type: Classification
- dataset:
config: ja
name: MTEB MassiveScenarioClassification (ja)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 11.20511559272012
- type: f1
value: 8.976181412932425
- type: f1_weighted
value: 8.576498601594645
- type: main_score
value: 11.20511559272012
task:
type: Classification
- dataset:
config: ms
name: MTEB MassiveScenarioClassification (ms)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 38.85391047712739
- type: f1
value: 34.90571468739814
- type: f1_weighted
value: 36.82763280572209
- type: main_score
value: 38.85391047712739
task:
type: Classification
- dataset:
config: nb
name: MTEB MassiveScenarioClassification (nb)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 38.052139695031975
- type: f1
value: 35.272001887507564
- type: f1_weighted
value: 37.42041278303434
- type: main_score
value: 38.052139695031975
task:
type: Classification
- dataset:
config: fi
name: MTEB MassiveScenarioClassification (fi)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 34.500737825873095
- type: f1
value: 30.68780970737908
- type: f1_weighted
value: 33.716051134823
- type: main_score
value: 34.500737825873095
task:
type: Classification
- dataset:
config: id
name: MTEB MassiveScenarioClassification (id)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 39.596655189375305
- type: f1
value: 37.72092200675893
- type: f1_weighted
value: 37.89234511492137
- type: main_score
value: 39.596655189375305
task:
type: Classification
- dataset:
config: cy
name: MTEB MassiveScenarioClassification (cy)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 38.93261190359076
- type: f1
value: 34.67593293977394
- type: f1_weighted
value: 37.58144266593478
- type: main_score
value: 38.93261190359076
task:
type: Classification
- dataset:
config: sl
name: MTEB MassiveScenarioClassification (sl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 35.336940482046245
- type: f1
value: 34.06391073492543
- type: f1_weighted
value: 34.19964460077873
- type: main_score
value: 35.336940482046245
task:
type: Classification
- dataset:
config: es
name: MTEB MassiveScenarioClassification (es)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 36.28135759960649
- type: f1
value: 33.98213113943637
- type: f1_weighted
value: 34.432683108706726
- type: main_score
value: 36.28135759960649
task:
type: Classification
- dataset:
config: bn
name: MTEB MassiveScenarioClassification (bn)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 8.789965568125922
- type: f1
value: 3.615951273986677
- type: f1_weighted
value: 4.543124755655086
- type: main_score
value: 8.789965568125922
task:
type: Classification
- dataset:
config: sv
name: MTEB MassiveScenarioClassification (sv)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 35.78947368421053
- type: f1
value: 33.641144471139874
- type: f1_weighted
value: 35.35509200878473
- type: main_score
value: 35.78947368421053
task:
type: Classification
- dataset:
config: ru
name: MTEB MassiveScenarioClassification (ru)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 64.14658140678799
- type: f1
value: 63.45318114952019
- type: f1_weighted
value: 62.837233214870004
- type: main_score
value: 64.14658140678799
task:
type: Classification
- dataset:
config: az
name: MTEB MassiveScenarioClassification (az)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 29.616330545991143
- type: f1
value: 27.89304924236733
- type: f1_weighted
value: 28.557344732597763
- type: main_score
value: 29.616330545991143
task:
type: Classification
- dataset:
config: it
name: MTEB MassiveScenarioClassification (it)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 41.1952779144122
- type: f1
value: 38.70295863724121
- type: f1_weighted
value: 39.8087264213271
- type: main_score
value: 41.1952779144122
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 30.15248401377275
- type: f1
value: 27.24749237955316
- type: f1_weighted
value: 29.24459561389263
- type: main_score
value: 30.15248401377275
task:
type: Classification
- dataset:
config: vi
name: MTEB MassiveScenarioClassification (vi)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 31.942941465814062
- type: f1
value: 29.238187005403976
- type: f1_weighted
value: 29.360530025850295
- type: main_score
value: 31.942941465814062
task:
type: Classification
- dataset:
config: ta
name: MTEB MassiveScenarioClassification (ta)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 7.211018199704869
- type: f1
value: 1.858123064629565
- type: f1_weighted
value: 2.531232017204237
- type: main_score
value: 7.211018199704869
task:
type: Classification
- dataset:
config: he
name: MTEB MassiveScenarioClassification (he)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 7.948844072798819
- type: f1
value: 2.1010859887190896
- type: f1_weighted
value: 3.0480176454133283
- type: main_score
value: 7.948844072798819
task:
type: Classification
- dataset:
config: nl
name: MTEB MassiveScenarioClassification (nl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 38.92277422528283
- type: f1
value: 35.488036321576146
- type: f1_weighted
value: 38.18536556200914
- type: main_score
value: 38.92277422528283
task:
type: Classification
- dataset:
config: km
name: MTEB MassiveScenarioClassification (km)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 8.150516478111165
- type: f1
value: 2.72691932389948
- type: f1_weighted
value: 3.3948665965609117
- type: main_score
value: 8.150516478111165
task:
type: Classification
- dataset:
config: default
name: MTEB MedrxivClusteringP2P (default)
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
split: test
type: mteb/medrxiv-clustering-p2p
metrics:
- type: main_score
value: 20.786832589263845
- type: v_measure
value: 20.786832589263845
- type: v_measure_std
value: 1.6048001943974946
task:
type: Clustering
- dataset:
config: default
name: MTEB MedrxivClusteringS2S (default)
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
split: test
type: mteb/medrxiv-clustering-s2s
metrics:
- type: main_score
value: 18.181247067178756
- type: v_measure
value: 18.181247067178756
- type: v_measure_std
value: 1.5798786706707373
task:
type: Clustering
- dataset:
config: default
name: MTEB NYSJudicialEthicsLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 45.20547945205479
- type: ap
value: 50.160551683623055
- type: ap_weighted
value: 50.160551683623055
- type: f1
value: 44.53941120607787
- type: f1_weighted
value: 44.28963561383653
- type: main_score
value: 45.20547945205479
task:
type: Classification
- dataset:
config: default
name: MTEB NewsClassification (default)
revision: eb185aade064a813bc0b7f42de02595523103ca4
split: test
type: fancyzhx/ag_news
metrics:
- type: accuracy
value: 73.78552631578948
- type: f1
value: 73.47724204580956
- type: f1_weighted
value: 73.47724204580956
- type: main_score
value: 73.78552631578948
task:
type: Classification
- dataset:
config: default
name: MTEB OPP115DataRetentionLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 69.31818181818183
- type: ap
value: 64.09705159705157
- type: ap_weighted
value: 64.09705159705157
- type: f1
value: 69.12280701754385
- type: f1_weighted
value: 69.12280701754386
- type: main_score
value: 69.31818181818183
task:
type: Classification
- dataset:
config: default
name: MTEB OPP115DataSecurityLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 63.868065967016484
- type: ap
value: 62.05622742346708
- type: ap_weighted
value: 62.05622742346708
- type: f1
value: 60.25914242202488
- type: f1_weighted
value: 60.22323273501004
- type: main_score
value: 63.868065967016484
task:
type: Classification
- dataset:
config: default
name: MTEB OPP115DoNotTrackLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 88.18181818181819
- type: ap
value: 85.12727272727273
- type: ap_weighted
value: 85.12727272727273
- type: f1
value: 88.15734989648034
- type: f1_weighted
value: 88.15734989648034
- type: main_score
value: 88.18181818181819
task:
type: Classification
- dataset:
config: default
name: MTEB OPP115FirstPartyCollectionUseLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 69.55896452540749
- type: ap
value: 64.53342029559877
- type: ap_weighted
value: 64.53342029559877
- type: f1
value: 69.32286869541191
- type: f1_weighted
value: 69.31770813082186
- type: main_score
value: 69.55896452540749
task:
type: Classification
- dataset:
config: default
name: MTEB OPP115InternationalAndSpecificAudiencesLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 77.75510204081633
- type: ap
value: 75.20843296586462
- type: ap_weighted
value: 75.20843296586462
- type: f1
value: 77.09799280479909
- type: f1_weighted
value: 77.11382676229348
- type: main_score
value: 77.75510204081633
task:
type: Classification
- dataset:
config: default
name: MTEB OPP115PolicyChangeLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 89.0951276102088
- type: ap
value: 87.15879085780726
- type: ap_weighted
value: 87.15879085780726
- type: f1
value: 89.04203698995461
- type: f1_weighted
value: 89.04380667729642
- type: main_score
value: 89.0951276102088
task:
type: Classification
- dataset:
config: default
name: MTEB OPP115ThirdPartySharingCollectionLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 64.27672955974842
- type: ap
value: 62.893075413619535
- type: ap_weighted
value: 62.893075413619535
- type: f1
value: 60.459952085405675
- type: f1_weighted
value: 60.4135944642598
- type: main_score
value: 64.27672955974842
task:
type: Classification
- dataset:
config: default
name: MTEB OPP115UserAccessEditAndDeletionLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 67.09956709956711
- type: ap
value: 62.92853137890984
- type: ap_weighted
value: 62.92853137890984
- type: f1
value: 66.41414141414141
- type: f1_weighted
value: 66.39337093882548
- type: main_score
value: 67.09956709956711
task:
type: Classification
- dataset:
config: default
name: MTEB OPP115UserChoiceControlLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 70.69857697283311
- type: ap
value: 63.961545634799855
- type: ap_weighted
value: 63.961545634799855
- type: f1
value: 70.33565944829778
- type: f1_weighted
value: 70.34414874711732
- type: main_score
value: 70.69857697283311
task:
type: Classification
- dataset:
config: default
name: MTEB OralArgumentQuestionPurposeLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 20.51282051282051
- type: f1
value: 17.434477437885
- type: f1_weighted
value: 21.50138868825342
- type: main_score
value: 20.51282051282051
task:
type: Classification
- dataset:
config: default
name: MTEB OverrulingLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 69.580078125
- type: ap
value: 64.66695246425695
- type: ap_weighted
value: 64.66695246425695
- type: f1
value: 69.55969170904413
- type: f1_weighted
value: 69.5473829295991
- type: main_score
value: 69.580078125
task:
type: Classification
- dataset:
config: default
name: MTEB PROALegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 49.47368421052632
- type: ap
value: 49.47368421052632
- type: ap_weighted
value: 49.47368421052632
- type: f1
value: 33.09859154929578
- type: f1_weighted
value: 32.750185322461085
- type: main_score
value: 49.47368421052632
task:
type: Classification
- dataset:
config: default
name: MTEB PatentClassification (default)
revision: 2f38a1dfdecfacee0184d74eaeafd3c0fb49d2a6
split: test
type: ccdv/patent-classification
metrics:
- type: accuracy
value: 29.306640625000004
- type: f1
value: 22.127646065227754
- type: f1_weighted
value: 26.66185625260182
- type: main_score
value: 29.306640625000004
task:
type: Classification
- dataset:
config: default
name: MTEB PersonalJurisdictionLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 51.99999999999999
- type: ap
value: 44.107526881720425
- type: ap_weighted
value: 44.107526881720425
- type: f1
value: 51.92307692307692
- type: f1_weighted
value: 51.61538461538463
- type: main_score
value: 51.99999999999999
task:
type: Classification
- dataset:
config: default
name: MTEB PoemSentimentClassification (default)
revision: 329d529d875a00c47ec71954a1a96ae167584770
split: test
type: google-research-datasets/poem_sentiment
metrics:
- type: accuracy
value: 35.96153846153845
- type: f1
value: 25.717059445124445
- type: f1_weighted
value: 42.39026561619051
- type: main_score
value: 35.96153846153845
task:
type: Classification
- dataset:
config: default
name: MTEB PoemSentimentClassification (default)
revision: 329d529d875a00c47ec71954a1a96ae167584770
split: validation
type: google-research-datasets/poem_sentiment
metrics:
- type: accuracy
value: 35.80952380952381
- type: f1
value: 26.76432080315997
- type: f1_weighted
value: 41.90402765909788
- type: main_score
value: 35.80952380952381
task:
type: Classification
- dataset:
config: default
name: MTEB RUParaPhraserSTS (default)
revision: 43265056790b8f7c59e0139acb4be0a8dad2c8f4
split: test
type: merionum/ru_paraphraser
metrics:
- type: cosine_pearson
value: 65.17293362215221
- type: cosine_spearman
value: 72.14872507255558
- type: euclidean_pearson
value: 69.39028550512482
- type: euclidean_spearman
value: 72.14872507255558
- type: main_score
value: 72.14872507255558
- type: manhattan_pearson
value: 69.30934614737492
- type: manhattan_spearman
value: 72.04933049290007
task:
type: STS
- dataset:
config: default
name: MTEB RedditClustering (default)
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
split: test
type: mteb/reddit-clustering
metrics:
- type: main_score
value: 26.275710753496597
- type: v_measure
value: 26.275710753496597
- type: v_measure_std
value: 4.029689555202136
task:
type: Clustering
- dataset:
config: default
name: MTEB RedditClusteringP2P (default)
revision: 385e3cb46b4cfa89021f56c4380204149d0efe33
split: test
type: mteb/reddit-clustering-p2p
metrics:
- type: main_score
value: 40.4828876757081
- type: v_measure
value: 40.4828876757081
- type: v_measure_std
value: 10.162859998011204
task:
type: Clustering
- dataset:
config: default
name: MTEB RiaNewsRetrieval (default)
revision: 82374b0bbacda6114f39ff9c5b925fa1512ca5d7
split: test
type: ai-forever/ria-news-retrieval
metrics:
- type: main_score
value: 51.271
- type: map_at_1
value: 36.21
- type: map_at_10
value: 46.208
- type: map_at_100
value: 47.004000000000005
- type: map_at_1000
value: 47.044000000000004
- type: map_at_20
value: 46.693
- type: map_at_3
value: 43.669999999999995
- type: map_at_5
value: 45.196
- type: mrr_at_1
value: 36.22
- type: mrr_at_10
value: 46.21178571428571
- type: mrr_at_100
value: 47.007420014661236
- type: mrr_at_1000
value: 47.04734848842366
- type: mrr_at_20
value: 46.69688042104938
- type: mrr_at_3
value: 43.668333333333585
- type: mrr_at_5
value: 45.199833333333274
- type: nauc_map_at_1000_diff1
value: 46.94937854830209
- type: nauc_map_at_1000_max
value: 20.810031674720868
- type: nauc_map_at_1000_std
value: -2.8474964036416845
- type: nauc_map_at_100_diff1
value: 46.93710679472339
- type: nauc_map_at_100_max
value: 20.808355966268614
- type: nauc_map_at_100_std
value: -2.8341393346842607
- type: nauc_map_at_10_diff1
value: 46.85305633304179
- type: nauc_map_at_10_max
value: 20.74714400194472
- type: nauc_map_at_10_std
value: -3.0251519873045534
- type: nauc_map_at_1_diff1
value: 52.76907950247656
- type: nauc_map_at_1_max
value: 20.909404191190152
- type: nauc_map_at_1_std
value: -4.486212769404569
- type: nauc_map_at_20_diff1
value: 46.854283528399826
- type: nauc_map_at_20_max
value: 20.774565284237017
- type: nauc_map_at_20_std
value: -2.8952917224271846
- type: nauc_map_at_3_diff1
value: 47.6120187355803
- type: nauc_map_at_3_max
value: 20.94624350299643
- type: nauc_map_at_3_std
value: -3.5249841066101704
- type: nauc_map_at_5_diff1
value: 46.961741404854
- type: nauc_map_at_5_max
value: 20.84061893727113
- type: nauc_map_at_5_std
value: -3.2560895841762707
- type: nauc_mrr_at_1000_diff1
value: 46.94210158390746
- type: nauc_mrr_at_1000_max
value: 20.823017819566672
- type: nauc_mrr_at_1000_std
value: -2.873564388596409
- type: nauc_mrr_at_100_diff1
value: 46.92983853646228
- type: nauc_mrr_at_100_max
value: 20.821328345843625
- type: nauc_mrr_at_100_std
value: -2.860179131955564
- type: nauc_mrr_at_10_diff1
value: 46.845920501930316
- type: nauc_mrr_at_10_max
value: 20.760199941251056
- type: nauc_mrr_at_10_std
value: -3.0506119945281385
- type: nauc_mrr_at_1_diff1
value: 52.7384650230153
- type: nauc_mrr_at_1_max
value: 20.916918175962735
- type: nauc_mrr_at_1_std
value: -4.553119995428164
- type: nauc_mrr_at_20_diff1
value: 46.84707480256205
- type: nauc_mrr_at_20_max
value: 20.78745076885492
- type: nauc_mrr_at_20_std
value: -2.921144125415831
- type: nauc_mrr_at_3_diff1
value: 47.621438923503305
- type: nauc_mrr_at_3_max
value: 20.964983104645327
- type: nauc_mrr_at_3_std
value: -3.5359639119054154
- type: nauc_mrr_at_5_diff1
value: 46.95496065526142
- type: nauc_mrr_at_5_max
value: 20.85370692098222
- type: nauc_mrr_at_5_std
value: -3.2815901993324985
- type: nauc_ndcg_at_1000_diff1
value: 45.22512963946746
- type: nauc_ndcg_at_1000_max
value: 20.827437126737433
- type: nauc_ndcg_at_1000_std
value: -1.5970972641072643
- type: nauc_ndcg_at_100_diff1
value: 44.870296183306195
- type: nauc_ndcg_at_100_max
value: 20.734194655306457
- type: nauc_ndcg_at_100_std
value: -1.1285720744844427
- type: nauc_ndcg_at_10_diff1
value: 44.428914407493004
- type: nauc_ndcg_at_10_max
value: 20.440243514420057
- type: nauc_ndcg_at_10_std
value: -2.1210028369378167
- type: nauc_ndcg_at_1_diff1
value: 52.76907950247656
- type: nauc_ndcg_at_1_max
value: 20.909404191190152
- type: nauc_ndcg_at_1_std
value: -4.486212769404569
- type: nauc_ndcg_at_20_diff1
value: 44.333669717530185
- type: nauc_ndcg_at_20_max
value: 20.503130801298607
- type: nauc_ndcg_at_20_std
value: -1.6040287688898405
- type: nauc_ndcg_at_3_diff1
value: 45.988171772625634
- type: nauc_ndcg_at_3_max
value: 20.901834276482294
- type: nauc_ndcg_at_3_std
value: -3.228341348463241
- type: nauc_ndcg_at_5_diff1
value: 44.77257666022731
- type: nauc_ndcg_at_5_max
value: 20.70409124701764
- type: nauc_ndcg_at_5_std
value: -2.7157792836026826
- type: nauc_precision_at_1000_diff1
value: 24.715455802573878
- type: nauc_precision_at_1000_max
value: 25.642760620422127
- type: nauc_precision_at_1000_std
value: 20.124139669932596
- type: nauc_precision_at_100_diff1
value: 31.317204301075428
- type: nauc_precision_at_100_max
value: 20.717841497411385
- type: nauc_precision_at_100_std
value: 15.071826819138575
- type: nauc_precision_at_10_diff1
value: 35.455731038677605
- type: nauc_precision_at_10_max
value: 19.1279684555736
- type: nauc_precision_at_10_std
value: 1.47750077627525
- type: nauc_precision_at_1_diff1
value: 52.76907950247656
- type: nauc_precision_at_1_max
value: 20.909404191190152
- type: nauc_precision_at_1_std
value: -4.486212769404569
- type: nauc_precision_at_20_diff1
value: 33.12837939512509
- type: nauc_precision_at_20_max
value: 19.114872213547194
- type: nauc_precision_at_20_std
value: 4.913450374911581
- type: nauc_precision_at_3_diff1
value: 41.17113816710835
- type: nauc_precision_at_3_max
value: 20.751510760974718
- type: nauc_precision_at_3_std
value: -2.3503705806184496
- type: nauc_precision_at_5_diff1
value: 37.71917213552412
- type: nauc_precision_at_5_max
value: 20.221342669216565
- type: nauc_precision_at_5_std
value: -0.9301420941546075
- type: nauc_recall_at_1000_diff1
value: 24.715455802574407
- type: nauc_recall_at_1000_max
value: 25.64276062042252
- type: nauc_recall_at_1000_std
value: 20.124139669932728
- type: nauc_recall_at_100_diff1
value: 31.31720430107529
- type: nauc_recall_at_100_max
value: 20.717841497411516
- type: nauc_recall_at_100_std
value: 15.071826819138751
- type: nauc_recall_at_10_diff1
value: 35.455731038677655
- type: nauc_recall_at_10_max
value: 19.127968455573654
- type: nauc_recall_at_10_std
value: 1.47750077627532
- type: nauc_recall_at_1_diff1
value: 52.76907950247656
- type: nauc_recall_at_1_max
value: 20.909404191190152
- type: nauc_recall_at_1_std
value: -4.486212769404569
- type: nauc_recall_at_20_diff1
value: 33.12837939512524
- type: nauc_recall_at_20_max
value: 19.1148722135474
- type: nauc_recall_at_20_std
value: 4.91345037491176
- type: nauc_recall_at_3_diff1
value: 41.171138167108374
- type: nauc_recall_at_3_max
value: 20.751510760974682
- type: nauc_recall_at_3_std
value: -2.35037058061848
- type: nauc_recall_at_5_diff1
value: 37.71917213552414
- type: nauc_recall_at_5_max
value: 20.221342669216575
- type: nauc_recall_at_5_std
value: -0.9301420941545763
- type: ndcg_at_1
value: 36.21
- type: ndcg_at_10
value: 51.271
- type: ndcg_at_100
value: 55.289
- type: ndcg_at_1000
value: 56.401
- type: ndcg_at_20
value: 53.028
- type: ndcg_at_3
value: 46.078
- type: ndcg_at_5
value: 48.825
- type: precision_at_1
value: 36.21
- type: precision_at_10
value: 6.7250000000000005
- type: precision_at_100
value: 0.864
- type: precision_at_1000
value: 0.095
- type: precision_at_20
value: 3.7089999999999996
- type: precision_at_3
value: 17.68
- type: precision_at_5
value: 11.940000000000001
- type: recall_at_1
value: 36.21
- type: recall_at_10
value: 67.25
- type: recall_at_100
value: 86.4
- type: recall_at_1000
value: 95.26
- type: recall_at_20
value: 74.18
- type: recall_at_3
value: 53.04
- type: recall_at_5
value: 59.699999999999996
task:
type: Retrieval
- dataset:
config: default
name: MTEB RuBQReranking (default)
revision: 2e96b8f098fa4b0950fc58eacadeb31c0d0c7fa2
split: test
type: ai-forever/rubq-reranking
metrics:
- type: main_score
value: 62.15027154459556
- type: map
value: 62.15027154459556
- type: mrr
value: 68.09500782905037
- type: nAUC_map_diff1
value: 33.062970148901556
- type: nAUC_map_max
value: 11.090302786599219
- type: nAUC_map_std
value: 5.660375803457896
- type: nAUC_mrr_diff1
value: 35.578332777596685
- type: nAUC_mrr_max
value: 14.981311816105839
- type: nAUC_mrr_std
value: 5.550039824115788
task:
type: Reranking
- dataset:
config: default
name: MTEB RuBQRetrieval (default)
revision: e19b6ffa60b3bc248e0b41f4cc37c26a55c2a67b
split: test
type: ai-forever/rubq-retrieval
metrics:
- type: main_score
value: 51.734
- type: map_at_1
value: 28.510999999999996
- type: map_at_10
value: 43.631
- type: map_at_100
value: 44.988
- type: map_at_1000
value: 45.052
- type: map_at_20
value: 44.462
- type: map_at_3
value: 38.937
- type: map_at_5
value: 41.833
- type: mrr_at_1
value: 41.312056737588655
- type: mrr_at_10
value: 53.36138316634781
- type: mrr_at_100
value: 53.949276632310216
- type: mrr_at_1000
value: 53.97463197704906
- type: mrr_at_20
value: 53.72140863635181
- type: mrr_at_3
value: 50.43341213553989
- type: mrr_at_5
value: 52.32466509062269
- type: nauc_map_at_1000_diff1
value: 28.763838953386795
- type: nauc_map_at_1000_max
value: 24.058720207454833
- type: nauc_map_at_1000_std
value: 0.43914028345667794
- type: nauc_map_at_100_diff1
value: 28.74115734128027
- type: nauc_map_at_100_max
value: 24.067201633751907
- type: nauc_map_at_100_std
value: 0.48479657643151175
- type: nauc_map_at_10_diff1
value: 28.78055585777882
- type: nauc_map_at_10_max
value: 23.660824446842014
- type: nauc_map_at_10_std
value: -0.13417257945838412
- type: nauc_map_at_1_diff1
value: 31.726698171475988
- type: nauc_map_at_1_max
value: 18.706684051084675
- type: nauc_map_at_1_std
value: -3.1112088462944576
- type: nauc_map_at_20_diff1
value: 28.821888050893524
- type: nauc_map_at_20_max
value: 24.054108877450066
- type: nauc_map_at_20_std
value: 0.29933097295171895
- type: nauc_map_at_3_diff1
value: 29.414059668041187
- type: nauc_map_at_3_max
value: 21.603288627966425
- type: nauc_map_at_3_std
value: -1.2582454726026868
- type: nauc_map_at_5_diff1
value: 28.763709067820066
- type: nauc_map_at_5_max
value: 22.83472652858084
- type: nauc_map_at_5_std
value: -0.9139576784503077
- type: nauc_mrr_at_1000_diff1
value: 32.788260400997885
- type: nauc_mrr_at_1000_max
value: 26.645815716166126
- type: nauc_mrr_at_1000_std
value: -1.751195655856463
- type: nauc_mrr_at_100_diff1
value: 32.77886459571929
- type: nauc_mrr_at_100_max
value: 26.65637126850806
- type: nauc_mrr_at_100_std
value: -1.7267980184678584
- type: nauc_mrr_at_10_diff1
value: 32.78874216502045
- type: nauc_mrr_at_10_max
value: 26.4839655119896
- type: nauc_mrr_at_10_std
value: -1.9790149014956449
- type: nauc_mrr_at_1_diff1
value: 35.13232635364635
- type: nauc_mrr_at_1_max
value: 23.697653866746013
- type: nauc_mrr_at_1_std
value: -3.229619940147812
- type: nauc_mrr_at_20_diff1
value: 32.77802354989702
- type: nauc_mrr_at_20_max
value: 26.68040225454969
- type: nauc_mrr_at_20_std
value: -1.75616956975016
- type: nauc_mrr_at_3_diff1
value: 32.984816761600435
- type: nauc_mrr_at_3_max
value: 26.13901825373233
- type: nauc_mrr_at_3_std
value: -2.52193076369521
- type: nauc_mrr_at_5_diff1
value: 32.84967841683121
- type: nauc_mrr_at_5_max
value: 26.529547373322448
- type: nauc_mrr_at_5_std
value: -2.5581887401849595
- type: nauc_ndcg_at_1000_diff1
value: 28.596338371171104
- type: nauc_ndcg_at_1000_max
value: 26.398864343527546
- type: nauc_ndcg_at_1000_std
value: 2.0928142009674264
- type: nauc_ndcg_at_100_diff1
value: 28.25901263389625
- type: nauc_ndcg_at_100_max
value: 26.93052809711281
- type: nauc_ndcg_at_100_std
value: 3.1368035623322266
- type: nauc_ndcg_at_10_diff1
value: 28.273504061219295
- type: nauc_ndcg_at_10_max
value: 25.70274506672966
- type: nauc_ndcg_at_10_std
value: 1.031980357515916
- type: nauc_ndcg_at_1_diff1
value: 35.288927336386486
- type: nauc_ndcg_at_1_max
value: 23.407964640774143
- type: nauc_ndcg_at_1_std
value: -3.2088824424845743
- type: nauc_ndcg_at_20_diff1
value: 28.27252389476242
- type: nauc_ndcg_at_20_max
value: 26.959280568356686
- type: nauc_ndcg_at_20_std
value: 2.355748254409649
- type: nauc_ndcg_at_3_diff1
value: 29.507109145825144
- type: nauc_ndcg_at_3_max
value: 23.171704666301913
- type: nauc_ndcg_at_3_std
value: -1.4521550440778286
- type: nauc_ndcg_at_5_diff1
value: 28.488416363267216
- type: nauc_ndcg_at_5_max
value: 24.63470555569984
- type: nauc_ndcg_at_5_std
value: -0.9243408985702865
- type: nauc_precision_at_1000_diff1
value: -1.6853041487515183
- type: nauc_precision_at_1000_max
value: 7.960967030916032
- type: nauc_precision_at_1000_std
value: 3.6491508412352784
- type: nauc_precision_at_100_diff1
value: 1.1138125936003078
- type: nauc_precision_at_100_max
value: 14.425287491557784
- type: nauc_precision_at_100_std
value: 8.976522577047673
- type: nauc_precision_at_10_diff1
value: 9.746060862351767
- type: nauc_precision_at_10_max
value: 21.23608774117671
- type: nauc_precision_at_10_std
value: 5.704741335087523
- type: nauc_precision_at_1_diff1
value: 35.288927336386486
- type: nauc_precision_at_1_max
value: 23.407964640774143
- type: nauc_precision_at_1_std
value: -3.2088824424845743
- type: nauc_precision_at_20_diff1
value: 6.326610022834949
- type: nauc_precision_at_20_max
value: 20.35842844947274
- type: nauc_precision_at_20_std
value: 8.561077634074318
- type: nauc_precision_at_3_diff1
value: 20.23921207457269
- type: nauc_precision_at_3_max
value: 22.983126702497753
- type: nauc_precision_at_3_std
value: 0.3762065769613514
- type: nauc_precision_at_5_diff1
value: 14.130374029335451
- type: nauc_precision_at_5_max
value: 22.27280203101339
- type: nauc_precision_at_5_std
value: 1.4403304333986182
- type: nauc_recall_at_1000_diff1
value: 5.336939388003354
- type: nauc_recall_at_1000_max
value: 31.706880957377347
- type: nauc_recall_at_1000_std
value: 34.42854130495
- type: nauc_recall_at_100_diff1
value: 13.06348098921675
- type: nauc_recall_at_100_max
value: 35.43003105581946
- type: nauc_recall_at_100_std
value: 28.949432461425634
- type: nauc_recall_at_10_diff1
value: 19.58510835348359
- type: nauc_recall_at_10_max
value: 25.98205980928563
- type: nauc_recall_at_10_std
value: 6.643640648680416
- type: nauc_recall_at_1_diff1
value: 31.726698171475988
- type: nauc_recall_at_1_max
value: 18.706684051084675
- type: nauc_recall_at_1_std
value: -3.1112088462944576
- type: nauc_recall_at_20_diff1
value: 17.50381042355996
- type: nauc_recall_at_20_max
value: 31.185904487900324
- type: nauc_recall_at_20_std
value: 13.510200942211565
- type: nauc_recall_at_3_diff1
value: 24.227382984516147
- type: nauc_recall_at_3_max
value: 21.40248626451014
- type: nauc_recall_at_3_std
value: -0.469137375497106
- type: nauc_recall_at_5_diff1
value: 21.25980638967181
- type: nauc_recall_at_5_max
value: 23.853364661344404
- type: nauc_recall_at_5_std
value: 0.7407724495151051
- type: ndcg_at_1
value: 41.253
- type: ndcg_at_10
value: 51.734
- type: ndcg_at_100
value: 56.796
- type: ndcg_at_1000
value: 58.044
- type: ndcg_at_20
value: 53.982
- type: ndcg_at_3
value: 44.448
- type: ndcg_at_5
value: 48.306
- type: precision_at_1
value: 41.253
- type: precision_at_10
value: 10.674
- type: precision_at_100
value: 1.437
- type: precision_at_1000
value: 0.159
- type: precision_at_20
value: 6.0280000000000005
- type: precision_at_3
value: 24.901
- type: precision_at_5
value: 18.038
- type: recall_at_1
value: 28.510999999999996
- type: recall_at_10
value: 65.646
- type: recall_at_100
value: 86.37
- type: recall_at_1000
value: 94.926
- type: recall_at_20
value: 73.236
- type: recall_at_3
value: 47.492000000000004
- type: recall_at_5
value: 56.552
task:
type: Retrieval
- dataset:
config: default
name: MTEB RuReviewsClassification (default)
revision: f6d2c31f4dc6b88f468552750bfec05b4b41b05a
split: test
type: ai-forever/ru-reviews-classification
metrics:
- type: accuracy
value: 60.6591796875
- type: f1
value: 60.34177974754267
- type: f1_weighted
value: 60.3424791407144
- type: main_score
value: 60.6591796875
task:
type: Classification
- dataset:
config: default
name: MTEB RuSTSBenchmarkSTS (default)
revision: 7cf24f325c6da6195df55bef3d86b5e0616f3018
split: test
type: ai-forever/ru-stsbenchmark-sts
metrics:
- type: cosine_pearson
value: 78.67181755069355
- type: cosine_spearman
value: 78.48157070388886
- type: euclidean_pearson
value: 78.16400243944963
- type: euclidean_spearman
value: 78.48124817526005
- type: main_score
value: 78.48157070388886
- type: manhattan_pearson
value: 78.04437263885238
- type: manhattan_spearman
value: 78.34292373482941
task:
type: STS
- dataset:
config: default
name: MTEB RuSciBenchGRNTIClassification (default)
revision: 673a610d6d3dd91a547a0d57ae1b56f37ebbf6a1
split: test
type: ai-forever/ru-scibench-grnti-classification
metrics:
- type: accuracy
value: 52.9296875
- type: f1
value: 51.36892216551846
- type: f1_weighted
value: 51.38263945115431
- type: main_score
value: 52.9296875
task:
type: Classification
- dataset:
config: default
name: MTEB RuSciBenchGRNTIClusteringP2P (default)
revision: 673a610d6d3dd91a547a0d57ae1b56f37ebbf6a1
split: test
type: ai-forever/ru-scibench-grnti-classification
metrics:
- type: main_score
value: 47.548401486969844
- type: v_measure
value: 47.548401486969844
- type: v_measure_std
value: 0.9652047055316595
task:
type: Clustering
- dataset:
config: default
name: MTEB RuSciBenchOECDClassification (default)
revision: 26c88e99dcaba32bb45d0e1bfc21902337f6d471
split: test
type: ai-forever/ru-scibench-oecd-classification
metrics:
- type: accuracy
value: 40.7861328125
- type: f1
value: 38.417161317304625
- type: f1_weighted
value: 38.41751508417981
- type: main_score
value: 40.7861328125
task:
type: Classification
- dataset:
config: default
name: MTEB RuSciBenchOECDClusteringP2P (default)
revision: 26c88e99dcaba32bb45d0e1bfc21902337f6d471
split: test
type: ai-forever/ru-scibench-oecd-classification
metrics:
- type: main_score
value: 41.44039335680795
- type: v_measure
value: 41.44039335680795
- type: v_measure_std
value: 1.2447867997057736
task:
type: Clustering
- dataset:
config: default
name: MTEB SCDBPAccountabilityLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 64.64379947229551
- type: ap
value: 91.77095548714944
- type: ap_weighted
value: 91.77095548714944
- type: f1
value: 56.37541231445849
- type: f1_weighted
value: 70.25628045216064
- type: main_score
value: 64.64379947229551
task:
type: Classification
- dataset:
config: default
name: MTEB SCDBPAuditsLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 59.89445910290237
- type: ap
value: 75.9408508806894
- type: ap_weighted
value: 75.9408508806894
- type: f1
value: 59.26805814808528
- type: f1_weighted
value: 61.147261012536525
- type: main_score
value: 59.89445910290237
task:
type: Classification
- dataset:
config: default
name: MTEB SCDBPCertificationLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 59.78835978835979
- type: ap
value: 79.40365504574285
- type: ap_weighted
value: 79.40365504574285
- type: f1
value: 56.06802055297283
- type: f1_weighted
value: 62.49406105045939
- type: main_score
value: 59.78835978835979
task:
type: Classification
- dataset:
config: default
name: MTEB SCDBPTrainingLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 59.102902374670194
- type: ap
value: 78.86277214171828
- type: ap_weighted
value: 78.86277214171828
- type: f1
value: 58.122144043570934
- type: f1_weighted
value: 60.91223239928431
- type: main_score
value: 59.102902374670194
task:
type: Classification
- dataset:
config: default
name: MTEB SCDBPVerificationLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 62.796833773087066
- type: ap
value: 66.09764646131225
- type: ap_weighted
value: 66.09764646131225
- type: f1
value: 62.562263119916494
- type: f1_weighted
value: 62.19476909661592
- type: main_score
value: 62.796833773087066
task:
type: Classification
- dataset:
config: default
name: MTEB SCDDAccountabilityLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 60.84656084656085
- type: ap
value: 96.40608145845859
- type: ap_weighted
value: 96.40608145845859
- type: f1
value: 46.04166666666668
- type: f1_weighted
value: 71.16512345679011
- type: main_score
value: 60.84656084656085
task:
type: Classification
- dataset:
config: default
name: MTEB SCDDAuditsLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 61.741424802110814
- type: ap
value: 94.08312772646127
- type: ap_weighted
value: 94.08312772646127
- type: f1
value: 50.59825064499599
- type: f1_weighted
value: 69.72736628137642
- type: main_score
value: 61.741424802110814
task:
type: Classification
- dataset:
config: default
name: MTEB SCDDCertificationLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 62.43386243386243
- type: ap
value: 92.94462068443907
- type: ap_weighted
value: 92.94462068443907
- type: f1
value: 49.37181663837012
- type: f1_weighted
value: 70.32551510197236
- type: main_score
value: 62.43386243386243
task:
type: Classification
- dataset:
config: default
name: MTEB SCDDTrainingLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 53.825857519788926
- type: ap
value: 89.02073335965477
- type: ap_weighted
value: 89.02073335965477
- type: f1
value: 47.22918407128933
- type: f1_weighted
value: 60.86559112527728
- type: main_score
value: 53.825857519788926
task:
type: Classification
- dataset:
config: default
name: MTEB SCDDVerificationLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 49.07651715039577
- type: ap
value: 76.04960744098202
- type: ap_weighted
value: 76.04960744098202
- type: f1
value: 47.939930963310914
- type: f1_weighted
value: 51.65413225324895
- type: main_score
value: 49.07651715039577
task:
type: Classification
- dataset:
config: zh
name: MTEB STS22 (zh)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics:
- type: cosine_pearson
value: 10.783707479640047
- type: cosine_spearman
value: 32.82859566062349
- type: euclidean_pearson
value: 21.280811252412548
- type: euclidean_spearman
value: 32.82859566062349
- type: main_score
value: 32.82859566062349
- type: manhattan_pearson
value: 21.510100649883686
- type: manhattan_spearman
value: 32.924353350152195
task:
type: STS
- dataset:
config: de-fr
name: MTEB STS22 (de-fr)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics:
- type: cosine_pearson
value: 10.185699265034293
- type: cosine_spearman
value: 17.504453225721367
- type: euclidean_pearson
value: 11.256743769494715
- type: euclidean_spearman
value: 17.504453225721367
- type: main_score
value: 17.504453225721367
- type: manhattan_pearson
value: 9.741426548627869
- type: manhattan_spearman
value: 16.976476678309815
task:
type: STS
- dataset:
config: pl-en
name: MTEB STS22 (pl-en)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics:
- type: cosine_pearson
value: 44.8697112464095
- type: cosine_spearman
value: 42.075721562892944
- type: euclidean_pearson
value: 43.40637455102888
- type: euclidean_spearman
value: 42.075721562892944
- type: main_score
value: 42.075721562892944
- type: manhattan_pearson
value: 45.13522626066653
- type: manhattan_spearman
value: 42.53935152687679
task:
type: STS
- dataset:
config: ru
name: MTEB STS22 (ru)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics:
- type: cosine_pearson
value: 51.4108131114559
- type: cosine_spearman
value: 60.05716921675363
- type: euclidean_pearson
value: 52.595208834301246
- type: euclidean_spearman
value: 60.05157835366835
- type: main_score
value: 60.05716921675363
- type: manhattan_pearson
value: 52.49640999228367
- type: manhattan_spearman
value: 59.89412865698913
task:
type: STS
- dataset:
config: fr
name: MTEB STS22 (fr)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics:
- type: cosine_pearson
value: 26.610436064600535
- type: cosine_spearman
value: 42.00247648193326
- type: euclidean_pearson
value: 33.894760545223065
- type: euclidean_spearman
value: 42.00247648193326
- type: main_score
value: 42.00247648193326
- type: manhattan_pearson
value: 33.80795212984925
- type: manhattan_spearman
value: 42.14922985413102
task:
type: STS
- dataset:
config: de
name: MTEB STS22 (de)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics:
- type: cosine_pearson
value: -5.737945045891398
- type: cosine_spearman
value: 8.163885149544491
- type: euclidean_pearson
value: -2.214478704390943
- type: euclidean_spearman
value: 8.16472976205313
- type: main_score
value: 8.163885149544491
- type: manhattan_pearson
value: -1.7539096573944195
- type: manhattan_spearman
value: 8.6906872178124
task:
type: STS
- dataset:
config: tr
name: MTEB STS22 (tr)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics:
- type: cosine_pearson
value: 2.043942714330888
- type: cosine_spearman
value: 15.459553758272923
- type: euclidean_pearson
value: 8.816942314411607
- type: euclidean_spearman
value: 15.459553758272923
- type: main_score
value: 15.459553758272923
- type: manhattan_pearson
value: 9.32963790399984
- type: manhattan_spearman
value: 15.7857074615967
task:
type: STS
- dataset:
config: de-en
name: MTEB STS22 (de-en)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics:
- type: cosine_pearson
value: 17.695301514955418
- type: cosine_spearman
value: 21.545599222945675
- type: euclidean_pearson
value: 18.353827841283753
- type: euclidean_spearman
value: 21.545599222945675
- type: main_score
value: 21.545599222945675
- type: manhattan_pearson
value: 17.009036963688505
- type: manhattan_spearman
value: 20.508582325360287
task:
type: STS
- dataset:
config: it
name: MTEB STS22 (it)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics:
- type: cosine_pearson
value: 32.630588839696415
- type: cosine_spearman
value: 39.69250140711604
- type: euclidean_pearson
value: 37.54122176804933
- type: euclidean_spearman
value: 39.69250140711604
- type: main_score
value: 39.69250140711604
- type: manhattan_pearson
value: 37.79703600372667
- type: manhattan_spearman
value: 39.742229485575024
task:
type: STS
- dataset:
config: pl
name: MTEB STS22 (pl)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics:
- type: cosine_pearson
value: 0.3113685198259237
- type: cosine_spearman
value: 9.707385637292596
- type: euclidean_pearson
value: -2.4832855952463206
- type: euclidean_spearman
value: 9.80177503118972
- type: main_score
value: 9.707385637292596
- type: manhattan_pearson
value: -2.325293004138977
- type: manhattan_spearman
value: 10.060452403624826
task:
type: STS
- dataset:
config: fr-pl
name: MTEB STS22 (fr-pl)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics:
- type: cosine_pearson
value: 47.546556133158575
- type: cosine_spearman
value: 39.440531887330785
- type: euclidean_pearson
value: 48.2920143634797
- type: euclidean_spearman
value: 39.440531887330785
- type: main_score
value: 39.440531887330785
- type: manhattan_pearson
value: 45.769523538925824
- type: manhattan_spearman
value: 50.709255283710995
task:
type: STS
- dataset:
config: de-pl
name: MTEB STS22 (de-pl)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics:
- type: cosine_pearson
value: 0.33007020080694816
- type: cosine_spearman
value: 25.52831180119127
- type: euclidean_pearson
value: 5.7124033000823164
- type: euclidean_spearman
value: 25.52831180119127
- type: main_score
value: 25.52831180119127
- type: manhattan_pearson
value: 5.62314566860622
- type: manhattan_spearman
value: 23.83463610871175
task:
type: STS
- dataset:
config: ar
name: MTEB STS22 (ar)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics:
- type: cosine_pearson
value: 22.766025640460693
- type: cosine_spearman
value: 27.950069575571522
- type: euclidean_pearson
value: 26.551723755491363
- type: euclidean_spearman
value: 27.939678639817668
- type: main_score
value: 27.950069575571522
- type: manhattan_pearson
value: 26.681060475093854
- type: manhattan_spearman
value: 27.986878582632468
task:
type: STS
- dataset:
config: es-en
name: MTEB STS22 (es-en)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics:
- type: cosine_pearson
value: 38.597910358452815
- type: cosine_spearman
value: 42.766194189894094
- type: euclidean_pearson
value: 39.991306255692045
- type: euclidean_spearman
value: 42.766194189894094
- type: main_score
value: 42.766194189894094
- type: manhattan_pearson
value: 39.74918349185897
- type: manhattan_spearman
value: 42.574140880355976
task:
type: STS
- dataset:
config: es-it
name: MTEB STS22 (es-it)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics:
- type: cosine_pearson
value: 31.245905627830638
- type: cosine_spearman
value: 32.83240215980029
- type: euclidean_pearson
value: 33.06481984956772
- type: euclidean_spearman
value: 32.83240215980029
- type: main_score
value: 32.83240215980029
- type: manhattan_pearson
value: 32.75706899386791
- type: manhattan_spearman
value: 32.334081823391806
task:
type: STS
- dataset:
config: es
name: MTEB STS22 (es)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics:
- type: cosine_pearson
value: 16.966347433363
- type: cosine_spearman
value: 45.3129129914676
- type: euclidean_pearson
value: 28.50940505249936
- type: euclidean_spearman
value: 45.3129129914676
- type: main_score
value: 45.3129129914676
- type: manhattan_pearson
value: 28.314847203862147
- type: manhattan_spearman
value: 45.72042962859271
task:
type: STS
- dataset:
config: zh-en
name: MTEB STS22 (zh-en)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics:
- type: cosine_pearson
value: 34.66358594216254
- type: cosine_spearman
value: 31.24659955360722
- type: euclidean_pearson
value: 34.878197534840744
- type: euclidean_spearman
value: 31.24659955360722
- type: main_score
value: 31.24659955360722
- type: manhattan_pearson
value: 34.70743093532992
- type: manhattan_spearman
value: 30.441251812127955
task:
type: STS
- dataset:
config: en
name: MTEB STS22 (en)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics:
- type: cosine_pearson
value: 41.376318618780324
- type: cosine_spearman
value: 47.061970299820764
- type: euclidean_pearson
value: 44.89590651276241
- type: euclidean_spearman
value: 47.061970299820764
- type: main_score
value: 47.061970299820764
- type: manhattan_pearson
value: 44.780089700405576
- type: manhattan_spearman
value: 46.742447019531525
task:
type: STS
- dataset:
config: default
name: MTEB SensitiveTopicsClassification (default)
revision: 416b34a802308eac30e4192afc0ff99bb8dcc7f2
split: test
type: ai-forever/sensitive-topics-classification
metrics:
- type: accuracy
value: 24.443359375
- type: f1
value: 21.903258801323084
- type: lrap
value: 36.34758843315896
- type: main_score
value: 24.443359375
task:
type: MultilabelClassification
- dataset:
config: default
name: MTEB StackExchangeClustering (default)
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
split: test
type: mteb/stackexchange-clustering
metrics:
- type: main_score
value: 33.50613168016603
- type: v_measure
value: 33.50613168016603
- type: v_measure_std
value: 3.91782276122889
task:
type: Clustering
- dataset:
config: default
name: MTEB StackExchangeClusteringP2P (default)
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
split: test
type: mteb/stackexchange-clustering-p2p
metrics:
- type: main_score
value: 27.98150942889309
- type: v_measure
value: 27.98150942889309
- type: v_measure_std
value: 2.0056109104136226
task:
type: Clustering
- dataset:
config: default
name: MTEB TERRa (default)
revision: 7b58f24536063837d644aab9a023c62199b2a612
split: dev
type: ai-forever/terra-pairclassification
metrics:
- type: cosine_accuracy
value: 59.60912052117264
- type: cosine_accuracy_threshold
value: 81.55556917190552
- type: cosine_ap
value: 56.08760299515377
- type: cosine_f1
value: 67.33167082294264
- type: cosine_f1_threshold
value: 78.14505100250244
- type: cosine_precision
value: 54.43548387096774
- type: cosine_recall
value: 88.23529411764706
- type: dot_accuracy
value: 59.60912052117264
- type: dot_accuracy_threshold
value: 81.55556917190552
- type: dot_ap
value: 56.08760299515377
- type: dot_f1
value: 67.33167082294264
- type: dot_f1_threshold
value: 78.14503908157349
- type: dot_precision
value: 54.43548387096774
- type: dot_recall
value: 88.23529411764706
- type: euclidean_accuracy
value: 59.60912052117264
- type: euclidean_accuracy_threshold
value: 60.736143589019775
- type: euclidean_ap
value: 56.08760299515377
- type: euclidean_f1
value: 67.33167082294264
- type: euclidean_f1_threshold
value: 66.11342430114746
- type: euclidean_precision
value: 54.43548387096774
- type: euclidean_recall
value: 88.23529411764706
- type: main_score
value: 56.265447472512676
- type: manhattan_accuracy
value: 60.91205211726385
- type: manhattan_accuracy_threshold
value: 877.9421806335449
- type: manhattan_ap
value: 56.265447472512676
- type: manhattan_f1
value: 67.16791979949875
- type: manhattan_f1_threshold
value: 930.9440612792969
- type: manhattan_precision
value: 54.47154471544715
- type: manhattan_recall
value: 87.58169934640523
- type: max_ap
value: 56.265447472512676
- type: max_f1
value: 67.33167082294264
- type: max_precision
value: 54.47154471544715
- type: max_recall
value: 88.23529411764706
- type: similarity_accuracy
value: 59.60912052117264
- type: similarity_accuracy_threshold
value: 81.55557513237
- type: similarity_ap
value: 56.08760299515377
- type: similarity_f1
value: 67.33167082294264
- type: similarity_f1_threshold
value: 78.1450629234314
- type: similarity_precision
value: 54.43548387096774
- type: similarity_recall
value: 88.23529411764706
task:
type: PairClassification
- dataset:
config: default
name: MTEB TelemarketingSalesRuleLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 51.06382978723404
- type: ap
value: 64.12529550827422
- type: ap_weighted
value: 64.12529550827422
- type: f1
value: 48.74348032242769
- type: f1_weighted
value: 46.65516580410197
- type: main_score
value: 51.06382978723404
task:
type: Classification
- dataset:
config: default
name: MTEB TextualismToolDictionariesLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 69.1588785046729
- type: ap
value: 13.91484942886812
- type: ap_weighted
value: 13.91484942886812
- type: f1
value: 53.57001972386588
- type: f1_weighted
value: 75.94757507050821
- type: main_score
value: 69.1588785046729
task:
type: Classification
- dataset:
config: default
name: MTEB TextualismToolPlainLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 52.121212121212125
- type: ap
value: 44.68029172320217
- type: ap_weighted
value: 44.68029172320217
- type: f1
value: 50.48433048433048
- type: f1_weighted
value: 48.79288612621945
- type: main_score
value: 52.121212121212125
task:
type: Classification
- dataset:
config: default
name: MTEB ToxicChatClassification (default)
revision: 3e0319203c7162b9c9f8015b594441f979c199bc
split: test
type: lmsys/toxic-chat
metrics:
- type: accuracy
value: 73.56529209621992
- type: ap
value: 21.641229801673067
- type: ap_weighted
value: 21.641229801673067
- type: f1
value: 60.19489676894062
- type: f1_weighted
value: 77.21280694246968
- type: main_score
value: 73.56529209621992
task:
type: Classification
- dataset:
config: default
name: MTEB ToxicConversationsClassification (default)
revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de
split: test
type: mteb/toxic_conversations_50k
metrics:
- type: accuracy
value: 57.7734375
- type: ap
value: 9.305482173252097
- type: ap_weighted
value: 9.305482173252097
- type: f1
value: 44.43839832998249
- type: f1_weighted
value: 67.10615100631958
- type: main_score
value: 57.7734375
task:
type: Classification
- dataset:
config: default
name: MTEB TweetSentimentExtractionClassification (default)
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
split: test
type: mteb/tweet_sentiment_extraction
metrics:
- type: accuracy
value: 55.29994340690435
- type: f1
value: 55.3098112653406
- type: f1_weighted
value: 54.4846442708958
- type: main_score
value: 55.29994340690435
task:
type: Classification
- dataset:
config: default
name: MTEB TweetTopicSingleClassification (default)
revision: 87b7a0d1c402dbb481db649569c556d9aa27ac05
split: test_2021
type: cardiffnlp/tweet_topic_single
metrics:
- type: accuracy
value: 52.522150029533364
- type: f1
value: 40.24714634897976
- type: f1_weighted
value: 57.39523757985323
- type: main_score
value: 52.522150029533364
task:
type: Classification
- dataset:
config: default
name: MTEB TwentyNewsgroupsClustering (default)
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
split: test
type: mteb/twentynewsgroups-clustering
metrics:
- type: main_score
value: 19.90344454285597
- type: v_measure
value: 19.90344454285597
- type: v_measure_std
value: 1.8260774855268984
task:
type: Clustering
- dataset:
config: default
name: MTEB UCCVCommonLawLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 52.127659574468076
- type: ap
value: 42.829212190914326
- type: ap_weighted
value: 42.829212190914326
- type: f1
value: 50.50895050895051
- type: f1_weighted
value: 51.84200503349441
- type: main_score
value: 52.127659574468076
task:
type: Classification
- dataset:
config: default
name: MTEB UnfairTOSLegalBenchClassification (default)
revision: 12ca3b695563788fead87a982ad1a068284413f4
split: test
type: nguha/legalbench
metrics:
- type: accuracy
value: 19.3359375
- type: f1
value: 11.24236763925133
- type: f1_weighted
value: 27.137659267661597
- type: main_score
value: 19.3359375
task:
type: Classification
- dataset:
config: default
name: MTEB VieMedEVBitextMining (default)
revision: d03c69413bc53d1cea5a5375b3a953c4fee35ecd
split: test
type: nhuvo/MedEV
metrics:
- type: accuracy
value: 8.69140625
- type: f1
value: 7.772120924359041
- type: main_score
value: 7.772120924359041
- type: precision
value: 7.525730353438241
- type: recall
value: 8.69140625
task:
type: BitextMining
- dataset:
config: default
name: MTEB WikiCitiesClustering (default)
revision: ddc9ee9242fa65332597f70e967ecc38b9d734fa
split: test
type: jinaai/cities_wiki_clustering
metrics:
- type: main_score
value: 56.66855146861069
- type: v_measure
value: 56.66855146861069
- type: v_measure_std
value: 0.0
task:
type: Clustering
- dataset:
config: default
name: MTEB YahooAnswersTopicsClassification (default)
revision: 78fccffa043240c80e17a6b1da724f5a1057e8e5
split: test
type: community-datasets/yahoo_answers_topics
metrics:
- type: accuracy
value: 41.787109375
- type: f1
value: 40.33967050694529
- type: f1_weighted
value: 40.3509380795682
- type: main_score
value: 41.787109375
task:
type: Classification
- dataset:
config: default
name: MTEB YelpReviewFullClassification (default)
revision: c1f9ee939b7d05667af864ee1cb066393154bf85
split: test
type: Yelp/yelp_review_full
metrics:
- type: accuracy
value: 43.5888671875
- type: f1
value: 42.36578282497966
- type: f1_weighted
value: 42.363220099893724
- type: main_score
value: 43.5888671875
task:
type: Classification
---
A fast BERT model for computing sentence embeddings in Russian. The model is based on [cointegrated/rubert-tiny2](https://huggingface.co/cointegrated/rubert-tiny2) and has the same context size (2048), embedding dimension (312), and speed.
## Usage
```python
from sentence_transformers import SentenceTransformer, util
model = SentenceTransformer('sergeyzh/rubert-tiny-turbo')
sentences = ["привет мир", "hello world", "здравствуй вселенная"]
embeddings = model.encode(sentences)
print(util.dot_score(embeddings, embeddings))
```
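The model can also be used with plain `transformers`. The sketch below assumes mean pooling over token embeddings followed by L2 normalization; the pooling actually configured in the model's sentence-transformers config is authoritative.
```python
import torch
from transformers import AutoTokenizer, AutoModel

# Sketch without sentence-transformers; mean pooling is an assumption here.
tokenizer = AutoTokenizer.from_pretrained('sergeyzh/rubert-tiny-turbo')
model = AutoModel.from_pretrained('sergeyzh/rubert-tiny-turbo')

sentences = ["привет мир", "hello world", "здравствуй вселенная"]
batch = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

with torch.no_grad():
    token_embeddings = model(**batch).last_hidden_state  # (batch, seq_len, 312)

# Mean pooling over non-padding tokens, then L2 normalization
mask = batch['attention_mask'].unsqueeze(-1).float()
embeddings = (token_embeddings * mask).sum(1) / mask.sum(1).clamp(min=1e-9)
embeddings = torch.nn.functional.normalize(embeddings, dim=1)
print(embeddings @ embeddings.T)  # pairwise cosine similarities
```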
## Metrics
Model scores on the [encodechka](https://github.com/avidale/encodechka) benchmark:
| model | CPU | GPU | size | Mean S | Mean S+W | dim |
|:-----------------------------------|----------:|---------:|---------:|----------:|-----------:|-------:|
| [sergeyzh/LaBSE-ru-turbo](https://huggingface.co/sergeyzh/LaBSE-ru-turbo) | 120.40 | 8.05 | 490 | 0.789 | 0.702 | 768 |
| BAAI/bge-m3 | 523.40 | 22.50 | 2166 | 0.787 | 0.696 | 1024 |
| intfloat/multilingual-e5-large | 506.80 | 30.80 | 2136 | 0.780 | 0.686 | 1024 |
| intfloat/multilingual-e5-base | 130.61 | 14.39 | 1061 | 0.761 | 0.669 | 768 |
| **sergeyzh/rubert-tiny-turbo** | 5.51 | 3.25 | 111 | 0.749 | 0.667 | 312 |
| intfloat/multilingual-e5-small | 40.86 | 12.09 | 449 | 0.742 | 0.645 | 384 |
| cointegrated/rubert-tiny2 | 5.51 | 3.25 | 111 | 0.704 | 0.638 | 312 |
| model | STS | PI | NLI | SA | TI | IA | IC | ICX | NE1 | NE2 |
|:-----------------------------------|:---------|:---------|:---------|:---------|:---------|:---------|:---------|:---------|:---------|:---------|
| [sergeyzh/LaBSE-ru-turbo](https://huggingface.co/sergeyzh/LaBSE-ru-turbo) | 0.864 | 0.748 | 0.490 | 0.814 | 0.974 | 0.806 | 0.815 | 0.801 | 0.305 | 0.404 |
| BAAI/bge-m3 | 0.864 | 0.749 | 0.510 | 0.819 | 0.973 | 0.792 | 0.809 | 0.783 | 0.240 | 0.422 |
| intfloat/multilingual-e5-large | 0.862 | 0.727 | 0.473 | 0.810 | 0.979 | 0.798 | 0.819 | 0.773 | 0.224 | 0.374 |
| intfloat/multilingual-e5-base | 0.835 | 0.704 | 0.459 | 0.796 | 0.964 | 0.783 | 0.802 | 0.738 | 0.235 | 0.376 |
| **sergeyzh/rubert-tiny-turbo** | 0.828 | 0.722 | 0.476 | 0.787 | 0.955 | 0.757 | 0.780 | 0.685 | 0.305 | 0.373 |
| intfloat/multilingual-e5-small | 0.822 | 0.714 | 0.457 | 0.758 | 0.957 | 0.761 | 0.779 | 0.691 | 0.234 | 0.275 |
| cointegrated/rubert-tiny2 | 0.750 | 0.651 | 0.417 | 0.737 | 0.937 | 0.746 | 0.757 | 0.638 | 0.360 | 0.386 |
Model scores on the [ruMTEB](https://habr.com/ru/companies/sberdevices/articles/831150/) benchmark:
|Model Name | Metric | sbert_large_ mt_nlu_ru | sbert_large_ nlu_ru | rubert-tiny2 | rubert-tiny-turbo | multilingual-e5-small | multilingual-e5-base | multilingual-e5-large |
|:----------------------------------|:--------------------|-----------------------:|--------------------:|----------------:|------------------:|----------------------:|---------------------:|----------------------:|
|CEDRClassification | Accuracy | 0.368 | 0.358 | 0.369 | 0.390 | 0.401 | 0.423 | **0.448** |
|GeoreviewClassification | Accuracy | 0.397 | 0.400 | 0.396 | 0.414 | 0.447 | 0.461 | **0.497** |
|GeoreviewClusteringP2P | V-measure | 0.584 | 0.590 | 0.442 | 0.597 | 0.586 | 0.545 | **0.605** |
|HeadlineClassification | Accuracy | 0.772 | **0.793** | 0.742 | 0.686 | 0.732 | 0.757 | 0.758 |
|InappropriatenessClassification | Accuracy | **0.646** | 0.625 | 0.586 | 0.591 | 0.592 | 0.588 | 0.616 |
|KinopoiskClassification | Accuracy | 0.503 | 0.495 | 0.491 | 0.505 | 0.500 | 0.509 | **0.566** |
|RiaNewsRetrieval | NDCG@10 | 0.214 | 0.111 | 0.140 | 0.513 | 0.700 | 0.702 | **0.807** |
|RuBQReranking | MAP@10 | 0.561 | 0.468 | 0.461 | 0.622 | 0.715 | 0.720 | **0.756** |
|RuBQRetrieval | NDCG@10 | 0.298 | 0.124 | 0.109 | 0.517 | 0.685 | 0.696 | **0.741** |
|RuReviewsClassification | Accuracy | 0.589 | 0.583 | 0.570 | 0.607 | 0.612 | 0.630 | **0.653** |
|RuSTSBenchmarkSTS | Pearson correlation | 0.712 | 0.588 | 0.694 | 0.787 | 0.781 | 0.796 | **0.831** |
|RuSciBenchGRNTIClassification | Accuracy | 0.542 | 0.539 | 0.456 | 0.529 | 0.550 | 0.563 | **0.582** |
|RuSciBenchGRNTIClusteringP2P | V-measure | **0.522** | 0.504 | 0.414 | 0.481 | 0.511 | 0.516 | 0.520 |
|RuSciBenchOECDClassification | Accuracy | 0.438 | 0.430 | 0.355 | 0.415 | 0.427 | 0.423 | **0.445** |
|RuSciBenchOECDClusteringP2P | V-measure | **0.473** | 0.464 | 0.381 | 0.411 | 0.443 | 0.448 | 0.450 |
|SensitiveTopicsClassification | Accuracy | **0.285** | 0.280 | 0.220 | 0.244 | 0.228 | 0.234 | 0.257 |
|TERRaClassification | Average Precision | 0.520 | 0.502 | 0.519 | 0.563 | 0.551 | 0.550 | **0.584** |
|Model Name | Metric | sbert_large_ mt_nlu_ru | sbert_large_ nlu_ru | rubert-tiny2 | rubert-tiny-turbo | multilingual-e5-small | multilingual-e5-base | multilingual-e5-large |
|:----------------------------------|:--------------------|-----------------------:|--------------------:|----------------:|------------------:|----------------------:|----------------------:|---------------------:|
|Classification | Accuracy | 0.554 | 0.552 | 0.514 | 0.535 | 0.551 | 0.561 | **0.588** |
|Clustering | V-measure | **0.526** | 0.519 | 0.412 | 0.496 | 0.513 | 0.503 | 0.525 |
|MultiLabelClassification | Accuracy | 0.326 | 0.319 | 0.294 | 0.317 | 0.314 | 0.329 | **0.353** |
|PairClassification | Average Precision | 0.520 | 0.502 | 0.519 | 0.563 | 0.551 | 0.550 | **0.584** |
|Reranking | MAP@10 | 0.561 | 0.468 | 0.461 | 0.622 | 0.715 | 0.720 | **0.756** |
|Retrieval | NDCG@10 | 0.256 | 0.118 | 0.124 | 0.515 | 0.697 | 0.699 | **0.774** |
|STS | Pearson correlation | 0.712 | 0.588 | 0.694 | 0.787 | 0.781 | 0.796 | **0.831** |
|Average | Average | 0.494 | 0.438 | 0.431 | 0.548 | 0.588 | 0.594 | **0.630** |
|
Xenova/all-MiniLM-L6-v2 | Xenova | "2024-10-25T17:45:15Z" | 75,682 | 50 | transformers.js | [
"transformers.js",
"onnx",
"bert",
"feature-extraction",
"base_model:sentence-transformers/all-MiniLM-L6-v2",
"base_model:quantized:sentence-transformers/all-MiniLM-L6-v2",
"region:us"
] | feature-extraction | "2023-05-02T22:46:15Z" | ---
base_model: sentence-transformers/all-MiniLM-L6-v2
library_name: transformers.js
---
https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2 with ONNX weights to be compatible with Transformers.js.
## Usage (Transformers.js)
If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@huggingface/transformers) using:
```bash
npm i @huggingface/transformers
```
You can then use the model to compute embeddings like this:
```js
import { pipeline } from '@huggingface/transformers';
// Create a feature-extraction pipeline
const extractor = await pipeline('feature-extraction', 'Xenova/all-MiniLM-L6-v2');
// Compute sentence embeddings
const sentences = ['This is an example sentence', 'Each sentence is converted'];
const output = await extractor(sentences, { pooling: 'mean', normalize: true });
console.log(output);
// Tensor {
// dims: [ 2, 384 ],
// type: 'float32',
// data: Float32Array(768) [ 0.04592696577310562, 0.07328180968761444, ... ],
// size: 768
// }
```
You can convert this Tensor to a nested JavaScript array using `.tolist()`:
```js
console.log(output.tolist());
// [
// [ 0.04592696577310562, 0.07328180968761444, 0.05400655046105385, ... ],
// [ 0.08188057690858841, 0.10760223120450974, -0.013241755776107311, ... ]
// ]
```
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
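As a rough sketch of that export step (assuming the `optimum` package with its ONNX Runtime extra is installed; the model id and output path below are placeholders), the conversion can be done with 🤗 Optimum like this:
```python
# Sketch only: model id and output directory are placeholders.
from optimum.onnxruntime import ORTModelForFeatureExtraction

# export=True converts the PyTorch checkpoint to ONNX on the fly
ort_model = ORTModelForFeatureExtraction.from_pretrained(
    "sentence-transformers/all-MiniLM-L6-v2", export=True
)
# Save the exported graph into an `onnx` subfolder, mirroring this repo's layout
ort_model.save_pretrained("all-MiniLM-L6-v2-web/onnx")
```
 |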
timm/eca_halonext26ts.c1_in1k | timm | "2023-04-26T16:09:44Z" | 75,645 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2110.00476",
"arxiv:2103.12731",
"license:apache-2.0",
"region:us"
] | image-classification | "2023-04-26T16:09:32Z" | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for eca_halonext26ts.c1_in1k
A HaloNet image classification model (with Efficient channel attention, based on ResNeXt architecture). Trained on ImageNet-1k in `timm` by Ross Wightman.
NOTE: this model did not adhere to any specific paper configuration; it was tuned for reasonable training times and a reduced frequency of self-attention blocks.
Recipe details:
* Based on [ResNet Strikes Back](https://arxiv.org/abs/2110.00476) `C` recipes
* SGD (w/ Nesterov) optimizer and AGC (adaptive gradient clipping).
* Cosine LR schedule with warmup
This model architecture is implemented using `timm`'s flexible [BYOBNet (Bring-Your-Own-Blocks Network)](https://github.com/huggingface/pytorch-image-models/blob/main/timm/models/byobnet.py).
BYOB (with BYOANet attention specific blocks) allows configuration of:
* block / stage layout
* block-type interleaving
* stem layout
* output stride (dilation)
* activation and norm layers
* channel and spatial / self-attention layers
...and also includes `timm` features common to many other architectures (see the sketch after this list), including:
* stochastic depth
* gradient checkpointing
* layer-wise LR decay
* per-stage feature extraction
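A minimal sketch of enabling a few of these at model creation time (the argument values below are illustrative, not the recipe used for this checkpoint):
```python
import timm

# Stochastic depth is exposed via drop_path_rate (illustrative value)
model = timm.create_model('eca_halonext26ts.c1_in1k', pretrained=True, drop_path_rate=0.1)

# Gradient checkpointing trades compute for activation memory during training
model.set_grad_checkpointing(True)

# Per-stage feature extraction limited to selected stages
backbone = timm.create_model(
    'eca_halonext26ts.c1_in1k',
    pretrained=True,
    features_only=True,
    out_indices=(2, 3, 4),
)
```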
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 10.8
- GMACs: 2.4
- Activations (M): 11.5
- Image size: 256 x 256
- **Papers:**
- Scaling Local Self-Attention for Parameter Efficient Visual Backbones: https://arxiv.org/abs/2103.12731
- ResNet strikes back: An improved training procedure in timm: https://arxiv.org/abs/2110.00476
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('eca_halonext26ts.c1_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'eca_halonext26ts.c1_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 128, 128])
# torch.Size([1, 256, 64, 64])
# torch.Size([1, 512, 32, 32])
# torch.Size([1, 1024, 16, 16])
# torch.Size([1, 2048, 8, 8])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'eca_halonext26ts.c1_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2048, 8, 8) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@article{Vaswani2021ScalingLS,
title={Scaling Local Self-Attention for Parameter Efficient Visual Backbones},
author={Ashish Vaswani and Prajit Ramachandran and A. Srinivas and Niki Parmar and Blake A. Hechtman and Jonathon Shlens},
journal={2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2021},
pages={12889-12899}
}
```
```bibtex
@inproceedings{wightman2021resnet,
title={ResNet strikes back: An improved training procedure in timm},
author={Wightman, Ross and Touvron, Hugo and Jegou, Herve},
booktitle={NeurIPS 2021 Workshop on ImageNet: Past, Present, and Future}
}
```
|
city96/t5-v1_1-xxl-encoder-bf16 | city96 | "2024-04-26T19:16:01Z" | 75,456 | 20 | transformers | [
"transformers",
"safetensors",
"t5",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | "2024-04-26T17:35:59Z" | A single-safetensor version of Google's T5 v1.1 XXL encoder model in bfloat16 precision.
Intended to be used with text-to-image models such as PixArt.
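A minimal loading sketch with `transformers` is shown below; the tokenizer is taken from the original `google/t5-v1_1-xxl` repo, which is an assumption (this repo ships the encoder weights), and text-to-image pipelines such as PixArt typically wire this encoder in for you.
```python
import torch
from transformers import AutoTokenizer, T5EncoderModel

# Tokenizer from the original Google repo (assumption: only encoder weights live here)
tokenizer = AutoTokenizer.from_pretrained("google/t5-v1_1-xxl")
encoder = T5EncoderModel.from_pretrained(
    "city96/t5-v1_1-xxl-encoder-bf16", torch_dtype=torch.bfloat16
)

inputs = tokenizer("a watercolor painting of a fox", return_tensors="pt")
with torch.no_grad():
    text_embeddings = encoder(**inputs).last_hidden_state  # (1, seq_len, 4096)
```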
|
timm/sebotnet33ts_256.a1h_in1k | timm | "2023-04-26T16:12:15Z" | 75,322 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2110.00476",
"arxiv:2101.11605",
"license:apache-2.0",
"region:us"
] | image-classification | "2023-04-26T16:12:04Z" | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for sebotnet33ts_256.a1h_in1k
A BotNet image classification model (with Squeeze-and-Excitation channel attention, based on ResNet architecture). Trained on ImageNet-1k in `timm` by Ross Wightman.
NOTE: this model did not adhere to any specific paper configuration; it was tuned for reasonable training times and a reduced frequency of self-attention blocks.
Recipe details:
* Based on [ResNet Strikes Back](https://arxiv.org/abs/2110.00476) `A1` recipe
* LAMB optimizer
* Stronger dropout, stochastic depth, and RandAugment than paper `A1` recipe
* Cosine LR schedule with warmup
This model architecture is implemented using `timm`'s flexible [BYOBNet (Bring-Your-Own-Blocks Network)](https://github.com/huggingface/pytorch-image-models/blob/main/timm/models/byobnet.py).
BYOB (with BYOANet attention specific blocks) allows configuration of:
* block / stage layout
* block-type interleaving
* stem layout
* output stride (dilation)
* activation and norm layers
* channel and spatial / self-attention layers
...and also includes `timm` features common to many other architectures, including:
* stochastic depth
* gradient checkpointing
* layer-wise LR decay
* per-stage feature extraction
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 13.7
- GMACs: 3.9
- Activations (M): 17.5
- Image size: 256 x 256
- **Papers:**
- Bottleneck Transformers for Visual Recognition: https://arxiv.org/abs/2101.11605
- ResNet strikes back: An improved training procedure in timm: https://arxiv.org/abs/2110.00476
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('sebotnet33ts_256.a1h_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'sebotnet33ts_256.a1h_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 32, 128, 128])
# torch.Size([1, 256, 64, 64])
# torch.Size([1, 512, 32, 32])
# torch.Size([1, 1024, 16, 16])
# torch.Size([1, 1280, 8, 8])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'sebotnet33ts_256.a1h_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1280, 8, 8) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@article{Srinivas2021BottleneckTF,
title={Bottleneck Transformers for Visual Recognition},
author={A. Srinivas and Tsung-Yi Lin and Niki Parmar and Jonathon Shlens and P. Abbeel and Ashish Vaswani},
journal={2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2021},
pages={16514-16524}
}
```
```bibtex
@inproceedings{wightman2021resnet,
title={ResNet strikes back: An improved training procedure in timm},
author={Wightman, Ross and Touvron, Hugo and Jegou, Herve},
booktitle={NeurIPS 2021 Workshop on ImageNet: Past, Present, and Future}
}
```
|
timm/gernet_l.idstcv_in1k | timm | "2024-02-10T23:34:36Z" | 75,252 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2006.14090",
"license:apache-2.0",
"region:us"
] | image-classification | "2023-03-22T07:15:08Z" | ---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
---
# Model card for gernet_l.idstcv_in1k
A GENet (GPU-Efficient-Networks) image classification model. Trained on ImageNet-1k by paper authors.
This model architecture is implemented using `timm`'s flexible [BYOBNet (Bring-Your-Own-Blocks Network)](https://github.com/huggingface/pytorch-image-models/blob/main/timm/models/byobnet.py).
BYOBNet allows configuration of:
* block / stage layout
* stem layout
* output stride (dilation)
* activation and norm layers
* channel and spatial / self-attention layers
...and also includes `timm` features common to many other architectures, including:
* stochastic depth
* gradient checkpointing
* layer-wise LR decay (see the fine-tuning sketch after this list)
* per-stage feature extraction
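Layer-wise LR decay is typically applied when fine-tuning, giving earlier stages smaller learning rates than later ones. A minimal sketch, assuming a recent `timm` version in which `timm.optim.create_optimizer_v2` accepts a `layer_decay` argument; the optimizer settings are illustrative, not tuned values:
```python
import timm
from timm.optim import create_optimizer_v2

model = timm.create_model('gernet_l.idstcv_in1k', pretrained=True)

# per-parameter-group learning rates decayed geometrically from head to stem
optimizer = create_optimizer_v2(
    model,
    opt='adamw',
    lr=1e-3,
    weight_decay=0.05,
    layer_decay=0.75,  # illustrative decay factor
)
```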
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 31.1
- GMACs: 4.6
- Activations (M): 8.0
- Image size: 256 x 256
- **Papers:**
- Neural Architecture Design for GPU-Efficient Networks: https://arxiv.org/abs/2006.14090
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/idstcv/GPU-Efficient-Networks
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('gernet_l.idstcv_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'gernet_l.idstcv_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 32, 128, 128])
# torch.Size([1, 128, 64, 64])
# torch.Size([1, 192, 32, 32])
# torch.Size([1, 640, 16, 16])
# torch.Size([1, 2560, 8, 8])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'gernet_l.idstcv_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2560, 8, 8) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@misc{lin2020neural,
title={Neural Architecture Design for GPU-Efficient Networks},
author={Ming Lin and Hesen Chen and Xiuyu Sun and Qi Qian and Hao Li and Rong Jin},
year={2020},
eprint={2006.14090},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
|
timm/selecsls42b.in1k | timm | "2023-04-25T00:28:59Z" | 75,221 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:1907.00837",
"license:cc-by-4.0",
"region:us"
] | image-classification | "2023-04-25T00:28:28Z" | ---
tags:
- image-classification
- timm
library_name: timm
license: cc-by-4.0
datasets:
- imagenet-1k
---
# Model card for selecsls42b.in1k
A SelecSLS image classification model. Trained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 32.5
- GMACs: 3.0
- Activations (M): 4.6
- Image size: 224 x 224
- **Papers:**
- XNect: Real-time Multi-Person 3D Motion Capture with a Single RGB Camera: https://arxiv.org/abs/1907.00837
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/mehtadushy/SelecSLS-Pytorch
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('selecsls42b.in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'selecsls42b.in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 32, 112, 112])
# torch.Size([1, 128, 56, 56])
# torch.Size([1, 288, 28, 28])
# torch.Size([1, 480, 14, 14])
# torch.Size([1, 1024, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'selecsls42b.in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1024, 4, 4) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@inproceedings{XNect_SIGGRAPH2020,
author = {Mehta, Dushyant and Sotnychenko, Oleksandr and Mueller, Franziska and Xu, Weipeng and Elgharib, Mohamed and Fua, Pascal and Seidel, Hans-Peter and Rhodin, Helge and Pons-Moll, Gerard and Theobalt, Christian},
title = {{XNect}: Real-time Multi-Person {3D} Motion Capture with a Single {RGB} Camera},
journal = {ACM Transactions on Graphics},
url = {http://gvv.mpi-inf.mpg.de/projects/XNect/},
numpages = {17},
volume={39},
number={4},
month = July,
year = {2020},
doi={10.1145/3386569.3392410}
}
```
|
nanaaaa/BilingualChildEmo | nanaaaa | "2024-03-18T09:03:26Z" | 75,067 | 7 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"arxiv:1910.09700",
"doi:10.57967/hf/1912",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-03-05T11:30:32Z" | ---
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
timm/volo_d1_224.sail_in1k | timm | "2024-02-10T23:44:24Z" | 75,058 | 1 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2106.13112",
"license:apache-2.0",
"region:us"
] | image-classification | "2023-04-13T05:51:34Z" | ---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
---
# Model card for volo_d1_224.sail_in1k
A VOLO (Vision Outlooker) image classification model. Trained on ImageNet-1k with token labelling by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 26.6
- GMACs: 6.9
- Activations (M): 24.4
- Image size: 224 x 224
- **Papers:**
- VOLO: Vision Outlooker for Visual Recognition: https://arxiv.org/abs/2106.13112
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/sail-sg/volo
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('volo_d1_224.sail_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
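Several images can also be stacked into a single batch and run without autograd bookkeeping. A small sketch reusing `model` and `transforms` from above; the second image is the same one, purely for illustration:
```python
import torch

batch = torch.stack([transforms(img), transforms(img)])  # shape (2, 3, 224, 224)

with torch.inference_mode():  # no gradients needed for inference
    logits = model(batch)     # shape (2, 1000) for ImageNet-1k

probs = logits.softmax(dim=1)
top5_probs, top5_idx = torch.topk(probs, k=5, dim=1)
```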
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'volo_d1_224.sail_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 197, 384) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Citation
```bibtex
@article{yuan2022volo,
title={Volo: Vision outlooker for visual recognition},
author={Yuan, Li and Hou, Qibin and Jiang, Zihang and Feng, Jiashi and Yan, Shuicheng},
journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
year={2022},
publisher={IEEE}
}
```
|
timm/gmlp_s16_224.ra3_in1k | timm | "2024-02-10T23:36:17Z" | 74,985 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2105.08050",
"license:apache-2.0",
"region:us"
] | image-classification | "2023-03-27T23:01:08Z" | ---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
---
# Model card for gmlp_s16_224.ra3_in1k
A gMLP image classification model. Trained on ImageNet-1k in `timm` by Ross Wightman.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 19.4
- GMACs: 4.4
- Activations (M): 15.1
- Image size: 224 x 224
- **Papers:**
- Pay Attention to MLPs: https://arxiv.org/abs/2105.08050
- **Original:** https://github.com/huggingface/pytorch-image-models
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('gmlp_s16_224.ra3_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'gmlp_s16_224.ra3_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 196, 256) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@inproceedings{Liu2021PayAT,
title={Pay Attention to MLPs},
author={Hanxiao Liu and Zihang Dai and David R. So and Quoc V. Le},
booktitle={Neural Information Processing Systems},
year={2021}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
vblagoje/bert-english-uncased-finetuned-pos | vblagoje | "2021-05-20T08:51:26Z" | 74,941 | 38 | transformers | [
"transformers",
"pytorch",
"jax",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2022-03-02T23:29:05Z" | Entry not found |
timm/tf_mixnet_l.in1k | timm | "2023-04-27T21:50:04Z" | 74,905 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:1907.09595",
"license:apache-2.0",
"region:us"
] | image-classification | "2022-12-13T00:21:35Z" | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for tf_mixnet_l.in1k
A MixNet image classification model. Trained on ImageNet-1k in TensorFlow by paper authors, ported to PyTorch by Ross Wightman.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 7.3
- GMACs: 0.6
- Activations (M): 10.8
- Image size: 224 x 224
- **Papers:**
- MixConv: Mixed Depthwise Convolutional Kernels: https://arxiv.org/abs/1907.09595
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('tf_mixnet_l.in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'tf_mixnet_l.in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 32, 112, 112])
# torch.Size([1, 40, 56, 56])
# torch.Size([1, 56, 28, 28])
# torch.Size([1, 160, 14, 14])
# torch.Size([1, 264, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'tf_mixnet_l.in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1536, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@misc{tan2019mixconv,
title={MixConv: Mixed Depthwise Convolutional Kernels},
author={Mingxing Tan and Quoc V. Le},
year={2019},
eprint={1907.09595},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
timm/resnext101_32x16d.fb_swsl_ig1b_ft_in1k | timm | "2024-02-10T23:41:25Z" | 74,862 | 1 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"arxiv:1905.00546",
"arxiv:1611.05431",
"arxiv:1512.03385",
"license:cc-by-nc-4.0",
"region:us"
] | image-classification | "2023-04-05T19:13:21Z" | ---
license: cc-by-nc-4.0
library_name: timm
tags:
- image-classification
- timm
---
# Model card for resnext101_32x16d.fb_swsl_ig1b_ft_in1k
A ResNeXt-B image classification model.
This model features (see the inspection sketch below):
* ReLU activations
* single layer 7x7 convolution with pooling
* 1x1 convolution shortcut downsample
* grouped 3x3 bottleneck convolutions
Pretrained on the Instagram-1B hashtags dataset using semi-weakly supervised learning and fine-tuned on ImageNet-1k by paper authors.
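These design choices can be checked directly on the instantiated module tree. A quick sketch; printing the whole network is verbose, so only the stem and the first bottleneck block are shown:
```python
import timm

model = timm.create_model('resnext101_32x16d.fb_swsl_ig1b_ft_in1k', pretrained=False)

print(model.conv1)      # single 7x7 stride-2 convolution stem
print(model.maxpool)    # pooling after the stem
print(model.layer1[0])  # bottleneck block: grouped 3x3 convolution and 1x1 shortcut downsample
```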
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 194.0
- GMACs: 36.3
- Activations (M): 51.2
- Image size: 224 x 224
- **Papers:**
- Billion-scale semi-supervised learning for image classification: https://arxiv.org/abs/1905.00546
- Aggregated Residual Transformations for Deep Neural Networks: https://arxiv.org/abs/1611.05431
- Deep Residual Learning for Image Recognition: https://arxiv.org/abs/1512.03385
- **Original:** https://github.com/facebookresearch/semi-supervised-ImageNet1K-models
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('resnext101_32x16d.fb_swsl_ig1b_ft_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'resnext101_32x16d.fb_swsl_ig1b_ft_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 112, 112])
# torch.Size([1, 256, 56, 56])
# torch.Size([1, 512, 28, 28])
# torch.Size([1, 1024, 14, 14])
# torch.Size([1, 2048, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'resnext101_32x16d.fb_swsl_ig1b_ft_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2048, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
|model |img_size|top1 |top5 |param_count|gmacs|macts|img/sec|
|------------------------------------------|--------|-----|-----|-----------|-----|-----|-------|
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|320 |86.72|98.17|93.6 |35.2 |69.7 |451 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|288 |86.51|98.08|93.6 |28.5 |56.4 |560 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|288 |86.49|98.03|93.6 |28.5 |56.4 |557 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|224 |85.96|97.82|93.6 |17.2 |34.2 |923 |
|[resnext101_32x32d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x32d.fb_wsl_ig1b_ft_in1k)|224 |85.11|97.44|468.5 |87.3 |91.1 |254 |
|[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|416 |85.0 |97.12|191.9 |108.4|213.8|134 |
|[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|352 |84.96|97.22|102.1 |50.2 |101.2|291 |
|[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|320 |84.73|97.18|102.1 |41.5 |83.7 |353 |
|[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|384 |84.71|96.99|164.0 |77.6 |154.7|183 |
|[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|288 |84.57|97.08|93.6 |28.5 |56.4 |557 |
|[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|320 |84.45|97.08|93.2 |31.5 |67.8 |446 |
|[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|352 |84.43|96.97|129.9 |51.1 |105.5|280 |
|[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|288 |84.36|96.92|93.6 |27.6 |53.0 |595 |
|[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|320 |84.35|97.04|66.8 |24.1 |47.7 |610 |
|[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|288 |84.3 |96.94|164.0 |43.7 |87.1 |333 |
|[resnext101_32x8d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_swsl_ig1b_ft_in1k)|224 |84.28|97.17|88.8 |16.5 |31.2 |1100 |
|[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|320 |84.24|96.86|191.9 |64.2 |126.6|228 |
|[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|288 |84.19|96.87|93.6 |27.2 |51.6 |613 |
|[resnext101_32x16d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_wsl_ig1b_ft_in1k)|224 |84.18|97.19|194.0 |36.3 |51.2 |581 |
|[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|288 |84.11|97.11|44.6 |15.1 |29.0 |1144 |
|[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|320 |83.97|96.82|64.7 |31.2 |67.3 |518 |
|[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|256 |83.87|96.75|93.2 |20.2 |43.4 |692 |
|[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|224 |83.86|96.65|93.6 |17.2 |34.2 |923 |
|[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|320 |83.72|96.61|86.6 |24.3 |48.1 |617 |
|[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|256 |83.69|96.78|66.8 |15.4 |30.6 |943 |
|[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|224 |83.68|96.61|93.6 |16.7 |32.0 |986 |
|[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|320 |83.67|96.74|60.2 |24.1 |47.7 |706 |
|[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|256 |83.59|96.61|129.9 |27.1 |55.8 |526 |
|[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|224 |83.58|96.4 |93.6 |16.5 |31.2 |1013 |
|[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|224 |83.54|96.83|44.6 |9.1 |17.6 |1864 |
|[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|288 |83.46|96.54|60.2 |19.1 |37.3 |904 |
|[resnext101_32x16d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_swsl_ig1b_ft_in1k)|224 |83.35|96.85|194.0 |36.3 |51.2 |582 |
|[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|256 |83.23|96.53|64.7 |20.0 |43.1 |809 |
|[resnext101_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_swsl_ig1b_ft_in1k)|224 |83.22|96.75|44.2 |8.0 |21.2 |1814 |
|[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|288 |83.16|96.38|83.5 |25.7 |51.6 |590 |
|[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|256 |83.14|96.38|60.2 |15.4 |30.5 |1096 |
|[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|320 |83.02|96.45|44.6 |16.5 |34.8 |992 |
|[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|288 |82.98|96.54|44.6 |13.4 |28.2 |1077 |
|[resnext101_64x4d.tv_in1k](https://huggingface.co/timm/resnext101_64x4d.tv_in1k)|224 |82.98|96.25|83.5 |15.5 |31.2 |989 |
|[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|256 |82.86|96.28|86.6 |15.6 |30.8 |951 |
|[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|224 |82.83|96.22|88.8 |16.5 |31.2 |1099 |
|[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|224 |82.8 |96.13|60.2 |11.6 |22.6 |1486 |
|[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|288 |82.8 |96.32|44.6 |13.0 |26.8 |1291 |
|[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|288 |82.74|95.71|60.2 |19.1 |37.3 |905 |
|[resnext101_32x8d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_wsl_ig1b_ft_in1k)|224 |82.69|96.63|88.8 |16.5 |31.2 |1100 |
|[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|288 |82.62|95.75|60.2 |19.1 |37.3 |904 |
|[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|288 |82.61|96.49|25.6 |8.9 |20.6 |1729 |
|[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|288 |82.53|96.13|36.8 |9.9 |21.5 |1773 |
|[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|224 |82.5 |96.02|126.9 |22.8 |21.2 |1078 |
|[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|224 |82.46|95.92|83.5 |15.5 |31.2 |987 |
|[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|288 |82.36|96.18|35.7 |8.1 |20.9 |1964 |
|[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|320 |82.35|96.14|25.6 |8.8 |24.1 |1386 |
|[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|288 |82.31|95.63|44.6 |13.0 |26.8 |1291 |
|[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|288 |82.29|96.01|63.6 |13.6 |28.5 |1078 |
|[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|224 |82.29|96.0 |60.2 |11.6 |22.6 |1484 |
|[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|288 |82.27|96.06|68.9 |18.9 |23.8 |1176 |
|[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|256 |82.26|96.07|44.6 |10.6 |22.2 |1542 |
|[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|288 |82.24|95.73|44.6 |13.0 |26.8 |1290 |
|[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|288 |82.2 |96.14|27.6 |7.0 |23.8 |1547 |
|[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|224 |82.18|96.05|44.6 |8.1 |17.1 |1771 |
|[resnext50_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_swsl_ig1b_ft_in1k)|224 |82.17|96.22|25.0 |4.3 |14.4 |2943 |
|[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|288 |82.12|95.65|25.6 |7.1 |19.6 |1704 |
|[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|288 |82.03|95.94|25.0 |7.0 |23.8 |1745 |
|[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|288 |82.0 |96.15|24.9 |5.8 |12.7 |1787 |
|[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|256 |81.99|95.85|36.8 |7.8 |17.0 |2230 |
|[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|176 |81.98|95.72|88.8 |10.3 |19.4 |1768 |
|[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|224 |81.97|95.24|60.2 |11.6 |22.6 |1486 |
|[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|224 |81.93|95.75|44.6 |7.8 |16.2 |2122 |
|[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|224 |81.9 |95.77|44.6 |7.8 |16.2 |2118 |
|[resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k)|224 |81.84|96.1 |194.0 |36.3 |51.2 |583 |
|[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|256 |81.78|95.94|35.7 |6.4 |16.6 |2471 |
|[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|224 |81.77|95.22|60.2 |11.6 |22.6 |1485 |
|[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|224 |81.74|96.06|25.6 |5.4 |12.4 |2813 |
|[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|288 |81.65|95.54|25.6 |7.1 |19.6 |1703 |
|[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|288 |81.64|95.88|25.6 |7.2 |19.7 |1694 |
|[resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k)|224 |81.62|96.04|88.8 |16.5 |31.2 |1101 |
|[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|224 |81.61|95.76|68.9 |11.4 |14.4 |1930 |
|[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|288 |81.61|95.83|25.6 |8.5 |19.2 |1868 |
|[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|224 |81.5 |95.16|44.6 |7.8 |16.2 |2125 |
|[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|288 |81.48|95.16|25.0 |7.0 |23.8 |1745 |
|[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|288 |81.47|95.71|25.9 |6.9 |18.6 |2071 |
|[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|224 |81.45|95.53|68.9 |11.4 |14.4 |1929 |
|[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|288 |81.44|95.22|25.6 |7.2 |19.7 |1908 |
|[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|256 |81.44|95.67|25.6 |5.6 |15.4 |2168 |
|[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|288 |81.4 |95.82|30.2 |6.8 |13.9 |2132 |
|[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|288 |81.37|95.74|25.6 |7.2 |19.7 |1910 |
|[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|224 |81.32|95.19|44.6 |7.8 |16.2 |2125 |
|[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|288 |81.3 |95.65|28.1 |6.8 |18.4 |1803 |
|[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|288 |81.3 |95.11|25.0 |7.0 |23.8 |1746 |
|[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|224 |81.27|95.62|27.6 |4.3 |14.4 |2591 |
|[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|224 |81.26|95.16|25.6 |4.3 |11.8 |2823 |
|[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|288 |81.23|95.54|15.7 |4.8 |19.6 |2117 |
|[senet154.gluon_in1k](https://huggingface.co/timm/senet154.gluon_in1k)|224 |81.23|95.35|115.1 |20.8 |38.7 |545 |
|[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|288 |81.22|95.11|25.6 |6.8 |18.4 |2089 |
|[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|288 |81.22|95.63|25.6 |6.8 |18.4 |676 |
|[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|288 |81.18|95.09|25.6 |7.2 |19.7 |1908 |
|[resnet50.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet50.fb_swsl_ig1b_ft_in1k)|224 |81.18|95.98|25.6 |4.1 |11.1 |3455 |
|[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|224 |81.17|95.34|25.0 |4.3 |14.4 |2933 |
|[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|224 |81.1 |95.33|25.0 |4.3 |14.4 |2934 |
|[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|288 |81.1 |95.23|28.1 |6.8 |18.4 |1801 |
|[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|288 |81.1 |95.12|28.1 |6.8 |18.4 |1799 |
|[resnet152s.gluon_in1k](https://huggingface.co/timm/resnet152s.gluon_in1k)|224 |81.02|95.41|60.3 |12.9 |25.0 |1347 |
|[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|288 |80.97|95.44|25.6 |6.8 |18.4 |2085 |
|[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|256 |80.94|95.45|25.9 |5.4 |14.7 |2571 |
|[resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.93|95.73|44.2 |8.0 |21.2 |1814 |
|[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|288 |80.91|95.55|25.6 |6.8 |18.4 |2084 |
|[seresnext101_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_32x4d.gluon_in1k)|224 |80.9 |95.31|49.0 |8.0 |21.3 |1585 |
|[seresnext101_64x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_64x4d.gluon_in1k)|224 |80.9 |95.3 |88.2 |15.5 |31.2 |918 |
|[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|288 |80.86|95.52|25.6 |6.8 |18.4 |2085 |
|[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|224 |80.85|95.43|25.6 |4.1 |11.1 |3450 |
|[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|224 |80.84|95.02|25.6 |4.3 |11.8 |2821 |
|[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|224 |80.79|95.62|24.9 |3.5 |7.7 |2961 |
|[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|288 |80.79|95.36|19.8 |6.0 |14.8 |2506 |
|[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|288 |80.79|95.58|19.9 |4.2 |10.6 |2349 |
|[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|288 |80.78|94.99|25.6 |6.8 |18.4 |2088 |
|[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|288 |80.71|95.43|25.6 |6.8 |18.4 |2087 |
|[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|288 |80.7 |95.39|25.0 |7.0 |23.8 |1749 |
|[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|192 |80.69|95.24|63.6 |6.0 |12.7 |2270 |
|[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|224 |80.68|94.71|25.6 |4.4 |11.9 |3162 |
|[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|288 |80.68|95.36|19.7 |6.0 |14.8 |2637 |
|[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|224 |80.67|95.3 |25.6 |4.1 |11.1 |3452 |
|[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|288 |80.67|95.42|25.0 |7.4 |25.1 |1626 |
|[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|224 |80.63|95.21|25.6 |5.2 |11.6 |3034 |
|[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|224 |80.61|95.32|25.6 |4.4 |11.9 |2813 |
|[resnext101_64x4d.gluon_in1k](https://huggingface.co/timm/resnext101_64x4d.gluon_in1k)|224 |80.61|94.99|83.5 |15.5 |31.2 |989 |
|[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|288 |80.6 |95.31|19.9 |6.0 |14.8 |2578 |
|[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|256 |80.57|95.17|15.7 |3.8 |15.5 |2710 |
|[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|224 |80.56|95.0 |60.2 |11.6 |22.6 |1483 |
|[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|224 |80.53|95.16|25.6 |4.4 |11.9 |3164 |
|[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|224 |80.53|94.46|25.0 |4.3 |14.4 |2930 |
|[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|176 |80.48|94.98|126.9 |14.3 |13.2 |1719 |
|[resnet152d.gluon_in1k](https://huggingface.co/timm/resnet152d.gluon_in1k)|224 |80.47|95.2 |60.2 |11.8 |23.4 |1428 |
|[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|288 |80.45|95.32|25.6 |6.8 |18.4 |2086 |
|[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|224 |80.45|95.24|30.2 |4.1 |8.4 |3530 |
|[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|224 |80.45|94.63|25.0 |4.3 |14.4 |2936 |
|[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|176 |80.43|95.09|68.9 |7.3 |9.0 |3015 |
|[resnet101d.gluon_in1k](https://huggingface.co/timm/resnet101d.gluon_in1k)|224 |80.42|95.01|44.6 |8.1 |17.0 |2007 |
|[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|224 |80.38|94.6 |25.6 |4.1 |11.1 |3461 |
|[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|256 |80.36|95.1 |19.8 |4.8 |11.7 |3267 |
|[resnext101_32x4d.gluon_in1k](https://huggingface.co/timm/resnext101_32x4d.gluon_in1k)|224 |80.34|94.93|44.2 |8.0 |21.2 |1814 |
|[resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.32|95.4 |25.0 |4.3 |14.4 |2941 |
|[resnet101s.gluon_in1k](https://huggingface.co/timm/resnet101s.gluon_in1k)|224 |80.28|95.16|44.7 |9.2 |18.6 |1851 |
|[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|224 |80.26|95.08|28.1 |4.1 |11.1 |2972 |
|[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|288 |80.24|95.24|25.6 |8.5 |19.9 |1523 |
|[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|224 |80.22|94.63|25.6 |4.4 |11.9 |3162 |
|[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|176 |80.2 |94.64|60.2 |7.2 |14.0 |2346 |
|[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|224 |80.08|94.74|28.1 |4.1 |11.1 |2969 |
|[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|256 |80.08|94.97|19.7 |4.8 |11.7 |3284 |
|[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|256 |80.06|94.99|19.9 |4.8 |11.7 |3216 |
|[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|224 |80.06|94.95|25.6 |4.1 |11.1 |1109 |
|[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|224 |80.02|94.71|28.1 |4.1 |11.1 |2962 |
|[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|288 |79.97|95.05|25.6 |6.8 |18.4 |2086 |
|[resnet152c.gluon_in1k](https://huggingface.co/timm/resnet152c.gluon_in1k)|224 |79.92|94.84|60.2 |11.8 |23.4 |1455 |
|[seresnext50_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext50_32x4d.gluon_in1k)|224 |79.91|94.82|27.6 |4.3 |14.4 |2591 |
|[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|224 |79.91|94.67|25.6 |4.1 |11.1 |3456 |
|[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|176 |79.9 |94.6 |44.6 |4.9 |10.1 |3341 |
|[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|224 |79.89|94.97|35.7 |4.5 |12.1 |2774 |
|[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|224 |79.88|94.87|25.6 |4.1 |11.1 |3455 |
|[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|320 |79.86|95.07|16.0 |5.2 |16.4 |2168 |
|[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|224 |79.85|94.56|25.6 |4.1 |11.1 |3460 |
|[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|288 |79.83|94.97|25.6 |6.8 |18.4 |2087 |
|[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|224 |79.82|94.62|44.6 |7.8 |16.2 |2114 |
|[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|224 |79.76|94.6 |25.0 |4.3 |14.4 |2943 |
|[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|224 |79.74|94.95|25.6 |4.1 |11.1 |3455 |
|[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|224 |79.74|94.87|19.9 |2.5 |6.4 |3929 |
|[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|288 |79.71|94.83|19.7 |6.0 |14.8 |2710 |
|[resnet152.gluon_in1k](https://huggingface.co/timm/resnet152.gluon_in1k)|224 |79.68|94.74|60.2 |11.6 |22.6 |1486 |
|[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|224 |79.67|94.87|25.0 |4.5 |15.2 |2729 |
|[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|288 |79.63|94.91|25.6 |6.8 |18.4 |2086 |
|[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|224 |79.56|94.72|25.6 |4.3 |11.8 |2805 |
|[resnet101c.gluon_in1k](https://huggingface.co/timm/resnet101c.gluon_in1k)|224 |79.53|94.58|44.6 |8.1 |17.0 |2062 |
|[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|224 |79.52|94.61|25.6 |4.1 |11.1 |3459 |
|[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|176 |79.42|94.64|25.6 |2.6 |6.9 |5397 |
|[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|288 |79.4 |94.66|18.0 |5.9 |14.6 |2752 |
|[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|224 |79.38|94.57|25.6 |4.1 |11.1 |3459 |
|[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|176 |79.37|94.3 |25.0 |2.7 |9.0 |4577 |
|[resnext50_32x4d.gluon_in1k](https://huggingface.co/timm/resnext50_32x4d.gluon_in1k)|224 |79.36|94.43|25.0 |4.3 |14.4 |2942 |
|[resnext101_32x8d.tv_in1k](https://huggingface.co/timm/resnext101_32x8d.tv_in1k)|224 |79.31|94.52|88.8 |16.5 |31.2 |1100 |
|[resnet101.gluon_in1k](https://huggingface.co/timm/resnet101.gluon_in1k)|224 |79.31|94.53|44.6 |7.8 |16.2 |2125 |
|[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|224 |79.31|94.63|25.6 |5.2 |12.0 |2524 |
|[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|176 |79.27|94.49|25.6 |2.6 |6.9 |5404 |
|[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|224 |79.25|94.31|25.0 |4.3 |14.4 |2931 |
|[resnet50.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet50.fb_ssl_yfcc100m_ft_in1k)|224 |79.22|94.84|25.6 |4.1 |11.1 |3451 |
|[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|256 |79.21|94.56|19.7 |4.8 |11.7 |3392 |
|[resnet50d.gluon_in1k](https://huggingface.co/timm/resnet50d.gluon_in1k)|224 |79.07|94.48|25.6 |4.4 |11.9 |3162 |
|[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|224 |79.03|94.38|25.6 |4.1 |11.1 |3453 |
|[resnet50.am_in1k](https://huggingface.co/timm/resnet50.am_in1k)|224 |79.01|94.39|25.6 |4.1 |11.1 |3461 |
|[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|256 |79.01|94.37|18.0 |4.6 |11.6 |3440 |
|[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|256 |78.9 |94.54|16.0 |3.4 |10.5 |3421 |
|[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|160 |78.89|94.11|60.2 |5.9 |11.5 |2745 |
|[wide_resnet101_2.tv_in1k](https://huggingface.co/timm/wide_resnet101_2.tv_in1k)|224 |78.84|94.28|126.9 |22.8 |21.2 |1079 |
|[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|288 |78.83|94.24|16.8 |4.5 |16.8 |2251 |
|[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|224 |78.81|94.32|25.6 |4.1 |11.1 |3454 |
|[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|288 |78.74|94.33|16.8 |4.5 |16.7 |2264 |
|[resnet50s.gluon_in1k](https://huggingface.co/timm/resnet50s.gluon_in1k)|224 |78.72|94.23|25.7 |5.5 |13.5 |2796 |
|[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|224 |78.71|94.24|25.6 |4.4 |11.9 |3154 |
|[wide_resnet50_2.tv_in1k](https://huggingface.co/timm/wide_resnet50_2.tv_in1k)|224 |78.47|94.09|68.9 |11.4 |14.4 |1934 |
|[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|224 |78.46|94.27|25.6 |4.1 |11.1 |3454 |
|[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|288 |78.43|94.35|21.8 |6.5 |7.5 |3291 |
|[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|288 |78.42|94.04|10.5 |3.1 |13.3 |3226 |
|[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|320 |78.33|94.13|16.0 |5.2 |16.4 |2391 |
|[resnet152.tv_in1k](https://huggingface.co/timm/resnet152.tv_in1k)|224 |78.32|94.04|60.2 |11.6 |22.6 |1487 |
|[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|288 |78.28|94.1 |10.4 |3.1 |13.3 |3062 |
|[bat_resnext26ts.ch_in1k](https://huggingface.co/timm/bat_resnext26ts.ch_in1k)|256 |78.25|94.1 |10.7 |2.5 |12.5 |3393 |
|[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|224 |78.06|93.78|25.6 |4.1 |11.1 |3450 |
|[resnet50c.gluon_in1k](https://huggingface.co/timm/resnet50c.gluon_in1k)|224 |78.0 |93.99|25.6 |4.4 |11.9 |3286 |
|[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|288 |78.0 |93.91|10.3 |3.1 |13.3 |3297 |
|[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|224 |77.98|93.75|16.8 |2.7 |10.1 |3841 |
|[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|288 |77.92|93.77|21.8 |6.1 |6.2 |3609 |
|[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|160 |77.88|93.71|44.6 |4.0 |8.3 |3926 |
|[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|256 |77.87|93.84|16.0 |3.4 |10.5 |3772 |
|[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|256 |77.86|93.79|10.4 |2.4 |10.5 |4263 |
|[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|160 |77.82|93.81|35.7 |2.3 |6.2 |5238 |
|[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|256 |77.81|93.82|10.5 |2.4 |10.5 |4183 |
|[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|160 |77.79|93.6 |25.6 |2.2 |6.0 |5329 |
|[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|160 |77.73|93.32|25.0 |2.2 |7.4 |5576 |
|[resnext50_32x4d.tv_in1k](https://huggingface.co/timm/resnext50_32x4d.tv_in1k)|224 |77.61|93.7 |25.0 |4.3 |14.4 |2944 |
|[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|224 |77.59|93.61|16.8 |2.7 |10.2 |3807 |
|[resnet50.gluon_in1k](https://huggingface.co/timm/resnet50.gluon_in1k)|224 |77.58|93.72|25.6 |4.1 |11.1 |3455 |
|[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|256 |77.44|93.56|10.3 |2.4 |10.5 |4284 |
|[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|288 |77.41|93.63|16.0 |4.3 |13.5 |2907 |
|[resnet101.tv_in1k](https://huggingface.co/timm/resnet101.tv_in1k)|224 |77.38|93.54|44.6 |7.8 |16.2 |2125 |
|[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|160 |77.22|93.27|25.6 |2.2 |6.1 |5982 |
|[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|288 |77.17|93.47|10.3 |3.1 |13.3 |3392 |
|[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|288 |77.15|93.27|21.8 |6.1 |6.2 |3615 |
|[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|224 |77.1 |93.37|21.8 |3.9 |4.5 |5436 |
|[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|224 |77.02|93.07|28.1 |4.1 |11.1 |2952 |
|[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|256 |76.78|93.13|10.3 |2.4 |10.5 |4410 |
|[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|224 |76.7 |93.17|16.0 |2.6 |8.2 |4859 |
|[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|288 |76.5 |93.35|21.8 |6.1 |6.2 |3617 |
|[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|224 |76.42|92.87|21.8 |3.7 |3.7 |5984 |
|[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|288 |76.35|93.18|16.0 |3.9 |12.2 |3331 |
|[resnet50.tv_in1k](https://huggingface.co/timm/resnet50.tv_in1k)|224 |76.13|92.86|25.6 |4.1 |11.1 |3457 |
|[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|160 |75.96|92.5 |25.6 |2.1 |5.7 |6490 |
|[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|224 |75.52|92.44|21.8 |3.7 |3.7 |5991 |
|[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|224 |75.3 |92.58|16.0 |2.4 |7.4 |5583 |
|[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|224 |75.16|92.18|21.8 |3.7 |3.7 |5994 |
|[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|160 |75.1 |92.08|28.1 |2.1 |5.7 |5513 |
|[resnet34.gluon_in1k](https://huggingface.co/timm/resnet34.gluon_in1k)|224 |74.57|91.98|21.8 |3.7 |3.7 |5984 |
|[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|288 |73.81|91.83|11.7 |3.4 |5.4 |5196 |
|[resnet34.tv_in1k](https://huggingface.co/timm/resnet34.tv_in1k)|224 |73.32|91.42|21.8 |3.7 |3.7 |5979 |
|[resnet18.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet18.fb_swsl_ig1b_ft_in1k)|224 |73.28|91.73|11.7 |1.8 |2.5 |10213 |
|[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|288 |73.16|91.03|11.7 |3.0 |4.1 |6050 |
|[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|224 |72.98|91.11|21.8 |3.7 |3.7 |5967 |
|[resnet18.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet18.fb_ssl_yfcc100m_ft_in1k)|224 |72.6 |91.42|11.7 |1.8 |2.5 |10213 |
|[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|288 |72.37|90.59|11.7 |3.0 |4.1 |6051 |
|[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|224 |72.26|90.31|10.1 |1.7 |5.8 |7026 |
|[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|224 |72.26|90.68|11.7 |2.1 |3.3 |8707 |
|[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|224 |71.49|90.07|11.7 |1.8 |2.5 |10187 |
|[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|176 |71.31|89.69|10.1 |1.1 |3.6 |10970 |
|[resnet18.gluon_in1k](https://huggingface.co/timm/resnet18.gluon_in1k)|224 |70.84|89.76|11.7 |1.8 |2.5 |10210 |
|[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|224 |70.64|89.47|11.7 |1.8 |2.5 |10194 |
|[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|160 |70.56|89.52|21.8 |1.9 |1.9 |10737 |
|[resnet18.tv_in1k](https://huggingface.co/timm/resnet18.tv_in1k)|224 |69.76|89.07|11.7 |1.8 |2.5 |10205 |
|[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|224 |68.34|88.03|5.4 |1.1 |2.4 |13079 |
|[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|224 |68.25|88.17|11.7 |1.8 |2.5 |10167 |
|[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|176 |66.71|86.96|5.4 |0.7 |1.5 |20327 |
|[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|160 |65.66|86.26|11.7 |0.9 |1.3 |18229 |
## Citation
```bibtex
@misc{yalniz2019billionscale,
title={Billion-scale semi-supervised learning for image classification},
author={I. Zeki Yalniz and Hervé Jégou and Kan Chen and Manohar Paluri and Dhruv Mahajan},
year={2019},
eprint={1905.00546},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
```bibtex
@article{Xie2016,
title={Aggregated Residual Transformations for Deep Neural Networks},
author={Saining Xie and Ross Girshick and Piotr Dollár and Zhuowen Tu and Kaiming He},
journal={arXiv preprint arXiv:1611.05431},
year={2016}
}
```
```bibtex
@article{He2015,
author = {Kaiming He and Xiangyu Zhang and Shaoqing Ren and Jian Sun},
title = {Deep Residual Learning for Image Recognition},
journal = {arXiv preprint arXiv:1512.03385},
year = {2015}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
timm/vit_base_patch16_224.mae | timm | "2024-02-09T18:01:09Z" | 74,845 | 1 | timm | [
"timm",
"pytorch",
"safetensors",
"image-feature-extraction",
"arxiv:2111.06377",
"arxiv:2010.11929",
"license:cc-by-nc-4.0",
"region:us"
] | image-feature-extraction | "2023-05-09T20:19:39Z" | ---
license: cc-by-nc-4.0
library_name: timm
tags:
- image-feature-extraction
- timm
---
# Model card for vit_base_patch16_224.mae
A Vision Transformer (ViT) image feature model. Pretrained on ImageNet-1k with the self-supervised Masked Autoencoder (MAE) method.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 85.8
- GMACs: 17.6
- Activations (M): 23.9
- Image size: 224 x 224
- **Papers:**
- Masked Autoencoders Are Scalable Vision Learners: https://arxiv.org/abs/2111.06377
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
- **Pretrain Dataset:** ImageNet-1k
- **Original:** https://github.com/facebookresearch/mae
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('vit_base_patch16_224.mae', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'vit_base_patch16_224.mae',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 197, 768) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@Article{MaskedAutoencoders2021,
author = {Kaiming He and Xinlei Chen and Saining Xie and Yanghao Li and Piotr Doll{\'a}r and Ross Girshick},
journal = {arXiv:2111.06377},
title = {Masked Autoencoders Are Scalable Vision Learners},
year = {2021},
}
```
```bibtex
@article{dosovitskiy2020vit,
title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
journal={ICLR},
year={2021}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
hyunwoongko/kobart | hyunwoongko | "2022-08-16T20:01:59Z" | 74,816 | 7 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"ko",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2022-03-02T23:29:05Z" | ---
language: ko
tags:
- bart
license: mit
---
## KoBART-base-v2
With the addition of chat data, the model is trained to handle the semantics of longer sequences than the original KoBART.
```python
from transformers import PreTrainedTokenizerFast, BartModel
tokenizer = PreTrainedTokenizerFast.from_pretrained('hyunwoongko/kobart')
model = BartModel.from_pretrained('hyunwoongko/kobart')
```
### Performance
NSMC
- acc. : 0.901
### hyunwoongko/kobart
- Added a bos/eos post processor
- Removed `token_type_ids` (see the sketch below)
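A quick way to check both changes (a minimal sketch; the exact special tokens depend on the tokenizer configuration):
```python
from transformers import PreTrainedTokenizerFast
tokenizer = PreTrainedTokenizerFast.from_pretrained('hyunwoongko/kobart')
encoded = tokenizer("안녕하세요.")
print(encoded.keys())                          # no 'token_type_ids' key is returned
print(tokenizer.decode(encoded["input_ids"]))  # bos/eos tokens added by the post processor
```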
|
timm/twins_pcpvt_base.in1k | timm | "2023-04-23T23:21:45Z" | 74,624 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2104.13840",
"license:apache-2.0",
"region:us"
] | image-classification | "2023-04-23T23:21:13Z" | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for twins_pcpvt_base.in1k
A Twins-PCPVT image classification model. Trained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 43.8
- GMACs: 6.7
- Activations (M): 25.2
- Image size: 224 x 224
- **Papers:**
- Twins: Revisiting the Design of Spatial Attention in Vision Transformers: https://arxiv.org/abs/2104.13840
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/Meituan-AutoML/Twins
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('twins_pcpvt_base.in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'twins_pcpvt_base.in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 49, 512) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@inproceedings{chu2021Twins,
title={Twins: Revisiting the Design of Spatial Attention in Vision Transformers},
author={Xiangxiang Chu and Zhi Tian and Yuqing Wang and Bo Zhang and Haibing Ren and Xiaolin Wei and Huaxia Xia and Chunhua Shen},
booktitle={NeurIPS 2021},
url={https://openreview.net/forum?id=5kTlVBkzSRx},
year={2021}
}
```
|
pszemraj/flan-t5-large-grammar-synthesis | pszemraj | "2024-10-09T04:00:39Z" | 74,623 | 83 | transformers | [
"transformers",
"pytorch",
"onnx",
"safetensors",
"gguf",
"t5",
"text2text-generation",
"grammar",
"spelling",
"punctuation",
"error-correction",
"grammar synthesis",
"FLAN",
"dataset:jfleg",
"arxiv:2107.06751",
"doi:10.57967/hf/0138",
"license:cc-by-nc-sa-4.0",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2022-11-26T02:40:52Z" | ---
languages:
- en
license:
- cc-by-nc-sa-4.0
- apache-2.0
tags:
- grammar
- spelling
- punctuation
- error-correction
- grammar synthesis
- FLAN
datasets:
- jfleg
widget:
- text: "There car broke down so their hitching a ride to they're class."
example_title: "compound-1"
- text: "i can has cheezburger"
example_title: "cheezburger"
- text: "so em if we have an now so with fito ringina know how to estimate the tren given the ereafte mylite trend we can also em an estimate is nod s
i again tort watfettering an we have estimated the trend an
called wot to be called sthat of exty right now we can and look at
wy this should not hare a trend i becan we just remove the trend an and we can we now estimate
tesees ona effect of them exty"
example_title: "Transcribed Audio Example 2"
- text: "My coworker said he used a financial planner to help choose his stocks so he wouldn't loose money."
example_title: "incorrect word choice (context)"
- text: "good so hve on an tadley i'm not able to make it to the exla session on monday this week e which is why i am e recording pre recording
an this excelleision and so to day i want e to talk about two things and first of all em i wont em wene give a summary er about
ta ohow to remove trents in these nalitives from time series"
example_title: "lowercased audio transcription output"
- text: "Frustrated, the chairs took me forever to set up."
example_title: "dangling modifier"
- text: "I would like a peice of pie."
example_title: "miss-spelling"
- text: "Which part of Zurich was you going to go hiking in when we were there for the first time together? ! ?"
example_title: "chatbot on Zurich"
- text: "Most of the course is about semantic or content of language but there are also interesting topics to be learned from the servicefeatures except statistics in characters in documents. At this point, Elvthos introduces himself as his native English speaker and goes on to say that if you continue to work on social scnce,"
example_title: "social science ASR summary output"
- text: "they are somewhat nearby right yes please i'm not sure how the innish is tepen thut mayyouselect one that istatte lo variants in their property e ere interested and anyone basical e may be applyind reaching the browing approach were"
example_title: "medical course audio transcription"
parameters:
max_length: 128
min_length: 4
num_beams: 8
repetition_penalty: 1.21
length_penalty: 1
early_stopping: True
---
# grammar-synthesis-large: FLAN-t5
<a href="https://colab.research.google.com/gist/pszemraj/5dc89199a631a9c6cfd7e386011452a0/demo-flan-t5-large-grammar-synthesis.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
A fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) for grammar correction on an expanded version of the [JFLEG](https://paperswithcode.com/dataset/jfleg) dataset. [Demo](https://huggingface.co/spaces/pszemraj/FLAN-grammar-correction) on HF spaces.
## Example
![example](https://i.imgur.com/PIhrc7E.png)
Compare vs. the original [grammar-synthesis-large](https://huggingface.co/pszemraj/grammar-synthesis-large).
---
## usage in Python
> There's a colab notebook that already has this basic version implemented (_click on the Open in Colab button_)
After `pip install transformers` run the following code:
```python
from transformers import pipeline
corrector = pipeline(
'text2text-generation',
'pszemraj/flan-t5-large-grammar-synthesis',
)
raw_text = 'i can has cheezburger'
results = corrector(raw_text)
print(results)
```
**For Batch Inference:** see [this discussion thread](https://huggingface.co/pszemraj/flan-t5-large-grammar-synthesis/discussions/1) for details, but essentially the dataset consists of several sentences at a time, so I'd recommend running inference **in the same fashion:** batches of roughly 64-96 tokens (or 2-3 sentences split with a regex)
- it is also helpful to **first** check whether or not a given sentence needs grammar correction before using the text2text model. You can do this with BERT-type models fine-tuned on CoLA like `textattack/roberta-base-CoLA` (a minimal sketch of this combination follows below)
- I made a notebook demonstrating batch inference [here](https://colab.research.google.com/gist/pszemraj/6e961b08970f98479511bb1e17cdb4f0/batch-grammar-check-correct-demo.ipynb)
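As a rough illustration, the check-then-correct flow could look like this (a minimal sketch; the chunking regex and the assumption that `LABEL_1` means "grammatically acceptable" should be verified against the CoLA checkpoint you use):
```python
import re
from transformers import pipeline
checker = pipeline("text-classification", "textattack/roberta-base-CoLA")
corrector = pipeline("text2text-generation", "pszemraj/flan-t5-large-grammar-synthesis")
def correct_if_needed(text: str) -> str:
    # split into rough 2-sentence chunks with a simple regex
    sentences = re.split(r"(?<=[.!?])\s+", text)
    chunks = [" ".join(sentences[i:i + 2]) for i in range(0, len(sentences), 2)]
    out = []
    for chunk in chunks:
        verdict = checker(chunk)[0]
        if verdict["label"] == "LABEL_1":  # assumed: already acceptable, skip correction
            out.append(chunk)
        else:
            out.append(corrector(chunk)[0]["generated_text"])
    return " ".join(out)
print(correct_if_needed("There car broke down so their hitching a ride to they're class."))
```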
---
## Model description
The intent is to create a text2text language model that successfully completes "single-shot grammar correction" on potentially grammatically incorrect text **that could have a lot of mistakes**, with the important qualifier that **it does not semantically change text/information that IS grammatically correct.**
Compare some of the heavier-error examples on [other grammar correction models](https://huggingface.co/models?dataset=dataset:jfleg) to see the difference :)
### ONNX Checkpoint
This model has been converted to ONNX and can be loaded/used with huggingface's `optimum` library.
You first need to [install optimum](https://huggingface.co/docs/optimum/installation)
```bash
pip install optimum[onnxruntime]
# ^ if you want to use a different runtime read their docs
```
load with the optimum `pipeline`
```python
from optimum.pipelines import pipeline
corrector_model_name = "pszemraj/flan-t5-large-grammar-synthesis"
corrector = pipeline(
"text2text-generation", model=corrector_model_name, accelerator="ort"
)
# use as normal
```
### Other checkpoints
If trading a slight decrease in grammatical correction quality for faster inference speed makes sense for your use case, check out the **[base](https://huggingface.co/pszemraj/grammar-synthesis-base)** and **[small](https://huggingface.co/pszemraj/grammar-synthesis-small)** checkpoints fine-tuned from the relevant t5 checkpoints.
## Limitations
- dataset: `cc-by-nc-sa-4.0`
- model: `apache-2.0`
- this is **still a work-in-progress** and while probably useful for "single-shot grammar correction" in a lot of cases, **give the outputs a glance for correctness ok?**
## Use Cases
Obviously, this section is quite general as there are many things one can use "general single-shot grammar correction" for. Some ideas or use cases:
1. Correcting highly error-prone LM outputs. Some examples would be audio transcription (ASR) (this is literally some of the examples) or something like handwriting OCR.
- To be investigated further: depending on what model/system is used, it _might_ be worth applying this after OCR on typed characters.
2. Correcting/infilling text generated by text generation models to be cohesive/remove obvious errors that break the conversation immersion. I use this on the outputs of [this OPT 2.7B chatbot-esque model of myself](https://huggingface.co/pszemraj/opt-peter-2.7B).
> An example of this model running on CPU with beam search:
```
Original response:
ive heard it attributed to a bunch of different philosophical schools, including stoicism, pragmatism, existentialism and even some forms of post-structuralism. i think one of the most interesting (and most difficult) philosophical problems is trying to let dogs (or other animals) out of cages. the reason why this is a difficult problem is because it seems to go against our grain (so to
synthesizing took 306.12 seconds
Final response in 1294.857 s:
I've heard it attributed to a bunch of different philosophical schools, including solipsism, pragmatism, existentialism and even some forms of post-structuralism. i think one of the most interesting (and most difficult) philosophical problems is trying to let dogs (or other animals) out of cages. the reason why this is a difficult problem is because it seems to go against our grain (so to speak)
```
_Note: that I have some other logic that removes any periods at the end of the final sentence in this chatbot setting [to avoid coming off as passive aggressive](https://www.npr.org/2020/09/05/909969004/before-texting-your-kid-make-sure-to-double-check-your-punctuation)_
3. Somewhat related to #2 above, fixing/correcting so-called [tortured-phrases](https://arxiv.org/abs/2107.06751) that are dead giveaways text was generated by a language model. _Note that _SOME_ of these are not fixed, especially as they venture into domain-specific terminology (e.g. irregular timberland instead of Random Forest)._
---
## Citation info
If you find this fine-tuned model useful in your work, please consider citing it :)
```
@misc {peter_szemraj_2022,
author = { {Peter Szemraj} },
title = { flan-t5-large-grammar-synthesis (Revision d0b5ae2) },
year = 2022,
url = { https://huggingface.co/pszemraj/flan-t5-large-grammar-synthesis },
doi = { 10.57967/hf/0138 },
publisher = { Hugging Face }
}
``` |
VAGOsolutions/SauerkrautLM-Mixtral-8x7B-Instruct | VAGOsolutions | "2024-04-29T22:56:24Z" | 74,617 | 22 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"mistral",
"finetune",
"dpo",
"Instruct",
"augmentation",
"german",
"moe",
"conversational",
"en",
"de",
"fr",
"it",
"es",
"dataset:argilla/distilabel-math-preference-dpo",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-12-15T16:01:09Z" | ---
license: apache-2.0
language:
- en
- de
- fr
- it
- es
library_name: transformers
pipeline_tag: text-generation
tags:
- mistral
- finetune
- dpo
- Instruct
- augmentation
- german
- mixtral
- moe
datasets:
- argilla/distilabel-math-preference-dpo
---
![SauerkrautLM](https://vago-solutions.ai/wp-content/uploads/2024/02/Sauerkraut_Instruct_MoE_Instruct.png "SauerkrautLM-Mixtral-8x7B")
## VAGO solutions SauerkrautLM-Mixtral-8x7B-Instruct
Introducing **SauerkrautLM-Mixtral-8x7B-Instruct** – our Sauerkraut version of the powerful Mixtral-8x7B-Instruct!
Aligned with **DPO**
# Table of Contents
1. [Overview of all SauerkrautLM-Mixtral models](#all-sauerkrautlm-mixtral-models)
2. [Model Details](#model-details)
- [Prompt template](#prompt-template)
- [Training Dataset](#training-dataset)
- [Data Contamination Test](#data-contamination-test-results)
3. [Evaluation](#evaluation)
4. [Disclaimer](#disclaimer)
5. [Contact](#contact)
6. [Collaborations](#collaborations)
7. [Acknowledgement](#acknowledgement)
## All SauerkrautLM-Mixtral Models
| Model | HF | GPTQ | GGUF | AWQ |
|-------|-------|-------|-------|-------|
| SauerkrautLM-Mixtral-8x7B-Instruct | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-Mixtral-8x7B-Instruct) | [Link](https://huggingface.co/TheBloke/SauerkrautLM-Mixtral-8x7B-Instruct-GPTQ) | [Link](https://huggingface.co/TheBloke/SauerkrautLM-Mixtral-8x7B-Instruct-GGUF) | [Link](https://huggingface.co/TheBloke/SauerkrautLM-Mixtral-8x7B-Instruct-AWQ) |
| SauerkrautLM-Mixtral-8x7B | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-Mixtral-8x7B) | [Link](https://huggingface.co/TheBloke/SauerkrautLM-Mixtral-8x7B-GPTQ) | [Link](https://huggingface.co/TheBloke/SauerkrautLM-Mixtral-8x7B-GGUF) | [Link](https://huggingface.co/TheBloke/SauerkrautLM-Mixtral-8x7B-AWQ) |
## Model Details
**SauerkrautLM-Mixtral-8x7B-Instruct**
- **Model Type:** SauerkrautLM-Mixtral-8x7B-Instruct-v0.1 is a Mixture of Experts (MoE) Model based on [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
- **Language(s):** English, German, French, Italian, Spanish
- **License:** APACHE 2.0
- **Contact:** [Website](https://vago-solutions.de/#Kontakt) [David Golchinfar](mailto:golchinfar@vago-solutions.de)
### Training Dataset:
SauerkrautLM-Mixtral-8x7B-Instruct was trained with a mix of German data augmentation and translated data.
Aligned through **DPO** with our **new German SauerkrautLM-DPO dataset** based on parts of the SFT SauerkrautLM dataset
as chosen answers and [Sauerkraut-7b-HerO](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-HerO) as rejected answers. We added translated parts of **[HuggingFaceH4/ultrafeedback_binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized)** (our dataset does not contain any TruthfulQA prompts - check the Data Contamination Test Results) and **[argilla/distilabel-math-preference-dpo](https://huggingface.co/datasets/argilla/distilabel-math-preference-dpo).**
We found that simply translating training data can lead to unnatural German phrasings.
Data augmentation techniques were used to ensure grammatical and syntactical correctness and more natural German wording in our training data.
### Data Contamination Test Results
Some models on the Hugging Face leaderboard had problems with benchmark data leaking into their training data.
We checked our SauerkrautLM-DPO dataset with a special test [1] on a smaller model for this problem.
The HuggingFace team used the same methods [2, 3].
Our results, with `result < 0.1, %:` being well below 0.9, indicate that our dataset is free from contamination.
*The data contamination test results of HellaSwag and Winograde will be added once [1] supports them.*
| Dataset | ARC | MMLU | TruthfulQA | GSM8K |
|------------------------------|-------|-------|-------|-------|
| **SauerkrautLM-DPO**| result < 0.1, %: 0.0 |result < 0.1, %: 0.09 | result < 0.1, %: 0.13 | result < 0.1, %: 0.16 |
[1] https://github.com/swj0419/detect-pretrain-code-contamination
[2] https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/474#657f2245365456e362412a06
[3] https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/265#657b6debf81f6b44b8966230
### Prompt Template:
```
<s> [INST] Instruction [/INST] Model answer</s> [INST] Follow-up instruction [/INST]
```
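As a rough illustration, this template can be applied via the standard `transformers` chat workflow (a minimal sketch; it assumes the tokenizer ships Mixtral's instruct chat template, that sufficient GPU memory is available, and the sampling settings are arbitrary):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "VAGOsolutions/SauerkrautLM-Mixtral-8x7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")
# assumed: the tokenizer's chat template produces the [INST] ... [/INST] format shown above
messages = [{"role": "user", "content": "Erkläre kurz, was ein Mixture-of-Experts-Modell ist."}]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```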
## Evaluation
![Harness](https://vago-solutions.de/wp-content/uploads/2023/12/MOE_Instruct.png "SauerkrautLM-Mixtral-8x7B-Instruct Harness")
*Evaluated with lm-evaluation-harness v0.3.0; MMLU results coming soon.
*All benchmarks were performed with a sliding window of 4096. New benchmarks with the sliding window set to null are coming soon.
**German RAG LLM Evaluation**
corrected result after FIX: https://github.com/huggingface/lighteval/pull/171
```
| Task |Version|Metric|Value| |Stderr|
|------------------------------------------------------|------:|------|----:|---|-----:|
|all | |acc |0.975|± |0.0045|
|community:german_rag_eval:_average:0 | |acc |0.975|± |0.0045|
|community:german_rag_eval:choose_context_by_question:0| 0|acc |0.953|± |0.0067|
|community:german_rag_eval:choose_question_by_context:0| 0|acc |0.998|± |0.0014|
|community:german_rag_eval:context_question_match:0 | 0|acc |0.975|± |0.0049|
|community:german_rag_eval:question_answer_match:0 | 0|acc |0.974|± |0.0050|
```
## Disclaimer
We must inform users that despite our best efforts in data cleansing, the possibility of uncensored content slipping through cannot be entirely ruled out.
However, we cannot guarantee consistently appropriate behavior. Therefore, if you encounter any issues or come across inappropriate content, we kindly request that you inform us through the contact information provided.
Additionally, it is essential to understand that the licensing of these models does not constitute legal advice. We are not held responsible for the actions of third parties who utilize our models. These models may be employed for commercial purposes, and the Apache 2.0 license remains applicable and is included with the model files.
## Contact
If you are interested in customized LLMs for business applications, please get in contact with us via our website or contact us at [Dr. Daryoush Vaziri](mailto:vaziri@vago-solutions.de). We are also grateful for your feedback and suggestions.
## Collaborations
We are also keenly seeking support and investment for our startup, VAGO solutions, where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us.
## Acknowledgement
Many thanks to [argilla](https://huggingface.co/datasets/argilla) and [Huggingface](https://huggingface.co) for providing such valuable datasets to the Open-Source community. And of course a big thanks to MistralAI for providing the open source community with their latest technology! |
timm/nest_base_jx.goog_in1k | timm | "2023-04-23T23:11:41Z" | 74,566 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2105.12723",
"license:apache-2.0",
"region:us"
] | image-classification | "2023-04-23T23:10:38Z" | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for nest_base_jx.goog_in1k
A NesT image classification model. Trained on ImageNet-1k by paper authors in JAX. Ported to PyTorch by Alexander Soare.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 67.7
- GMACs: 18.0
- Activations (M): 53.4
- Image size: 224 x 224
- **Papers:**
- Nested Hierarchical Transformer: Towards Accurate, Data-Efficient and Interpretable Visual Understanding: https://arxiv.org/abs/2105.12723
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/google-research/nested-transformer
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('nest_base_jx.goog_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'nest_base_jx.goog_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 128, 56, 56])
# torch.Size([1, 256, 28, 28])
# torch.Size([1, 512, 14, 14])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'nest_base_jx.goog_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 512, 14, 14) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@inproceedings{zhang2021aggregating,
title={Nested Hierarchical Transformer: Towards Accurate, Data-Efficient and Interpretable Visual Understanding},
author={Zizhao Zhang and Han Zhang and Long Zhao and Ting Chen and Sercan Ö. Arık and Tomas Pfister},
booktitle={AAAI Conference on Artificial Intelligence (AAAI)},
year={2022}
}
```
|
kha-white/manga-ocr-base | kha-white | "2022-06-22T15:34:05Z" | 74,530 | 122 | transformers | [
"transformers",
"pytorch",
"vision-encoder-decoder",
"image-text-to-text",
"image-to-text",
"ja",
"dataset:manga109s",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-to-text | "2022-03-02T23:29:05Z" | ---
language: ja
tags:
- image-to-text
license: apache-2.0
datasets:
- manga109s
---
# Manga OCR
Optical character recognition for Japanese text, with the main focus being Japanese manga.
It uses the [Vision Encoder Decoder](https://huggingface.co/docs/transformers/model_doc/vision-encoder-decoder) framework.
Manga OCR can be used as a general-purpose printed Japanese OCR, but its main goal is to provide high-quality
text recognition, robust against various scenarios specific to manga:
- both vertical and horizontal text
- text with furigana
- text overlaid on images
- wide variety of fonts and font styles
- low quality images
Code is available [here](https://github.com/kha-white/manga_ocr).
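As a rough illustration, the companion `manga-ocr` package from the linked repository wraps this model behind a small API (a minimal sketch based on that repository's README; the image path is a placeholder):
```python
# pip install manga-ocr
from manga_ocr import MangaOcr
mocr = MangaOcr()                      # downloads and wraps this model on first use
text = mocr("example_manga_page.png")  # also accepts a PIL.Image.Image
print(text)
```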
|
Qwen/Qwen2.5-7B-Instruct-GPTQ-Int4 | Qwen | "2024-10-18T02:59:11Z" | 74,515 | 9 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chat",
"conversational",
"en",
"arxiv:2309.00071",
"arxiv:2407.10671",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] | text-generation | "2024-09-17T12:51:51Z" | ---
base_model: Qwen/Qwen2.5-7B-Instruct
language:
- en
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-7B-Instruct-GPTQ-Int4/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- chat
---
# Qwen2.5-7B-Instruct-GPTQ-Int4
## Introduction
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:
- Significantly **more knowledge** and has greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- **Long-context Support** up to 128K tokens and can generate up to 8K tokens.
- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
**This repo contains the GPTQ-quantized 4-bit instruction-tuned 7B Qwen2.5 model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
- Number of Parameters: 7.61B
- Number of Parameters (Non-Embedding): 6.53B
- Number of Layers: 28
- Number of Attention Heads (GQA): 28 for Q and 4 for KV
- Context Length: Full 131,072 tokens and generation 8192 tokens
- Please refer to [this section](#processing-long-texts) for detailed instructions on how to deploy Qwen2.5 for handling long texts.
- Quantization: GPTQ 4-bit
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Requirements
The code for Qwen2.5 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
Also check out our [GPTQ documentation](https://qwen.readthedocs.io/en/latest/quantization/gptq.html) for more usage guide.
## Quickstart
The following code snippet uses `apply_chat_template` to show how to load the tokenizer and model and how to generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen2.5-7B-Instruct-GPTQ-Int4"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
### Processing Long Texts
The current `config.json` is set for context length up to 32,768 tokens.
To handle extensive inputs exceeding 32,768 tokens, we utilize [YaRN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts.
For supported frameworks, you could add the following to `config.json` to enable YaRN:
```json
{
...,
"rope_scaling": {
"factor": 4.0,
"original_max_position_embeddings": 32768,
"type": "yarn"
}
}
```
For deployment, we recommend using vLLM.
Please refer to our [Documentation](https://qwen.readthedocs.io/en/latest/deployment/vllm.html) for usage if you are not familiar with vLLM.
Presently, vLLM only supports static YARN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts**.
We advise adding the `rope_scaling` configuration only when processing long contexts is required.
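As a rough illustration, offline generation with vLLM could look like the following (a minimal sketch; the sampling settings are arbitrary, the GPTQ quantization is assumed to be picked up from the checkpoint config, and the linked documentation remains the authoritative reference):
```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams
model_name = "Qwen/Qwen2.5-7B-Instruct-GPTQ-Int4"
tokenizer = AutoTokenizer.from_pretrained(model_name)
llm = LLM(model=model_name)
sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=512)
messages = [
    {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
    {"role": "user", "content": "Give me a short introduction to large language model."},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = llm.generate([prompt], sampling_params)
print(outputs[0].outputs[0].text)
```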
## Evaluation & Performance
Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5/).
For quantized models, the benchmark results against the original bfloat16 models can be found [here](https://qwen.readthedocs.io/en/latest/benchmark/quantization_benchmark.html)
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to give us a cite.
```
@misc{qwen2.5,
title = {Qwen2.5: A Party of Foundation Models},
url = {https://qwenlm.github.io/blog/qwen2.5/},
author = {Qwen Team},
month = {September},
year = {2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
``` |
llava-hf/llama3-llava-next-8b-hf | llava-hf | "2024-08-16T06:08:00Z" | 74,405 | 27 | transformers | [
"transformers",
"safetensors",
"llava_next",
"image-text-to-text",
"vision",
"conversational",
"en",
"license:llama3",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | "2024-07-19T05:41:53Z" | ---
license: llama3
tags:
- vision
- image-text-to-text
language:
- en
pipeline_tag: image-text-to-text
---
# LLaVa-Next Model Card
The LLaVA-NeXT model was proposed in [LLaVA-NeXT: Stronger LLMs Supercharge Multimodal Capabilities in the Wild
](https://llava-vl.github.io/blog/2024-05-10-llava-next-stronger-llms/) by Bo Li, Kaichen Zhang, Hao Zhang, Dong Guo, Renrui Zhang, Feng Li, Yuanhan Zhang, Ziwei Liu, Chunyuan Li.
This LLaVa-NeXT series improves upon [LLaVa-1.6](https://llava-vl.github.io/blog/2024-01-30-llava-next/) by training with stronger language backbones, improving performance.
Disclaimer: The team releasing LLaVa-NeXT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
LLaVa combines a pre-trained large language model with a pre-trained vision encoder for multimodal chatbot use cases. LLaVA NeXT Llama3 improves on LLaVA 1.6 by:
- More diverse and high quality data mixture
- Better and bigger language backbone
Base LLM: [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/62441d1d9fdefb55a0b7d12c/FPshq08TKYD0e-qwPLDVO.png)
## Intended uses & limitations
You can use the raw model for tasks like image captioning, visual question answering, multimodal chatbot use cases. See the [model hub](https://huggingface.co/models?search=llava-hf) to look for
other versions on a task that interests you.
### How to use
You can load and use the model like following:
```python
from transformers import LlavaNextProcessor, LlavaNextForConditionalGeneration
import torch
from PIL import Image
import requests
processor = LlavaNextProcessor.from_pretrained("llava-hf/llama3-llava-next-8b-hf")
model = LlavaNextForConditionalGeneration.from_pretrained("llava-hf/llama3-llava-next-8b-hf", torch_dtype=torch.float16, device_map="auto")
# prepare image and text prompt, using the appropriate prompt template
url = "https://github.com/haotian-liu/LLaVA/blob/1a91fc274d7c35a9b50b3cb29c4247ae5837ce39/images/llava_v1_5_radar.jpg?raw=true"
image = Image.open(requests.get(url, stream=True).raw)
# Define a chat history and use `apply_chat_template` to get a correctly formatted prompt
# Each value in "content" has to be a list of dicts with types ("text", "image")
conversation = [
{
"role": "user",
"content": [
{"type": "text", "text": "What is shown in this image?"},
{"type": "image"},
],
},
]
prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
# autoregressively complete prompt
output = model.generate(**inputs, max_new_tokens=100)
print(processor.decode(output[0], skip_special_tokens=True))
```
### Model optimization
#### 4-bit quantization through `bitsandbytes` library
First make sure to install `bitsandbytes` (`pip install bitsandbytes`) and that you have access to a CUDA-compatible GPU device. Then simply change the snippet above as follows:
```diff
model = LlavaNextForConditionalGeneration.from_pretrained(
model_id,
torch_dtype=torch.float16,
low_cpu_mem_usage=True,
+ load_in_4bit=True
)
```
#### Use Flash-Attention 2 to further speed-up generation
First make sure to install `flash-attn`. Refer to the [original repository of Flash Attention](https://github.com/Dao-AILab/flash-attention) for installation instructions. Then simply change the snippet above as follows:
```diff
model = LlavaNextForConditionalGeneration.from_pretrained(
model_id,
torch_dtype=torch.float16,
low_cpu_mem_usage=True,
+ use_flash_attention_2=True
).to(0)
```
### Training Data
- 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP.
- 158K GPT-generated multimodal instruction-following data.
- 500K academic-task-oriented VQA data mixture.
- 50K GPT-4V data mixture.
- 40K ShareGPT data.
### BibTeX entry and citation info
```bibtex
@misc{li2024llavanext-strong,
title={LLaVA-NeXT: Stronger LLMs Supercharge Multimodal Capabilities in the Wild},
url={https://llava-vl.github.io/blog/2024-05-10-llava-next-stronger-llms/},
author={Li, Bo and Zhang, Kaichen and Zhang, Hao and Guo, Dong and Zhang, Renrui and Li, Feng and Zhang, Yuanhan and Liu, Ziwei and Li, Chunyuan},
month={May},
year={2024}
}
``` |
qgallouedec/tiny-Qwen2ForCausalLM | qgallouedec | "2024-11-11T00:24:24Z" | 74,294 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"trl",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-10-26T06:52:37Z" | ---
library_name: transformers
tags:
- trl
---
# Tiny Qwen2ForCausalLM
This is a minimal model built for unit tests in the [TRL](https://github.com/huggingface/trl) library.
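For example, a test can load it like any other causal LM (a minimal sketch; it assumes the repository ships a matching tokenizer, and the output is not expected to be meaningful):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "qgallouedec/tiny-Qwen2ForCausalLM"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
inputs = tokenizer("Hello", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=5)
print(tokenizer.decode(output[0]))  # tiny, mostly untrained weights, so expect gibberish
```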
|
allenai/Molmo-7B-D-0924 | allenai | "2024-10-10T23:20:03Z" | 74,221 | 428 | transformers | [
"transformers",
"safetensors",
"molmo",
"text-generation",
"multimodal",
"olmo",
"pixmo",
"image-text-to-text",
"conversational",
"custom_code",
"en",
"arxiv:2409.17146",
"base_model:Qwen/Qwen2-7B",
"base_model:finetune:Qwen/Qwen2-7B",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | image-text-to-text | "2024-09-25T01:48:22Z" | ---
license: apache-2.0
language:
- en
base_model:
- openai/clip-vit-large-patch14-336
- Qwen/Qwen2-7B
pipeline_tag: image-text-to-text
tags:
- multimodal
- olmo
- molmo
- pixmo
library_name: transformers
---
<img src="molmo_logo.png" alt="Logo for the Molmo Project" style="width: auto; height: 50px;">
# Molmo 7B-D
Molmo is a family of open vision-language models developed by the Allen Institute for AI. Molmo models are trained on PixMo, a dataset of 1 million, highly-curated image-text pairs. It has state-of-the-art performance among multimodal models with a similar size while being fully open-source. You can find all models in the Molmo family [here](https://huggingface.co/collections/allenai/molmo-66f379e6fe3b8ef090a8ca19).
**Learn more** about the Molmo family [in our announcement blog post](https://molmo.allenai.org/blog) or the [paper](https://huggingface.co/papers/2409.17146).
Molmo 7B-D is based on [Qwen2-7B](https://huggingface.co/Qwen/Qwen2-7B) and uses [OpenAI CLIP](https://huggingface.co/openai/clip-vit-large-patch14-336) as vision backbone.
It performs comfortably between GPT-4V and GPT-4o on both academic benchmarks and human evaluation.
It powers the **Molmo demo at** [**molmo.allenai.org**](https://molmo.allenai.org).
This checkpoint is a **preview** of the Molmo release. All artifacts used in creating Molmo (PixMo dataset, training code, evaluations, intermediate checkpoints) will be made available at a later date, furthering our commitment to open-source AI development and reproducibility.
[**Sign up here**](https://docs.google.com/forms/d/e/1FAIpQLSdML1MhNNBDsCHpgWG65Oydg2SjZzVasyqlP08nBrWjZp_c7A/viewform) to be the first to know when artifacts are released.
Quick links:
- 💬 [Demo](https://molmo.allenai.org/)
- 📂 [All Models](https://huggingface.co/collections/allenai/molmo-66f379e6fe3b8ef090a8ca19)
- 📃 [Paper](https://molmo.allenai.org/paper.pdf)
- 🎥 [Blog with Videos](https://molmo.allenai.org/blog)
## Quick Start
To run Molmo, first install dependencies:
```bash
pip install einops torchvision
```
Then, follow these steps:
```python
from transformers import AutoModelForCausalLM, AutoProcessor, GenerationConfig
from PIL import Image
import requests
# load the processor
processor = AutoProcessor.from_pretrained(
'allenai/Molmo-7B-D-0924',
trust_remote_code=True,
torch_dtype='auto',
device_map='auto'
)
# load the model
model = AutoModelForCausalLM.from_pretrained(
'allenai/Molmo-7B-D-0924',
trust_remote_code=True,
torch_dtype='auto',
device_map='auto'
)
# process the image and text
inputs = processor.process(
images=[Image.open(requests.get("https://picsum.photos/id/237/536/354", stream=True).raw)],
text="Describe this image."
)
# move inputs to the correct device and make a batch of size 1
inputs = {k: v.to(model.device).unsqueeze(0) for k, v in inputs.items()}
# generate output; maximum 200 new tokens; stop generation when <|endoftext|> is generated
output = model.generate_from_batch(
inputs,
GenerationConfig(max_new_tokens=200, stop_strings="<|endoftext|>"),
tokenizer=processor.tokenizer
)
# only get generated tokens; decode them to text
generated_tokens = output[0,inputs['input_ids'].size(1):]
generated_text = processor.tokenizer.decode(generated_tokens, skip_special_tokens=True)
# print the generated text
print(generated_text)
# >>> This image features an adorable black Labrador puppy, captured from a top-down
# perspective. The puppy is sitting on a wooden deck, which is composed ...
```
To make inference more efficient, run with autocast:
```python
import torch
with torch.autocast(device_type="cuda", enabled=True, dtype=torch.bfloat16):
output = model.generate_from_batch(
inputs,
GenerationConfig(max_new_tokens=200, stop_strings="<|endoftext|>"),
tokenizer=processor.tokenizer
)
```
We did most of our evaluation in this setting (autocast on, but float32 weights).
To even further reduce the memory requirements, the model can be run with bfloat16 weights:
```python
model.to(dtype=torch.bfloat16)
inputs["images"] = inputs["images"].to(torch.bfloat16)
output = model.generate_from_batch(
inputs,
GenerationConfig(max_new_tokens=200, stop_strings="<|endoftext|>"),
tokenizer=processor.tokenizer
)
```
Note that we have observed that this can change the output of the model compared to running with float32 weights.
## Evaluations
| Model | Average Score on 11 Academic Benchmarks | Human Preference Elo Rating |
|-----------------------------|-----------------------------------------|-----------------------------|
| Molmo 72B | 81.2 | 1077 |
| **Molmo 7B-D (this model)** | **77.3** | **1056** |
| Molmo 7B-O | 74.6 | 1051 |
| MolmoE 1B | 68.6 | 1032 |
| GPT-4o | 78.5 | 1079 |
| GPT-4V | 71.1 | 1041 |
| Gemini 1.5 Pro | 78.3 | 1074 |
| Gemini 1.5 Flash | 75.1 | 1054 |
| Claude 3.5 Sonnet | 76.7 | 1069 |
| Claude 3 Opus | 66.4 | 971 |
| Claude 3 Haiku | 65.3 | 999 |
| Qwen VL2 72B | 79.4 | 1037 |
| Qwen VL2 7B | 73.7 | 1025 |
| Intern VL2 LLAMA 76B | 77.1 | 1018 |
| Intern VL2 8B | 69.4 | 953 |
| Pixtral 12B | 69.5 | 1016 |
| Phi3.5-Vision 4B | 59.7 | 982 |
| PaliGemma 3B | 50.0 | 937 |
| LLAVA OneVision 72B | 76.6 | 1051 |
| LLAVA OneVision 7B | 72.0 | 1024 |
| Cambrian-1 34B | 66.8 | 953 |
| Cambrian-1 8B | 63.4 | 952 |
| xGen - MM - Interleave 4B | 59.5 | 979 |
| LLAVA-1.5 13B | 43.9 | 960 |
| LLAVA-1.5 7B | 40.7 | 951 |
*Benchmarks: AI2D test, ChartQA test, VQA v2.0 test, DocQA test, InfographicVQA test, TextVQA val, RealWorldQA, MMMU val, MathVista testmini, CountBenchQA, Flickr Count (we collected this new dataset that is significantly harder than CountBenchQA).*
## FAQs
### I'm getting a broadcast error when processing images!
Your image might not be in RGB format. You can convert it using the following code snippet:
```python
from PIL import Image
image = Image.open(...)
if image.mode != "RGB":
image = image.convert("RGB")
```
### Molmo doesn't work great with transparent images!
We received reports that Molmo models might struggle with transparent images.
For the time being, we recommend adding a white or dark background to your images before passing them to the model. The code snippet below shows how to do this using the Python Imaging Library (PIL):
```python
import requests
from PIL import Image, ImageStat
from transformers import AutoProcessor
# Load the image
url = "..."
image = Image.open(requests.get(url, stream=True).raw)
# Convert the image to grayscale to calculate brightness
gray_image = image.convert('L') # Convert to grayscale
# Calculate the average brightness
stat = ImageStat.Stat(gray_image)
average_brightness = stat.mean[0] # Get the average value
# Define background color based on brightness (threshold can be adjusted)
bg_color = (0, 0, 0) if average_brightness > 127 else (255, 255, 255)
# Create a new image with the same size as the original, filled with the background color
new_image = Image.new('RGB', image.size, bg_color)
# Paste the original image on top of the background (use image as a mask if needed)
new_image.paste(image, (0, 0), image if image.mode == 'RGBA' else None)
# Now you can pass the new_image to Molmo
processor = AutoProcessor.from_pretrained(
'allenai/Molmo-7B-D-0924',
trust_remote_code=True,
torch_dtype='auto',
device_map='auto'
)
```
## License and Use
This model is licensed under Apache 2.0. It is intended for research and educational use.
For more information, please see our [Responsible Use Guidelines](https://allenai.org/responsible-use). |
facebook/mask2former-swin-large-ade-semantic | facebook | "2023-09-11T20:35:29Z" | 74,113 | 11 | transformers | [
"transformers",
"pytorch",
"safetensors",
"mask2former",
"vision",
"image-segmentation",
"dataset:coco",
"arxiv:2112.01527",
"arxiv:2107.06278",
"license:other",
"endpoints_compatible",
"region:us"
] | image-segmentation | "2023-01-05T12:25:00Z" | ---
license: other
tags:
- vision
- image-segmentation
datasets:
- coco
widget:
- src: http://images.cocodataset.org/val2017/000000039769.jpg
example_title: Cats
- src: http://images.cocodataset.org/val2017/000000039770.jpg
example_title: Castle
---
# Mask2Former
Mask2Former model trained on ADE20k semantic segmentation (large-sized version, Swin backbone). It was introduced in the paper [Masked-attention Mask Transformer for Universal Image Segmentation
](https://arxiv.org/abs/2112.01527) and first released in [this repository](https://github.com/facebookresearch/Mask2Former/).
Disclaimer: The team releasing Mask2Former did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Mask2Former addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation. Mask2Former outperforms the previous SOTA,
[MaskFormer](https://arxiv.org/abs/2107.06278), both in terms of performance and efficiency, by (i) replacing the pixel decoder with a more advanced multi-scale deformable attention Transformer, (ii) adopting a Transformer decoder with masked attention to boost performance
without introducing additional computation and (iii) improving training efficiency by calculating the loss on subsampled points instead of whole masks.
![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/mask2former_architecture.png)
## Intended uses & limitations
You can use this particular checkpoint for semantic segmentation. See the [model hub](https://huggingface.co/models?search=mask2former) to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation
# load Mask2Former fine-tuned on ADE20k semantic segmentation
processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-large-ade-semantic")
model = Mask2FormerForUniversalSegmentation.from_pretrained("facebook/mask2former-swin-large-ade-semantic")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries, num_labels + 1)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to processor for postprocessing
predicted_semantic_map = processor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
# we refer to the demo notebooks for visualization (see "Resources" section in the Mask2Former docs)
```
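For a quick visual sanity check, the predicted map can be plotted directly (a minimal sketch continuing the snippet above; it assumes `matplotlib` is installed):
```python
import matplotlib.pyplot as plt
# predicted_semantic_map is a (height, width) tensor of ADE20k class indices
plt.imshow(predicted_semantic_map.cpu().numpy())
plt.axis("off")
plt.show()
```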
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/mask2former). |
timm/xcit_large_24_p8_224.fb_in1k | timm | "2024-02-10T23:43:18Z" | 74,107 | 1 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2106.09681",
"license:apache-2.0",
"region:us"
] | image-classification | "2023-04-13T01:59:56Z" | ---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
---
# Model card for xcit_large_24_p8_224.fb_in1k
An XCiT (Cross-Covariance Image Transformer) image classification model. Pretrained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 188.9
- GMACs: 141.2
- Activations (M): 181.6
- Image size: 224 x 224
- **Papers:**
- XCiT: Cross-Covariance Image Transformers: https://arxiv.org/abs/2106.09681
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/facebookresearch/xcit
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('xcit_large_24_p8_224.fb_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
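To turn the raw tensors into something readable, a minimal sketch (reusing the variables from the snippet above) prints the top-5 class indices and their probabilities; mapping indices to label names requires an ImageNet-1k class-index file of your choice:
```python
# Minimal sketch: print the top-5 predictions produced above.
for prob, idx in zip(top5_probabilities[0].tolist(), top5_class_indices[0].tolist()):
    print(f"ImageNet-1k class index {idx}: {prob:.2f}%")
```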
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'xcit_large_24_p8_224.fb_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 785, 768) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
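Embeddings from this head are commonly compared with cosine similarity; a minimal sketch (assuming a second PIL image `img_b` loaded the same way as `img` above) looks like this:
```python
# Minimal sketch: compare two image embeddings with cosine similarity.
# Reuses `model`, `transforms`, and `img` from above; `img_b` is a second, assumed image.
import torch
import torch.nn.functional as F

with torch.no_grad():
    emb_a = model(transforms(img).unsqueeze(0))    # (1, num_features)
    emb_b = model(transforms(img_b).unsqueeze(0))  # (1, num_features)

similarity = F.cosine_similarity(emb_a, emb_b)
print(f"cosine similarity: {similarity.item():.4f}")
```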
## Citation
```bibtex
@article{el2021xcit,
title={XCiT: Cross-Covariance Image Transformers},
author={El-Nouby, Alaaeldin and Touvron, Hugo and Caron, Mathilde and Bojanowski, Piotr and Douze, Matthijs and Joulin, Armand and Laptev, Ivan and Neverova, Natalia and Synnaeve, Gabriel and Verbeek, Jakob and others},
journal={arXiv preprint arXiv:2106.09681},
year={2021}
}
```
|
openbmb/MiniCPM-V-2_6-int4 | openbmb | "2024-09-25T03:24:21Z" | 74,054 | 64 | transformers | [
"transformers",
"safetensors",
"minicpmv",
"feature-extraction",
"minicpm-v",
"vision",
"ocr",
"multi-image",
"video",
"custom_code",
"image-text-to-text",
"conversational",
"multilingual",
"dataset:openbmb/RLAIF-V-Dataset",
"4-bit",
"bitsandbytes",
"region:us"
] | image-text-to-text | "2024-08-04T05:55:10Z" | ---
pipeline_tag: image-text-to-text
datasets:
- openbmb/RLAIF-V-Dataset
library_name: transformers
language:
- multilingual
tags:
- minicpm-v
- vision
- ocr
- multi-image
- video
- custom_code
---
## MiniCPM-V 2.6 int4
This is the int4 quantized version of [MiniCPM-V 2.6](https://huggingface.co/openbmb/MiniCPM-V-2_6).
Running the int4 version uses less GPU memory (about 7 GB).
## Usage
Inference using Hugging Face transformers on NVIDIA GPUs. Requirements tested on Python 3.10:
```
Pillow==10.1.0
torch==2.1.2
torchvision==0.16.2
transformers==4.40.0
sentencepiece==0.1.99
accelerate==0.30.1
bitsandbytes==0.43.1
```
```python
# test.py
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained('openbmb/MiniCPM-V-2_6-int4', trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-2_6-int4', trust_remote_code=True)
model.eval()
image = Image.open('xx.jpg').convert('RGB')
question = 'What is in the image?'
msgs = [{'role': 'user', 'content': [image, question]}]
res = model.chat(
image=None,
msgs=msgs,
tokenizer=tokenizer
)
print(res)
## if you want to use streaming, please make sure sampling=True and stream=True
## the model.chat will return a generator
res = model.chat(
image=None,
msgs=msgs,
tokenizer=tokenizer,
sampling=True,
temperature=0.7,
stream=True
)
generated_text = ""
for new_text in res:
generated_text += new_text
print(new_text, flush=True, end='')
```
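For multi-turn conversations, the usual pattern (a sketch, assuming `model.chat` accepts the running message list as in the example above; the follow-up question is a placeholder) is to append the previous answer before asking the next question:
```python
# Hedged sketch: multi-turn chat by appending the previous answer to the message list.
# Reuses `model`, `tokenizer`, `msgs`, and `generated_text` from the example above.
msgs.append({'role': 'assistant', 'content': [generated_text]})
msgs.append({'role': 'user', 'content': ['Describe the main object in more detail.']})

follow_up = model.chat(
    image=None,
    msgs=msgs,
    tokenizer=tokenizer
)
print(follow_up)
```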
|
google/siglip-large-patch16-256 | google | "2024-09-26T08:21:54Z" | 73,956 | 9 | transformers | [
"transformers",
"safetensors",
"siglip",
"zero-shot-image-classification",
"vision",
"arxiv:2303.15343",
"arxiv:2209.06794",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | zero-shot-image-classification | "2024-01-08T12:48:50Z" | ---
license: apache-2.0
tags:
- vision
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-dog-music.png
candidate_labels: playing music, playing sports
example_title: Cat & Dog
---
# SigLIP (large-sized model)
SigLIP model pre-trained on WebLI at resolution 256x256. It was introduced in the paper [Sigmoid Loss for Language Image Pre-Training](https://arxiv.org/abs/2303.15343) by Zhai et al. and first released in [this repository](https://github.com/google-research/big_vision).
Disclaimer: The team releasing SigLIP did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
SigLIP is [CLIP](https://huggingface.co/docs/transformers/model_doc/clip), a multimodal model, with a better loss function. The sigmoid loss operates solely on image-text pairs and does not require a global view of the pairwise similarities for normalization. This allows further scaling up the batch size, while also performing better at smaller batch sizes.
A TLDR of SigLIP by one of the authors can be found [here](https://twitter.com/giffmana/status/1692641733459267713).
## Intended uses & limitations
You can use the raw model for tasks like zero-shot image classification and image-text retrieval. See the [model hub](https://huggingface.co/models?search=google/siglip) to look for
other versions on a task that interests you.
### How to use
Here is how to use this model to perform zero-shot image classification:
```python
from PIL import Image
import requests
from transformers import AutoProcessor, AutoModel
import torch
model = AutoModel.from_pretrained("google/siglip-large-patch16-256")
processor = AutoProcessor.from_pretrained("google/siglip-large-patch16-256")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = ["a photo of 2 cats", "a photo of 2 dogs"]
inputs = processor(text=texts, images=image, padding="max_length", return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image
probs = torch.sigmoid(logits_per_image) # these are the probabilities
print(f"{probs[0][0]:.1%} that image 0 is '{texts[0]}'")
```
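Because the sigmoid loss scores each image-text pair independently, the probabilities do not need to sum to 1; a short sketch (reusing `texts` and `probs` from above) reports them per candidate:
```python
# Minimal sketch: report the independent sigmoid probability for every candidate text.
for text, p in zip(texts, probs[0].tolist()):
    print(f"{p:.1%} that the image matches '{text}'")
```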
Alternatively, one can leverage the pipeline API which abstracts away the complexity for the user:
```python
from transformers import pipeline
from PIL import Image
import requests
# load pipe
image_classifier = pipeline(task="zero-shot-image-classification", model="google/siglip-large-patch16-256")
# load image
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
# inference
outputs = image_classifier(image, candidate_labels=["2 cats", "a plane", "a remote"])
outputs = [{"score": round(output["score"], 4), "label": output["label"] } for output in outputs]
print(outputs)
```
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/main/model_doc/siglip.html#).
## Training procedure
### Training data
SigLIP is pre-trained on the English image-text pairs of the WebLI dataset [(Chen et al., 2023)](https://arxiv.org/abs/2209.06794).
### Preprocessing
Images are resized/rescaled to the same resolution (256x256) and normalized across the RGB channels with mean (0.5, 0.5, 0.5) and standard deviation (0.5, 0.5, 0.5).
Texts are tokenized and padded to the same length (64 tokens).
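If you need to reproduce the image side of this preprocessing outside of the processor, a rough sketch with torchvision (an assumption for illustration; the processor remains the reference implementation) looks like this:
```python
# Hedged sketch of the image preprocessing described above:
# resize to 256x256, scale to [0, 1], normalize with mean 0.5 and std 0.5 per channel.
from torchvision import transforms

siglip_image_transform = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
])

# `image` is a PIL image, e.g. the one loaded in the examples above.
pixel_values = siglip_image_transform(image).unsqueeze(0)  # (1, 3, 256, 256)
```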
### Compute
The model was trained on 16 TPU-v4 chips for three days.
## Evaluation results
Evaluation of SigLIP compared to CLIP is shown below (taken from the paper).
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/siglip_table.jpeg"
alt="drawing" width="600"/>
### BibTeX entry and citation info
```bibtex
@misc{zhai2023sigmoid,
title={Sigmoid Loss for Language Image Pre-Training},
author={Xiaohua Zhai and Basil Mustafa and Alexander Kolesnikov and Lucas Beyer},
year={2023},
eprint={2303.15343},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
``` |
unsloth/Phi-3.5-mini-instruct-bnb-4bit | unsloth | "2024-08-20T23:52:52Z" | 73,719 | 10 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"unsloth",
"phi3",
"phi",
"conversational",
"multilingual",
"arxiv:2404.14219",
"arxiv:2407.13833",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-08-20T23:29:36Z" | ---
license_link: https://huggingface.co/microsoft/Phi-3.5-mini-instruct/resolve/main/LICENSE
language:
- multilingual
library_name: transformers
license: mit
tags:
- unsloth
- transformers
- phi3
- phi
---
# Finetune Phi-3.5, Llama 3.1, Mistral 2-5x faster with 70% less memory via Unsloth!
We have a free Google Colab Tesla T4 notebook for Phi-3.5 (mini) here: https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
## ✨ Finetune for Free
All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **Llama-3.1 8b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less |
| **Gemma-2 9b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2.4x faster | 58% less |
| **Mistral 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less |
| **TinyLlama** | [▶️ Start on Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing) | 3.9x faster | 74% less |
| **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less |
- This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
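Since this repository hosts a bitsandbytes 4-bit copy of Phi-3.5-mini-instruct, you can also load it directly with plain transformers; a minimal sketch (assuming `bitsandbytes` and `accelerate` are installed and that the stored quantization config is picked up automatically) is:
```python
# Hedged sketch: load the pre-quantized 4-bit checkpoint with transformers + bitsandbytes.
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "unsloth/Phi-3.5-mini-instruct-bnb-4bit",
    device_map="auto",  # requires accelerate; weights load in 4-bit via the saved config
)
tokenizer = AutoTokenizer.from_pretrained("unsloth/Phi-3.5-mini-instruct-bnb-4bit")
```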
## Special Thanks
A huge thank you to Microsoft AI and Phi team for creating and releasing these models.
## Model Summary
Phi-3.5-mini is a lightweight, state-of-the-art open model built upon datasets used for Phi-3 - synthetic data and filtered publicly available websites - with a focus on very high-quality, reasoning dense data. The model belongs to the Phi-3 model family and supports 128K token context length. The model underwent a rigorous enhancement process, incorporating both supervised fine-tuning, proximal policy optimization, and direct preference optimization to ensure precise instruction adherence and robust safety measures.
🏡 [Phi-3 Portal](https://azure.microsoft.com/en-us/products/phi-3) <br>
📰 [Phi-3 Microsoft Blog](https://aka.ms/phi3.5-techblog) <br>
📖 [Phi-3 Technical Report](https://arxiv.org/abs/2404.14219) <br>
👩🍳 [Phi-3 Cookbook](https://github.com/microsoft/Phi-3CookBook) <br>
🖥️ [Try It](https://aka.ms/try-phi3.5mini) <br>
**Phi-3.5**: [[mini-instruct]](https://huggingface.co/microsoft/Phi-3.5-mini-instruct); [[MoE-instruct]](https://huggingface.co/microsoft/Phi-3.5-MoE-instruct) ; [[vision-instruct]](https://huggingface.co/microsoft/Phi-3.5-vision-instruct)
## Intended Uses
### Primary Use Cases
The model is intended for commercial and research use in multiple languages. The model provides uses for general purpose AI systems and applications which require:
1) Memory/compute constrained environments
2) Latency bound scenarios
3) Strong reasoning (especially code, math and logic)
Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features.
### Use Case Considerations
Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case.
***Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.***
## Release Notes
This is an update over the June 2024 instruction-tuned Phi-3 Mini release based on valuable user feedback. The model used additional post-training data leading to substantial gains on multilingual, multi-turn conversation quality, and reasoning capability. We believe most use cases will benefit from this release, but we encourage users to test in their particular AI applications. We appreciate the enthusiastic adoption of the Phi-3 model family, and continue to welcome all feedback from the community.
### Multilingual
The table below highlights the multilingual capability of Phi-3.5 Mini on the multilingual MMLU, MEGA, and multilingual MMLU-pro datasets. Overall, we observed that even with just 3.8B active parameters, the model is competitive on multilingual tasks with other models that have much larger active parameter counts.
| Benchmark | Phi-3.5 Mini-Ins | Phi-3.1-Mini-128K-Ins | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) |
|----------------------------|------------------|-----------------------|--------------------------|---------------------------|------------------|----------------|------------------|-------------------------------|
| Multilingual MMLU | 55.4 | 51.08 | 47.4 | 58.9 | 56.2 | 63.8 | 77.2 | 72.9 |
| Multilingual MMLU-Pro | 30.9 | 30.21 | 15.0 | 34.0 | 21.4 | 43.0 | 57.9 | 53.2 |
| MGSM | 47.9 | 41.56 | 31.8 | 63.3 | 56.7 | 75.1 | 75.8 | 81.7 |
| MEGA MLQA | 61.7 | 55.5 | 43.9 | 61.2 | 45.2 | 54.4 | 61.6 | 70.0 |
| MEGA TyDi QA | 62.2 | 55.9 | 54.0 | 63.7 | 54.5 | 65.6 | 63.6 | 81.8 |
| MEGA UDPOS | 46.5 | 48.1 | 57.2 | 58.2 | 54.1 | 56.6 | 62.4 | 66.0 |
| MEGA XCOPA | 63.1 | 62.4 | 58.8 | 10.8 | 21.1 | 31.2 | 95.0 | 90.3 |
| MEGA XStoryCloze | 73.5 | 73.6 | 75.5 | 92.3 | 71.0 | 87.0 | 20.7 | 96.6 |
| **Average** | **55.2** | **52.3** | **47.9** | **55.3** | **47.5** | **59.6** | **64.3** | **76.6** |
The table below shows Multilingual MMLU scores in some of the supported languages. For more multi-lingual benchmarks and details, see [Appendix A](#appendix-a).
| Benchmark | Phi-3.5 Mini-Ins | Phi-3.1-Mini-128K-Ins | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) |
|-----------|------------------|-----------------------|--------------------------|---------------------------|------------------|----------------|------------------|-------------------------------|
| Arabic | 44.2 | 35.4 | 33.7 | 45.3 | 49.1 | 56.3 | 73.6 | 67.1 |
| Chinese | 52.6 | 46.9 | 45.9 | 58.2 | 54.4 | 62.7 | 66.7 | 70.8 |
| Dutch | 57.7 | 48.0 | 51.3 | 60.1 | 55.9 | 66.7 | 80.6 | 74.2 |
| French | 61.1 | 61.7 | 53.0 | 63.8 | 62.8 | 67.0 | 82.9 | 75.6 |
| German | 62.4 | 61.3 | 50.1 | 64.5 | 59.9 | 65.7 | 79.5 | 74.3 |
| Italian | 62.8 | 63.1 | 52.5 | 64.1 | 55.9 | 65.7 | 82.6 | 75.9 |
| Russian | 50.4 | 45.3 | 48.9 | 59.0 | 57.4 | 63.2 | 78.7 | 72.6 |
| Spanish | 62.6 | 61.3 | 53.9 | 64.3 | 62.6 | 66.0 | 80.0 | 75.5 |
| Ukrainian | 45.2 | 36.7 | 46.9 | 56.6 | 52.9 | 62.0 | 77.4 | 72.6 |
### Long Context
Phi-3.5-mini supports a 128K context length and is therefore capable of several long-context tasks, including long document/meeting summarization, long document QA, and long document information retrieval. We see that Phi-3.5-mini is clearly better than the Gemma-2 family, which only supports an 8K context length. Phi-3.5-mini is competitive with other much larger open-weight models such as Llama-3.1-8B-instruct, Mistral-7B-instruct-v0.3, and Mistral-Nemo-12B-instruct-2407.
| Benchmark | Phi-3.5-mini-instruct | Llama-3.1-8B-instruct | Mistral-7B-instruct-v0.3 | Mistral-Nemo-12B-instruct-2407 | Gemini-1.5-Flash | GPT-4o-mini-2024-07-18 (Chat) |
|--|--|--|--|--|--|--|
| GovReport | 25.9 | 25.1 | 26.0 | 25.6 | 27.8 | 24.8 |
| QMSum | 21.3 | 21.6 | 21.3 | 22.1 | 24.0 | 21.7 |
| Qasper | 41.9 | 37.2 | 31.4 | 30.7 | 43.5 | 39.8 |
| SQuALITY | 25.3 | 26.2 | 25.9 | 25.8 | 23.5 | 23.8 |
| SummScreenFD | 16.0 | 17.6 | 17.5 | 18.2 | 16.3 | 17.0 |
| **Average** | **26.1** | **25.5** | **24.4** | **24.5** | **27.0** | **25.4** |
RULER: a retrieval-based benchmark for long context understanding
| Model | 4K | 8K | 16K | 32K | 64K | 128K | Average |
|--|--|--|--|--|--|--|--|
| **Phi-3.5-mini-instruct** | 94.3 | 91.1 | 90.7 | 87.1 | 78.0 | 63.6 | **84.1** |
| **Llama-3.1-8B-instruct** | 95.5 | 93.8 | 91.6 | 87.4 | 84.7 | 77.0 | **88.3** |
| **Mistral-Nemo-12B-instruct-2407** | 87.8 | 87.2 | 87.7 | 69.0 | 46.8 | 19.0 | **66.2** |
RepoQA: a benchmark for long context code understanding
| Model | Python | C++ | Rust | Java | TypeScript | Average |
|--|--|--|--|--|--|--|
| **Phi-3.5-mini-instruct** | 86 | 67 | 73 | 77 | 82 | **77** |
| **Llama-3.1-8B-instruct** | 80 | 65 | 73 | 76 | 63 | **71** |
| **Mistral-7B-instruct-v0.3** | 61 | 57 | 51 | 61 | 80 | **62** |
## Usage
### Requirements
The Phi-3 family has been integrated in the `4.43.0` version of `transformers`. The current `transformers` version can be verified with: `pip list | grep transformers`.
Examples of required packages:
```
flash_attn==2.5.8
torch==2.3.1
accelerate==0.31.0
transformers==4.43.0
```
Phi-3.5-mini-instruct is also available in [Azure AI Studio](https://aka.ms/try-phi3.5mini)
### Tokenizer
Phi-3.5-mini-Instruct supports a vocabulary size of up to `32064` tokens. The [tokenizer files](https://huggingface.co/microsoft/Phi-3.5-mini-instruct/blob/main/added_tokens.json) already provide placeholder tokens that can be used for downstream fine-tuning, but they can also be extended up to the model's vocabulary size.
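If you do extend the vocabulary, keep the embedding matrix in sync; a minimal sketch (the added token string is a hypothetical placeholder, not an official Phi-3.5 token) is:
```python
# Hedged sketch: add a custom token and resize the embeddings only if the vocabulary grew.
# "<|my_tool_call|>" is an illustrative placeholder token.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3.5-mini-instruct")
model = AutoModelForCausalLM.from_pretrained("microsoft/Phi-3.5-mini-instruct", torch_dtype="auto")

num_added = tokenizer.add_special_tokens({"additional_special_tokens": ["<|my_tool_call|>"]})
if num_added > 0 and len(tokenizer) > model.get_input_embeddings().num_embeddings:
    model.resize_token_embeddings(len(tokenizer))
```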
### Input Formats
Given the nature of the training data, the Phi-3.5-mini-instruct model is best suited for prompts using the chat format as follows:
```
<|system|>
You are a helpful assistant.<|end|>
<|user|>
How to explain Internet for a medieval knight?<|end|>
<|assistant|>
```
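In practice, the tokenizer's chat template builds this prompt for you; a short sketch (assuming the Hugging Face checkpoint ships its chat template, as the official repo does):
```python
# Minimal sketch: build the chat-format prompt with the tokenizer's chat template.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3.5-mini-instruct")
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "How to explain Internet for a medieval knight?"},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # should end with the <|assistant|> tag shown above
```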
### Loading the model locally
After obtaining the Phi-3.5-mini-instruct model checkpoint, users can use this sample code for inference.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
torch.random.manual_seed(0)
model = AutoModelForCausalLM.from_pretrained(
"microsoft/Phi-3.5-mini-instruct",
device_map="cuda",
torch_dtype="auto",
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3.5-mini-instruct")
messages = [
{"role": "system", "content": "You are a helpful AI assistant."},
{"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"},
{"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."},
{"role": "user", "content": "What about solving an 2x + 3 = 7 equation?"},
]
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
)
generation_args = {
"max_new_tokens": 500,
"return_full_text": False,
"temperature": 0.0,
"do_sample": False,
}
output = pipe(messages, **generation_args)
print(output[0]['generated_text'])
```
Note: if you want to use flash attention, call `AutoModelForCausalLM.from_pretrained()` with `attn_implementation="flash_attention_2"`.
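For example (a sketch; requires a supported GPU and the `flash_attn` package from the requirements above):
```python
# Hedged sketch: the same loading call with FlashAttention-2 enabled.
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3.5-mini-instruct",
    device_map="cuda",
    torch_dtype="auto",
    trust_remote_code=True,
    attn_implementation="flash_attention_2",
)
```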
## Responsible AI Considerations
Like other language models, the Phi family of models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:
+ Quality of Service: The Phi models are trained primarily on English text and some additional multilingual text. Languages other than English will experience worse performance as well as performance disparities across non-English. English language varieties with less representation in the training data might experience worse performance than standard American English.
+ Multilingual performance and safety gaps: We believe it is important to make language models more widely available across different languages, but the Phi 3 models still exhibit challenges common across multilingual releases. As with any deployment of LLMs, developers will be better positioned to test for performance or safety gaps for their linguistic and cultural context and customize the model with additional fine-tuning and appropriate safeguards.
+ Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups, cultural contexts, or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.
+ Inappropriate or Offensive Content: These models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the case.
+ Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.
+ Limited Scope for Code: The majority of Phi-3 training data is based in Python and uses common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.
+ Long Conversation: Phi-3 models, like other models, can in some cases generate responses that are repetitive, unhelpful, or inconsistent in very long chat sessions in both English and non-English languages. Developers are encouraged to place appropriate mitigations, like limiting conversation turns to account for possible conversational drift.
Developers should apply responsible AI best practices, including mapping, measuring, and mitigating risks associated with their specific use case and cultural, linguistic context. Phi-3 family of models are general purpose models. As developers plan to deploy these models for specific use cases, they are encouraged to fine-tune the models for their use case and leverage the models as part of broader AI systems with language-specific safeguards in place. Important areas for consideration include:
+ Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.
+ High-Risk Scenarios: Developers should assess the suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.
+ Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).
+ Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.
+ Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.
## Training
### Model
**Architecture:** Phi-3.5-mini has 3.8B parameters and is a dense decoder-only Transformer model using the same tokenizer as Phi-3 Mini.<br>
**Inputs:** Text. It is best suited for prompts using chat format.<br>
**Context length:** 128K tokens<br>
**GPUs:** 512 H100-80G<br>
**Training time:** 10 days<br>
**Training data:** 3.4T tokens<br>
**Outputs:** Generated text in response to the input<br>
**Dates:** Trained between June and August 2024<br>
**Status:** This is a static model trained on an offline dataset with cutoff date October 2023 for publicly available data. Future versions of the tuned models may be released as we improve models.<br>
**Supported languages:** Arabic, Chinese, Czech, Danish, Dutch, English, Finnish, French, German, Hebrew, Hungarian, Italian, Japanese, Korean, Norwegian, Polish, Portuguese, Russian, Spanish, Swedish, Thai, Turkish, Ukrainian<br>
**Release date:** August 2024<br>
### Training Datasets
Our training data includes a wide variety of sources, totaling 3.4 trillion tokens, and is a combination of
1) publicly available documents filtered rigorously for quality, selected high-quality educational data, and code;
2) newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.);
3) high quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness.
We are focusing on the quality of data that could potentially improve the reasoning ability for the model, and we filter the publicly available documents to contain the correct level of knowledge. As an example, the result of a game in premier league in a particular day might be good training data for frontier models, but we need to remove such information to leave more model capacity for reasoning for the small size models. More details about data can be found in the [Phi-3 Technical Report](https://arxiv.org/pdf/2404.14219).
### Fine-tuning
A basic example of multi-GPUs supervised fine-tuning (SFT) with TRL and Accelerate modules is provided [here](https://huggingface.co/microsoft/Phi-3.5-mini-instruct/resolve/main/sample_finetune.py).
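The linked script is the reference; purely as orientation, a heavily simplified single-GPU sketch with TRL (the tiny in-memory dataset and hyperparameters are illustrative placeholders, and argument names vary across TRL versions) might look like:
```python
# Hedged sketch of supervised fine-tuning with TRL; dataset and hyperparameters are placeholders.
from datasets import Dataset
from transformers import AutoModelForCausalLM
from trl import SFTConfig, SFTTrainer

model = AutoModelForCausalLM.from_pretrained("microsoft/Phi-3.5-mini-instruct", torch_dtype="auto")

# One example in the Phi-3.5 chat format shown earlier; replace with a real dataset.
dataset = Dataset.from_dict({
    "text": ["<|user|>\nWhat is 2 + 2?<|end|>\n<|assistant|>\n2 + 2 equals 4.<|end|>\n"],
})

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="phi35-mini-sft",   # placeholder output directory
        dataset_text_field="text",
        per_device_train_batch_size=1,
        num_train_epochs=1,
    ),
)
trainer.train()
```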
## Benchmarks
We report the results under completion format for Phi-3.5-mini on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Mistral-7B-Instruct-v0.3, Mistral-Nemo-12B-Ins-2407, Llama-3.1-8B-Ins, Gemma-2-9B-Ins, Gemini 1.5 Flash, and GPT-4o-mini-2024-07-18 (Chat).
All the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation.
As is now standard, we use few-shot prompts to evaluate the models, at temperature 0.
The prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3.
More specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model.
The number of k-shot examples is listed per benchmark. A high-level overview of the model quality on representative benchmarks:
| Category | Benchmark | Phi-3.5 Mini-Ins | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) |
|----------------|--------------------------|------------------|--------------------------|---------------------------|------------------|----------------|------------------|------------------------------|
| Popular aggregated benchmark | Arena Hard | 37 | 18.1 | 39.4 | 25.7 | 42 | 55.2 | 75 |
| | BigBench Hard CoT (0-shot) | 69 | 33.4 | 60.2 | 63.4 | 63.5 | 66.7 | 80.4 |
| | MMLU (5-shot) | 69 | 60.3 | 67.2 | 68.1 | 71.3 | 78.7 | 77.2 |
| | MMLU-Pro (0-shot, CoT) | 47.4 | 18 | 40.7 | 44 | 50.1 | 57.2 | 62.8 |
| Reasoning | ARC Challenge (10-shot) | 84.6 | 77.9 | 84.8 | 83.1 | 89.8 | 92.8 | 93.5 |
| | BoolQ (2-shot) | 78 | 80.5 | 82.5 | 82.8 | 85.7 | 85.8 | 88.7 |
| | GPQA (0-shot, CoT) | 30.4 | 15.6 | 28.6 | 26.3 | 29.2 | 37.5 | 41.1 |
| | HellaSwag (5-shot) | 69.4 | 71.6 | 76.7 | 73.5 | 80.9 | 67.5 | 87.1 |
| | OpenBookQA (10-shot) | 79.2 | 78 | 84.4 | 84.8 | 89.6 | 89 | 90 |
| | PIQA (5-shot) | 81 | 73.4 | 83.5 | 81.2 | 83.7 | 87.5 | 88.7 |
| | Social IQA (5-shot) | 74.7 | 73 | 75.3 | 71.8 | 74.7 | 77.8 | 82.9 |
| | TruthfulQA (MC2) (10-shot) | 64 | 64.7 | 68.1 | 69.2 | 76.6 | 76.6 | 78.2 |
| | WinoGrande (5-shot) | 68.5 | 58.1 | 70.4 | 64.7 | 74 | 74.7 | 76.9 |
| Multilingual | Multilingual MMLU (5-shot) | 55.4 | 47.4 | 58.9 | 56.2 | 63.8 | 77.2 | 72.9 |
| | MGSM (0-shot CoT) | 47.9 | 31.8 | 63.3 | 56.7 | 76.4 | 75.8 | 81.7 |
| Math | GSM8K (8-shot, CoT) | 86.2 | 54.4 | 84.2 | 82.4 | 84.9 | 82.4 | 91.3 |
| | MATH (0-shot, CoT) | 48.5 | 19 | 31.2 | 47.6 | 50.9 | 38 | 70.2 |
| Long context | Qasper | 41.9 | 31.4 | 30.7 | 37.2 | 13.9 | 43.5 | 39.8 |
| | SQuALITY | 24.3 | 25.9 | 25.8 | 26.2 | 0 | 23.5 | 23.8 |
| Code Generation| HumanEval (0-shot) | 62.8 | 35.4 | 63.4 | 66.5 | 61 | 74.4 | 86.6 |
| | MBPP (3-shot) | 69.6 | 50.4 | 68.1 | 69.4 | 69.3 | 77.5 | 84.1 |
| **Average** | | **61.4** | **48.5** | **61.3** | **61.0** | **63.3** | **68.5** | **74.9** |
We take a closer look at different categories across public benchmark datasets at the table below:
| Category | Phi-3.5 Mini-Ins | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) |
|----------------------------|------------------|--------------------------|---------------------------|------------------|----------------|------------------|------------------------------|
| Popular aggregated benchmark | 55.6 | 32.5 | 51.9 | 50.3 | 56.7 | 64.5 | 73.9 |
| Reasoning | 70.1 | 65.2 | 72.2 | 70.5 | 75.4 | 77.7 | 80 |
| Language understanding | 62.6 | 62.8 | 67 | 62.9 | 72.8 | 66.6 | 76.8 |
| Robustness | 59.7 | 53.4 | 65.2 | 59.8 | 64.7 | 68.9 | 77.5 |
| Long context | 26.1 | 25.5 | 24.4 | 24.5 | 0 | 27 | 25.4 |
| Math | 67.4 | 36.7 | 57.7 | 65 | 67.9 | 60.2 | 80.8 |
| Code generation | 62 | 43.1 | 56.9 | 65.8 | 58.3 | 66.8 | 69.9 |
| Multilingual | 55.2 | 47.9 | 55.3 | 47.5 | 59.6 | 64.3 | 76.6 |
Overall, with only 3.8B parameters, the model achieves a similar level of multilingual language understanding and reasoning ability as much larger models.
However, it is still fundamentally limited by its size for certain tasks.
The model simply does not have the capacity to store too much factual knowledge, therefore, users may experience factual incorrectness.
However, we believe such weakness can be resolved by augmenting Phi-3.5 with a search engine, particularly when using the model under RAG settings.
## Safety Evaluation and Red-Teaming
We leveraged various evaluation techniques including red teaming, adversarial conversation simulations, and multilingual safety evaluation benchmark datasets to
evaluate Phi-3.5 models' propensity to produce undesirable outputs across multiple languages and risk categories.
Several approaches were used to compensate for the limitations of one approach alone. Findings across the various evaluation methods indicate that safety
post-training that was done as detailed in the [Phi-3 Safety Post-Training paper](https://arxiv.org/pdf/2407.13833) had a positive impact across multiple languages and risk categories as observed by
refusal rates (refusal to output undesirable outputs) and robustness to jailbreak techniques. Note, however, while comprehensive red team evaluations were conducted
across all models in the prior release of Phi models, red teaming was largely focused on Phi-3.5 MOE across multiple languages and risk categories for this release as
it is the largest and most capable of the three models. Details on prior red team evaluations across Phi models can be found in the [Phi-3 Safety Post-Training paper](https://arxiv.org/pdf/2407.13833).
For this release, insights from red teaming indicate that the models may refuse to generate undesirable outputs in English, even when the request for undesirable output
is in another language. Models may also be more susceptible to longer multi-turn jailbreak techniques across both English and non-English languages. These findings
highlight the need for industry-wide investment in the development of high-quality safety evaluation datasets across multiple languages, including low resource languages,
and risk areas that account for cultural nuances where those languages are spoken.
## Software
* [PyTorch](https://github.com/pytorch/pytorch)
* [Transformers](https://github.com/huggingface/transformers)
* [Flash-Attention](https://github.com/HazyResearch/flash-attention)
## Hardware
Note that by default, the Phi-3.5-mini-instruct model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:
* NVIDIA A100
* NVIDIA A6000
* NVIDIA H100
If you want to run the model on:
* NVIDIA V100 or earlier generation GPUs: call `AutoModelForCausalLM.from_pretrained()` with `attn_implementation="eager"`
## License
The model is licensed under the [MIT license](./LICENSE).
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies.
## Appendix A
#### MGSM
| Languages | Phi-3.5-Mini-Instruct | Phi-3.0-Mini-128k-Instruct (June2024) | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) |
|-----------|------------------------|---------------------------------------|--------------------------|---------------------------|------------------|----------------|------------------|-------------------------------|
| German | 69.6 | 65.2 | 42.4 | 74.4 | 68.4 | 76.8 | 81.6 | 82.8 |
| English | 85.2 | 83.2 | 60.0 | 86.0 | 81.2 | 88.8 | 90.8 | 90.8 |
| Spanish | 79.2 | 77.6 | 46.4 | 75.6 | 66.4 | 82.4 | 84.8 | 86.8 |
| French | 71.6 | 72.8 | 47.2 | 70.4 | 66.8 | 74.4 | 77.2 | 81.6 |
| Japanese | 50.0 | 35.2 | 22.8 | 62.4 | 49.2 | 67.6 | 77.6 | 80.4 |
| Russian | 67.2 | 51.6 | 43.2 | 73.6 | 67.2 | 78.4 | 84.8 | 86.4 |
| Thai | 29.6 | 6.4 | 18.4 | 53.2 | 56.0 | 76.8 | 87.6 | 81.6 |
| Chinese | 60.0 | 52.8 | 42.4 | 66.4 | 68.0 | 72.8 | 82.0 | 82.0 |
#### Multilingual MMLU-pro
| Languages | Phi-3.5-Mini-Instruct | Phi-3.0-Mini-128k-Instruct (June2024) | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) |
|------------|-----------------------|---------------------------------------|--------------------------|---------------------------|------------------|----------------|------------------|-------------------------------|
| Czech | 24.9 | 26.3 | 14.6 | 30.6 | 23.0 | 40.5 | 59.0 | 40.9 |
| English | 47.7 | 46.2 | 17.7 | 39.8 | 43.1 | 49.0 | 66.1 | 62.7 |
| Finnish | 22.3 | 20.5 | 11.5 | 30.4 | 9.7 | 37.5 | 54.5 | 50.1 |
| Norwegian | 29.9 | 27.8 | 14.4 | 33.2 | 22.2 | 44.4 | 60.7 | 59.1 |
| Polish | 25.7 | 26.4 | 16.3 | 33.6 | 9.2 | 41.7 | 53.9 | 42.8 |
| Portuguese | 38.7 | 37.6 | 15.3 | 36.0 | 29.3 | 43.5 | 54.0 | 56.9 |
| Swedish | 30.7 | 28.1 | 15.5 | 34.3 | 16.9 | 42.6 | 57.7 | 55.5 |
#### MEGA
##### MLQA
| Languages | Phi-3.5-Mini-Instruct | Phi-3.0-Mini-128k-Instruct (June2024) | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) |
|-----------|-----------------------|---------------------------------------|--------------------------|---------------------------|------------------|----------------|------------------|-------------------------------|
| Arabic | 54.3 | 32.7 | 23.5 | 31.4 | 31.5 | 57.4 | 63.8 | 64.0 |
| Chinese | 36.1 | 31.8 | 22.4 | 27.4 | 18.6 | 45.4 | 38.1 | 38.9 |
| English | 80.3 | 78.9 | 68.2 | 75.5 | 67.2 | 82.9 | 69.5 | 82.2 |
| German | 61.8 | 59.1 | 49.0 | 57.8 | 38.9 | 63.8 | 55.9 | 64.1 |
| Spanish | 68.8 | 67.0 | 50.3 | 63.6 | 52.7 | 72.8 | 59.6 | 70.1 |
##### TyDi QA
| Languages | Phi-3.5-Mini-Instruct | Phi-3.0-Mini-128k-Instruct (June2024) | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) |
|-----------|-----------------------|---------------------------------------|--------------------------|---------------------------|------------------|----------------|------------------|-------------------------------|
| Arabic | 69.7 | 54.4 | 52.5 | 49.8 | 33.7 | 81.1 | 78.8 | 84.9 |
| English | 82.0 | 82.0 | 60.5 | 77.3 | 65.1 | 82.4 | 60.9 | 81.8 |
| Finnish | 70.3 | 64.3 | 68.6 | 57.1 | 74.4 | 85.7 | 73.5 | 84.8 |
| Japanese | 65.4 | 56.7 | 45.3 | 54.8 | 34.1 | 74.6 | 59.7 | 73.3 |
| Korean | 74.0 | 60.4 | 54.5 | 54.2 | 54.9 | 83.8 | 60.7 | 82.3 |
| Russian | 63.5 | 62.7 | 52.3 | 55.7 | 27.4 | 69.8 | 60.1 | 72.5 |
| Thai | 64.4 | 49.0 | 51.8 | 43.5 | 48.5 | 81.4 | 71.6 | 78.2 |
##### XCOPA
| Languages | Phi-3.5-Mini-Instruct | Phi-3.0-Mini-128k-Instruct (June2024) | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) |
|-----------|-----------------------|---------------------------------------|--------------------------|---------------------------|------------------|----------------|------------------|-------------------------------|
| English | 94.6 | 94.6 | 85.6 | 94.4 | 37.6 | 63.8 | 92.0 | 98.2 |
| Italian | 86.8 | 84.8 | 76.8 | 83.2 | 16.2 | 37.2 | 85.6 | 97.6 |
| Turkish | 58.6 | 57.2 | 61.6 | 56.6 | 38.4 | 60.2 | 91.4 | 94.6 | |
KoboldAI/LLaMA2-13B-Tiefighter-GGUF | KoboldAI | "2023-10-19T16:59:39Z" | 73,672 | 83 | null | [
"gguf",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | "2023-10-18T22:47:57Z" | ---
license: llama2
---
This is the GGUF version of the model meant for use in [KoboldCpp](https://koboldai.org/cpp), check the [Float16](https://huggingface.co/KoboldAI/LLaMA2-13B-Tiefighter) version for the original.
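KoboldCpp is the intended runtime, but the GGUF files also load in other llama.cpp-based tools; a minimal sketch with `llama-cpp-python` (the filename is a placeholder for whichever quantization you downloaded) would be:
```python
# Hedged sketch: loading a downloaded GGUF quantization with llama-cpp-python.
# The model_path is a placeholder; point it at the file you actually downloaded.
from llama_cpp import Llama

llm = Llama(model_path="./LLaMA2-13B-Tiefighter.Q4_K_M.gguf", n_ctx=4096)
output = llm(
    "### Instruction:\nWrite a short scene about a starship pilot.\n### Response:\n",
    max_tokens=256,
)
print(output["choices"][0]["text"])
```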
# LLaMA2-13B-Tiefighter
Tiefighter is a merged model created by merging two different LoRAs on top of a well-established existing merge.
To achieve this the following recipe was used:
* We begin with the base model Undi95/Xwin-MLewd-13B-V0.2, which is a well-established merge; contrary to the name, this model does not have a strong NSFW bias.
* Then we applied the PocketDoc/Dans-RetroRodeo-13b LoRA, which is a finetune on the Choose Your Own Adventure datasets from our Skein model.
* After applying this LoRA, we merged the new model with PocketDoc/Dans-RetroRodeo-13b at 5% to weaken the newly introduced adventure bias.
* The resulting merge was used as a new base model, to which we applied Blackroot/Llama-2-13B-Storywriter-LORA and repeated the same trick, this time at 10%.
This means this model contains the following ingredients from their upstream models for as far as we can track them:
- Undi95/Xwin-MLewd-13B-V0.2
- - Undi95/ReMM-S-Light
- Undi95/CreativeEngine
- Brouz/Slerpeno
- - elinas/chronos-13b-v2
- jondurbin/airoboros-l2-13b-2.1
- NousResearch/Nous-Hermes-Llama2-13b+nRuaif/Kimiko-v2
- CalderaAI/13B-Legerdemain-L2+lemonilia/limarp-llama2-v2
- - KoboldAI/LLAMA2-13B-Holodeck-1
- NousResearch/Nous-Hermes-13b
- OpenAssistant/llama2-13b-orca-8k-3319
- ehartford/WizardLM-1.0-Uncensored-Llama2-13b
- Henk717/spring-dragon
- The-Face-Of-Goonery/Huginn-v3-13b (Contains undisclosed model versions, those we assumed where possible)
- - SuperCOT (Undisclosed version)
- elinas/chronos-13b-v2 (Version assumed)
- NousResearch/Nous-Hermes-Llama2-13b
- stabilityai/StableBeluga-13B (Version assumed)
- zattio770/120-Days-of-LORA-v2-13B
- PygmalionAI/pygmalion-2-13b
- Undi95/Storytelling-v1-13B-lora
- TokenBender/sakhi_13B_roleplayer_NSFW_chat_adapter
- nRuaif/Kimiko-v2-13B
- The-Face-Of-Goonery/Huginn-13b-FP16
- - "a lot of different models, like hermes, beluga, airoboros, chronos.. limarp"
- lemonilia/LimaRP-Llama2-13B-v3-EXPERIMENT
- Xwin-LM/Xwin-LM-13B-V0.2
- PocketDoc/Dans-RetroRodeo-13b
- Blackroot/Llama-2-13B-Storywriter-LORA
While we may not have credited every single LoRA or model involved in this merge, we'd like to thank all upstream creators for making this awesome model possible!
Thanks to you the AI ecosystem is thriving, and without your dedicated tuning efforts models such as this one would not be possible.
# Usage
This model is meant to be creative. If you let it improvise, you get better results than if you drown it in details.
## Story Writing
Regular story writing in the traditional way is supported, simply copy paste your story and continue writing. Optionally use an instruction in memory or an authors note to guide the direction of your story.
### Generate a story on demand
To generate stories on demand you can use an instruction (tested in the Alpaca format) such as "Write a novel about X, use chapters and dialogue"; this will generate a story. The format can vary between generations depending on how the model chooses to begin. Either write what you want as shown in the earlier example, or write the beginning of the story yourself so the model can follow your style. A few retries can also help if the model gets it wrong.
## Chatbots and personas
This model has been tested with various forms of chatting; testers have found that typically less is more and the model is good at improvising. Don't drown the model in paragraphs of detailed information; instead, keep it simple first and see how far you can lean on the model's own ability to figure out your character. Copy-pasting paragraphs of background information is not suitable for a 13B model such as this one; code-formatted characters or an instruction prompt describing who you wish to talk to goes much further.
For example, you can put this in memory in regular chat mode:
```
### Instruction:
Generate a conversation between Alice and Henk where they discuss language models.
In this conversation Henk is excited to teach Alice about Tiefighter.
### Response:
```
Because the model is a merge of a variety of models, it should support a broad range of instruct formats, as well as plain chat mode. If you have a particular favourite, try it; otherwise we recommend either the regular chat mode or Alpaca's format.
## Instruct Prompting
This model incorporates various instruct models covering a variety of instruction styles; when testing the model we used the Alpaca format for our own tests. If you prefer a different format, chances are it can work.
During instruct use we have observed that in some cases the adventure data can leak; it may be worth experimenting with > as the prefix for a user command to remedy this, though this may result in a stronger fiction bias.
Keep in mind that while this model can be used as a factual instruct model, the focus was on fiction. Information provided by the model can be made up.
## Adventuring and Adventure Games
This model contains a LoRA that was trained on the same adventure dataset as the KoboldAI Skein model. Adventuring is best done using a small introduction to the world and your objective, while using the > prefix for user commands (KoboldAI's adventure mode).
It is possible that the model does not immediately pick up on what you wish to do and does not engage in its Adventure mode behaviour right away. Simply manually correct the output to trim excess dialogue or other undesirable behaviour and continue to submit your actions using the appropriate mode. The model should pick up on this style quickly and will correctly follow this format within 3 turns.
## Discovered something cool and want to engage with us?
Join our community at https://koboldai.org/discord ! |