Dataset schema (column → type, observed range):

| column | type | range |
| --- | --- | --- |
| modelId | string | length 4–122 |
| author | string | length 2–42 |
| last_modified | unknown | — |
| downloads | int64 | 0–392M |
| likes | int64 | 0–6.56k |
| library_name | string | 368 classes |
| tags | sequence | length 1–4.05k |
| pipeline_tag | string | 51 classes |
| createdAt | unknown | — |
| card | string | length 1–1M |
Qwen/Qwen1.5-0.5B
Qwen
"2024-04-05T10:38:41Z"
391,680
143
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "pretrained", "conversational", "en", "arxiv:2309.16609", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-01-22T16:30:10Z"
--- license: other license_name: tongyi-qianwen-research license_link: >- https://huggingface.co/Qwen/Qwen1.5-0.5B/blob/main/LICENSE language: - en pipeline_tag: text-generation tags: - pretrained --- # Qwen1.5-0.5B ## Introduction Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. Compared with the previously released Qwen, the improvements include: * 8 model sizes, including 0.5B, 1.8B, 4B, 7B, 14B, 32B and 72B dense models, and an MoE model of 14B with 2.7B activated; * Significant performance improvement in Chat models; * Multilingual support of both base and chat models; * Stable support of 32K context length for models of all sizes; * No need for `trust_remote_code`. For more details, please refer to our [blog post](https://qwenlm.github.io/blog/qwen1.5/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5). ## Model Details Qwen1.5 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, a mixture of sliding window attention and full attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and code. For the beta version, we have temporarily excluded GQA and the mixture of SWA and full attention. ## Requirements The code for Qwen1.5 has been merged into the latest Hugging Face transformers, and we advise you to install `transformers>=4.37.0`; otherwise you might encounter the following error: ``` KeyError: 'qwen2' ``` ## Usage We do not advise using base language models for text generation directly. Instead, apply post-training, e.g., SFT, RLHF, or continued pretraining, to this model (a minimal loading sketch appears at the end of this card). ## Citation If you find our work helpful, feel free to cite us. ``` @article{qwen, title={Qwen Technical Report}, author={Jinze Bai and Shuai Bai and Yunfei Chu and Zeyu Cui and Kai Dang and Xiaodong Deng and Yang Fan and Wenbin Ge and Yu Han and Fei Huang and Binyuan Hui and Luo Ji and Mei Li and Junyang Lin and Runji Lin and Dayiheng Liu and Gao Liu and Chengqiang Lu and Keming Lu and Jianxin Ma and Rui Men and Xingzhang Ren and Xuancheng Ren and Chuanqi Tan and Sinan Tan and Jianhong Tu and Peng Wang and Shijie Wang and Wei Wang and Shengguang Wu and Benfeng Xu and Jin Xu and An Yang and Hao Yang and Jian Yang and Shusheng Yang and Yang Yao and Bowen Yu and Hongyi Yuan and Zheng Yuan and Jianwei Zhang and Xingxuan Zhang and Yichang Zhang and Zhenru Zhang and Chang Zhou and Jingren Zhou and Xiaohuan Zhou and Tianhang Zhu}, journal={arXiv preprint arXiv:2309.16609}, year={2023} } ```
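For reference, a minimal loading sketch (not part of the original card): it assumes `transformers>=4.37.0` per the Requirements section above, and `accelerate` for `device_map="auto"`.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# transformers>=4.37.0 is required; older versions raise KeyError: 'qwen2'
model_id = "Qwen/Qwen1.5-0.5B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# The base model is intended as a starting point for post-training
# (SFT, RLHF, continued pretraining), not for direct text generation.
```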
google/paligemma-3b-mix-224
google
"2024-07-19T12:09:50Z"
390,935
60
transformers
[ "transformers", "safetensors", "paligemma", "image-text-to-text", "arxiv:2310.09199", "arxiv:2303.15343", "arxiv:2403.08295", "arxiv:1706.03762", "arxiv:2010.11929", "arxiv:2209.06794", "arxiv:2209.04372", "arxiv:2103.01913", "arxiv:2205.12522", "arxiv:2110.11624", "arxiv:2108.03353", "arxiv:2010.04295", "arxiv:2401.06209", "arxiv:2305.10355", "arxiv:2203.10244", "arxiv:1810.12440", "arxiv:1905.13648", "arxiv:1608.00272", "arxiv:1908.04913", "arxiv:2407.07726", "license:gemma", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-text-to-text
"2024-05-12T23:03:44Z"
--- library_name: transformers license: gemma pipeline_tag: image-text-to-text extra_gated_heading: Access PaliGemma on Hugging Face extra_gated_prompt: To access PaliGemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging Face and click below. Requests are processed immediately. extra_gated_button_content: Acknowledge license --- # PaliGemma model card **Model page:** [PaliGemma](https://ai.google.dev/gemma/docs/paligemma) Transformers PaliGemma 3B weights, fine-tuned with 224*224 input images and 256 token input/output text sequences on a mixture of downstream academic datasets. The models are available in float32, bfloat16 and float16 format for research purposes only. **Resources and technical documentation:** * [Responsible Generative AI Toolkit](https://ai.google.dev/responsible) * [PaliGemma on Kaggle](https://www.kaggle.com/models/google/paligemma) * [PaliGemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/363) **Terms of Use:** [Terms](https://www.kaggle.com/models/google/paligemma/license/consent/verify/huggingface?returnModelRepoId=google/paligemma-3b-mix-224) **Authors:** Google ## Model information ### Model summary #### Description PaliGemma is a versatile and lightweight vision-language model (VLM) inspired by [PaLI-3](https://arxiv.org/abs/2310.09199) and based on open components such as the [SigLIP vision model](https://arxiv.org/abs/2303.15343) and the [Gemma language model](https://arxiv.org/abs/2403.08295). It takes both image and text as input and generates text as output, supporting multiple languages. It is designed for class-leading fine-tune performance on a wide range of vision-language tasks such as image and short video caption, visual question answering, text reading, object detection and object segmentation. #### Model architecture PaliGemma is the composition of a [Transformer decoder](https://arxiv.org/abs/1706.03762) and a [Vision Transformer image encoder](https://arxiv.org/abs/2010.11929), with a total of 3 billion params. The text decoder is initialized from [Gemma-2B](https://www.kaggle.com/models/google/gemma). The image encoder is initialized from [SigLIP-So400m/14](https://colab.research.google.com/github/google-research/big_vision/blob/main/big_vision/configs/proj/image_text/SigLIP_demo.ipynb). PaliGemma is trained following the PaLI-3 recipes. #### Inputs and outputs * **Input:** Image and text string, such as a prompt to caption the image, or a question. * **Output:** Generated text in response to the input, such as a caption of the image, an answer to a question, a list of object bounding box coordinates, or segmentation codewords. ### Model data #### Pre-train datasets PaliGemma is pre-trained on the following mixture of datasets: * **WebLI:** [WebLI (Web Language Image)](https://arxiv.org/abs/2209.06794) is a web-scale multilingual image-text dataset built from the public web. A wide range of WebLI splits are used to acquire versatile model capabilities, such as visual semantic understanding, object localization, visually-situated text understanding, multilinguality, etc. * **CC3M-35L:** Curated English image-alt_text pairs from webpages ([Sharma et al., 2018](https://aclanthology.org/P18-1238/)). We used the [Google Cloud Translation API](https://cloud.google.com/translate) to translate into 34 additional languages. 
* **VQ²A-CC3M-35L/VQG-CC3M-35L:** A subset of VQ2A-CC3M ([Changpinyo et al., 2022a](https://aclanthology.org/2022.naacl-main.142/)), translated into the same additional 34 languages as CC3M-35L, using the [Google Cloud Translation API](https://cloud.google.com/translate). * **OpenImages:** Detection and object-aware questions and answers ([Piergiovanni et al. 2022](https://arxiv.org/abs/2209.04372)) generated by handcrafted rules on the [OpenImages dataset]. * **WIT:** Images and texts collected from Wikipedia ([Srinivasan et al., 2021](https://arxiv.org/abs/2103.01913)). [OpenImages dataset]: https://storage.googleapis.com/openimages/web/factsfigures_v7.html #### Data responsibility filtering The following filters are applied to WebLI, with the goal of training PaliGemma on clean data: * **Pornographic image filtering:** This filter removes images deemed to be of a pornographic nature. * **Text safety filtering:** We identify and filter out images that are paired with unsafe text. Unsafe text is any text deemed to contain or be about CSAI, pornography, or vulgarities, or to be otherwise offensive. * **Text toxicity filtering:** We further use the [Perspective API](https://perspectiveapi.com/) to identify and filter out images that are paired with text deemed insulting, obscene, hateful or otherwise toxic. * **Text personal information filtering:** We filtered certain personal information and other sensitive data using the [Cloud Data Loss Prevention (DLP) API](https://cloud.google.com/security/products/dlp) to protect the privacy of individuals. Identifiers such as social security numbers and [other sensitive information types] were removed. * **Additional methods:** Filtering based on content quality and safety in line with our policies and practices. [other sensitive information types]: https://cloud.google.com/sensitive-data-protection/docs/high-sensitivity-infotypes-reference?_gl=1*jg604m*_ga*ODk5MzA3ODQyLjE3MTAzMzQ3NTk.*_ga_WH2QY8WWF5*MTcxMDUxNTkxMS4yLjEuMTcxMDUxNjA2NC4wLjAuMA..&_ga=2.172110058.-899307842.1710334759 ## How to Use PaliGemma is a single-turn vision language model not meant for conversational use, and it works best when fine-tuned to a specific use case. You can configure which task the model will solve by conditioning it with task prefixes, such as “detect” or “segment”. The pretrained models were trained in this fashion to imbue them with a rich set of capabilities (question answering, captioning, segmentation, etc.). However, they are not designed to be used directly, but to be transferred (by fine-tuning) to specific tasks using a similar prompt structure. For interactive testing, you can use the "mix" family of models, which have been fine-tuned on a mixture of tasks. To see model [google/paligemma-3b-mix-448](https://huggingface.co/google/paligemma-3b-mix-448) in action, check [this Space that uses the Transformers codebase](https://huggingface.co/spaces/big-vision/paligemma-hf). Please refer to the [usage and limitations section](#usage-and-limitations) for intended use cases, or visit the [blog post](https://huggingface.co/blog/paligemma-google-vlm) for additional details and examples. ## Use in Transformers The following snippets use model `google/paligemma-3b-mix-224` for reference purposes. The model in the repo you are now browsing may have been trained for other tasks; please make sure you use inputs appropriate for the task at hand. 
### Running the default precision (`float32`) on CPU ```python from transformers import AutoProcessor, PaliGemmaForConditionalGeneration from PIL import Image import requests import torch model_id = "google/paligemma-3b-mix-224" url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true" image = Image.open(requests.get(url, stream=True).raw) model = PaliGemmaForConditionalGeneration.from_pretrained(model_id).eval() processor = AutoProcessor.from_pretrained(model_id) # Instruct the model to create a caption in Spanish prompt = "caption es" model_inputs = processor(text=prompt, images=image, return_tensors="pt") input_len = model_inputs["input_ids"].shape[-1] with torch.inference_mode(): generation = model.generate(**model_inputs, max_new_tokens=100, do_sample=False) generation = generation[0][input_len:] decoded = processor.decode(generation, skip_special_tokens=True) print(decoded) ``` Output: `Un auto azul estacionado frente a un edificio.` ### Running other precisions on CUDA For convenience, the repos contain revisions of the weights already converted to `bfloat16` and `float16`, so you can use them to reduce the download size and avoid casting on your local computer. This is how you'd run `bfloat16` on an NVIDIA CUDA card. ```python from transformers import AutoProcessor, PaliGemmaForConditionalGeneration from PIL import Image import requests import torch model_id = "google/paligemma-3b-mix-224" device = "cuda:0" dtype = torch.bfloat16 url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true" image = Image.open(requests.get(url, stream=True).raw) model = PaliGemmaForConditionalGeneration.from_pretrained( model_id, torch_dtype=dtype, device_map=device, revision="bfloat16", ).eval() processor = AutoProcessor.from_pretrained(model_id) # Instruct the model to create a caption in Spanish prompt = "caption es" model_inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device) input_len = model_inputs["input_ids"].shape[-1] with torch.inference_mode(): generation = model.generate(**model_inputs, max_new_tokens=100, do_sample=False) generation = generation[0][input_len:] decoded = processor.decode(generation, skip_special_tokens=True) print(decoded) ``` ### Loading in 4-bit / 8-bit You need to install `bitsandbytes` to automatically run inference using 8-bit or 4-bit precision: ``` pip install bitsandbytes accelerate ``` ```python from transformers import AutoProcessor, BitsAndBytesConfig, PaliGemmaForConditionalGeneration from PIL import Image import requests import torch model_id = "google/paligemma-3b-mix-224" url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true" image = Image.open(requests.get(url, stream=True).raw) quantization_config = BitsAndBytesConfig(load_in_8bit=True) model = PaliGemmaForConditionalGeneration.from_pretrained( model_id, quantization_config=quantization_config ).eval() processor = AutoProcessor.from_pretrained(model_id) # Instruct the model to create a caption in Spanish prompt = "caption es" model_inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device) input_len = model_inputs["input_ids"].shape[-1] with torch.inference_mode(): generation = model.generate(**model_inputs, max_new_tokens=100, do_sample=False) generation = generation[0][input_len:] decoded = processor.decode(generation, 
skip_special_tokens=True) print(decoded) ``` ## Implementation information ### Hardware PaliGemma was trained using the latest generation of Tensor Processing Unit (TPU) hardware (TPUv5e). ### Software Training was done using [JAX](https://github.com/google/jax), [Flax](https://github.com/google/flax), [TFDS](https://github.com/tensorflow/datasets) and [`big_vision`](https://github.com/google-research/big_vision). JAX allows researchers to take advantage of the latest generation of hardware, including TPUs, for faster and more efficient training of large models. TFDS is used to access datasets and Flax is used for model architecture. The PaliGemma fine-tune code and inference code are released in the `big_vision` GitHub repository. ## Evaluation information ### Benchmark results In order to verify the transferability of PaliGemma to a wide variety of academic tasks, we fine-tune the pretrained models on each task. Additionally we train the mix model with a mixture of the transfer tasks. We report results on different resolutions to provide an impression of which tasks benefit from increased resolution. Importantly, none of these tasks or datasets are part of the pretraining data mixture, and their images are explicitly removed from the web-scale pre-training data. #### Single task (fine-tune on single task) <table> <tbody><tr> <th>Benchmark<br>(train split)</th> <th>Metric<br>(split)</th> <th>pt-224</th> <th>pt-448</th> <th>pt-896</th> </tr> <tr> <th>Captioning</th> </tr> <tr> <td> <a href="https://cocodataset.org/#home">COCO captions</a><br>(train+restval) </td> <td>CIDEr (val)</td> <td>141.92</td> <td>144.60</td> </tr> <tr> <td> <a href="https://nocaps.org/">NoCaps</a><br>(Eval of COCO<br>captions transfer) </td> <td>CIDEr (val)</td> <td>121.72</td> <td>123.58</td> </tr> <tr> <td> <a href="https://arxiv.org/pdf/2205.12522">COCO-35L</a><br>(train) </td> <td>CIDEr dev<br>(en/avg-34/avg)</td> <td> 139.2<br> 115.8<br> 116.4 </td> <td> 141.2<br> 118.0<br> 118.6 </td> </tr> <tr> <td> <a href="https://arxiv.org/pdf/2205.12522">XM3600</a><br>(Eval of COCO-35L transfer) </td> <td>CIDEr dev<br>(en/avg-34/avg)</td> <td> 78.1<br> 41.3<br> 42.4 </td> <td> 80.0<br> 41.9<br> 42.9 </td> </tr> <tr> <td> <a href="https://textvqa.org/textcaps/">TextCaps</a><br>(train) </td> <td>CIDEr (val)</td> <td>127.48</td> <td>153.94</td> </tr> <tr> <td> <a href="https://arxiv.org/abs/2110.11624">SciCap</a><br>(first sentence, no subfigure)<br>(train+val) </td> <td>CIDEr/BLEU-4<br>(test)</td> <td> 162.25<br> 0.192<br> </td> <td> 181.49<br> 0.211<br> </td> </tr> <tr> <td> <a href="https://arxiv.org/abs/2108.03353">Screen2words</a><br>(train+dev) </td> <td>CIDEr (test)</td> <td>117.57</td> <td>119.59</td> </tr> <tr> <td> <a href="https://arxiv.org/abs/2010.04295">Widget Captioning</a><br>(train+dev) </td> <td>CIDEr (test)</td> <td>136.07</td> <td>148.36</td> </tr> <tr> <th>Question answering</th> </tr> <tr> <td> <a href="https://visualqa.org/index.html">VQAv2</a><br>(train+validation) </td> <td>Accuracy<br>(Test server - std)</td> <td>83.19</td> <td>85.64</td> </tr> <tr> <td> <a href="https://arxiv.org/abs/2401.06209">MMVP</a><br>(Eval of VQAv2 transfer) </td> <td>Paired Accuracy</td> <td>47.33</td> <td>45.33</td> </tr> <tr> <td> <a href="https://arxiv.org/abs/2305.10355">POPE</a><br>(Eval of VQAv2 transfer) </td> <td>Accuracy<br>(random/popular/<br>adversarial)</td> <td> 87.80<br> 85.87<br> 84.27 </td> <td> 88.23<br> 86.77<br> 85.90 </td> </tr> <tr> <td> <a href="https://okvqa.allenai.org/">OKVQA</a><br>(train) </td> 
<td>Accuracy (val)</td> <td>63.54</td> <td>63.15</td> </tr> <tr> <td> <a href="https://allenai.org/project/a-okvqa/home">A-OKVQA</a> (MC)<br>(train+val) </td> <td>Accuracy<br>(Test server)</td> <td>76.37</td> <td>76.90</td> </tr> <tr> <td> <a href="https://allenai.org/project/a-okvqa/home">A-OKVQA</a> (DA)<br>(train+val) </td> <td>Accuracy<br>(Test server)</td> <td>61.85</td> <td>63.22</td> </tr> <tr> <td> <a href="https://cs.stanford.edu/people/dorarad/gqa/about.html">GQA</a><br>(train_balanced+<br>val_balanced) </td> <td>Accuracy<br>(testdev balanced)</td> <td>65.61</td> <td>67.03</td> </tr> <tr> <td> <a href="https://aclanthology.org/2022.findings-acl.196/">xGQA</a><br>(Eval of GQA transfer) </td> <td>Mean Accuracy<br>(bn, de, en, id,<br>ko, pt, ru, zh)</td> <td>58.37</td> <td>59.07</td> </tr> <tr> <td> <a href="https://lil.nlp.cornell.edu/nlvr/">NLVR2</a><br>(train+dev) </td> <td>Accuracy (test)</td> <td>90.02</td> <td>88.93</td> </tr> <tr> <td> <a href="https://marvl-challenge.github.io/">MaRVL</a><br>(Eval of NLVR2 transfer) </td> <td>Mean Accuracy<br>(test)<br>(id, sw, ta, tr, zh)</td> <td>80.57</td> <td>76.78</td> </tr> <tr> <td> <a href="https://allenai.org/data/diagrams">AI2D</a><br>(train) </td> <td>Accuracy (test)</td> <td>72.12</td> <td>73.28</td> </tr> <tr> <td> <a href="https://scienceqa.github.io/">ScienceQA</a><br>(Img subset, no CoT)<br>(train+val) </td> <td>Accuracy (test)</td> <td>95.39</td> <td>95.93</td> </tr> <tr> <td> <a href="https://zenodo.org/records/6344334">RSVQA-LR</a> (Non numeric)<br>(train+val) </td> <td>Mean Accuracy<br>(test)</td> <td>92.65</td> <td>93.11</td> </tr> <tr> <td> <a href="https://zenodo.org/records/6344367">RSVQA-HR</a> (Non numeric)<br>(train+val) </td> <td>Mean Accuracy<br>(test/test2)</td> <td> 92.61<br> 90.58 </td> <td> 92.79<br> 90.54 </td> </tr> <tr> <td> <a href="https://arxiv.org/abs/2203.10244">ChartQA</a><br>(human+aug)x(train+val) </td> <td>Mean Relaxed<br>Accuracy<br>(test_human,<br>test_aug)</td> <td>57.08</td> <td>71.36</td> </tr> <tr> <td> <a href="https://vizwiz.org/tasks-and-datasets/vqa/">VizWiz VQA</a><br>(train+val) </td> <td>Accuracy<br>(Test server - std)</td> <td> 73.7 </td> <td> 75.52 </td> </tr> <tr> <td> <a href="https://arxiv.org/abs/1810.12440">TallyQA</a><br>(train) </td> <td>Accuracy<br>(test_simple/<br>test_complex)</td> <td> 81.72<br> 69.56 </td> <td> 84.86<br> 72.27 </td> </tr> <tr> <td> <a href="https://ocr-vqa.github.io/">OCR-VQA</a><br>(train+val) </td> <td>Accuracy (test)</td> <td>72.32</td> <td>74.61</td> <td>74.93</td> </tr> <tr> <td> <a href="https://textvqa.org/">TextVQA</a><br>(train+val) </td> <td>Accuracy<br>(Test server - std)</td> <td>55.47</td> <td>73.15</td> <td>76.48</td> </tr> <tr> <td> <a href="https://www.docvqa.org/">DocVQA</a><br>(train+val) </td> <td>ANLS (Test server)</td> <td>43.74</td> <td>78.02</td> <td>84.77</td> </tr> <tr> <td> <a href="https://openaccess.thecvf.com/content/WACV2022/papers/Mathew_InfographicVQA_WACV_2022_paper.pdf">Infographic VQA</a><br>(train+val) </td> <td>ANLS (Test server)</td> <td>28.46</td> <td>40.47</td> <td>47.75</td> </tr> <tr> <td> <a href="https://arxiv.org/abs/1905.13648">SceneText VQA</a><br>(train+val) </td> <td>ANLS (Test server)</td> <td>63.29</td> <td>81.82</td> <td>84.40</td> </tr> <tr> <th>Segmentation</th> </tr> <tr> <td> <a href="https://arxiv.org/abs/1608.00272">RefCOCO</a><br>(combined refcoco, refcoco+,<br>refcocog excluding val<br>and test images) </td> <td>MIoU<br>(validation)<br>refcoco/refcoco+/<br>refcocog</td> <td> 73.40<br> 68.32<br> 
67.65 </td> <td> 75.57<br> 69.76<br> 70.17 </td> <td> 76.94<br> 72.18<br> 72.22 </td> </tr> <tr> <th>Video tasks (Caption/QA)</th> </tr> <tr> <td>MSR-VTT (Captioning)</td> <td>CIDEr (test)</td> <td>70.54</td> </tr> <tr> <td>MSR-VTT (QA)</td> <td>Accuracy (test)</td> <td>50.09</td> </tr> <tr> <td>ActivityNet (Captioning)</td> <td>CIDEr (test)</td> <td>34.62</td> </tr> <tr> <td>ActivityNet (QA)</td> <td>Accuracy (test)</td> <td>50.78</td> </tr> <tr> <td>VATEX (Captioning)</td> <td>CIDEr (test)</td> <td>79.73</td> </tr> <tr> <td>MSVD (QA)</td> <td>Accuracy (test)</td> <td>60.22</td> </tr> </tbody></table> #### Mix model (fine-tune on mixture of transfer tasks) <table> <tbody><tr> <th>Benchmark</th> <th>Metric (split)</th> <th>mix-224</th> <th>mix-448</th> </tr> <tr> <td><a href="https://arxiv.org/abs/2401.06209">MMVP</a></td> <td>Paired Accuracy</td> <td>46.00</td> <td>45.33</td> </tr> <tr> <td><a href="https://arxiv.org/abs/2305.10355">POPE</a></td> <td>Accuracy<br>(random/popular/adversarial)</td> <td> 88.00<br> 86.63<br> 85.67 </td> <td> 89.37<br> 88.40<br> 87.47 </td> </tr> </tbody></table> ## Ethics and safety ### Evaluation approach Our evaluation methods include structured evaluations and internal red-teaming testing of relevant content policies. Red-teaming was conducted by a number of different teams, each with different goals and human evaluation metrics. These models were evaluated against a number of different categories relevant to ethics and safety, including: * Human evaluation on prompts covering child safety, content safety and representational harms. See the [Gemma model card](https://ai.google.dev/gemma/docs/model_card#evaluation_approach) for more details on evaluation approach, but with image captioning and visual question answering setups. * Image-to-Text benchmark evaluation: Benchmark against relevant academic datasets such as FairFace Dataset ([Karkkainen et al., 2021](https://arxiv.org/abs/1908.04913)). ### Evaluation results * The human evaluation results of ethics and safety evaluations are within acceptable thresholds for meeting [internal policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11) for categories such as child safety, content safety and representational harms. * On top of robust internal evaluations, we also use the Perspective API (threshold of 0.8) to measure toxicity, profanity, and other potential issues in the generated captions for images sourced from the FairFace dataset. We report the maximum and median values observed across subgroups for each of the perceived gender, ethnicity, and age attributes. 
<table> <tbody><tr> </tr></tbody><tbody><tr><th>Metric</th> <th>Perceived<br>gender</th> <th></th> <th>Ethnicity</th> <th></th> <th>Age group</th> <th></th> </tr> <tr> <th></th> <th>Maximum</th> <th>Median</th> <th>Maximum</th> <th>Median</th> <th>Maximum</th> <th>Median</th> </tr> <tr> <td>Toxicity</td> <td>0.04%</td> <td>0.03%</td> <td>0.08%</td> <td>0.00%</td> <td>0.09%</td> <td>0.00%</td> </tr> <tr> <td>Identity Attack</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> </tr> <tr> <td>Insult</td> <td>0.06%</td> <td>0.04%</td> <td>0.09%</td> <td>0.07%</td> <td>0.16%</td> <td>0.00%</td> </tr> <tr> <td>Threat</td> <td>0.06%</td> <td>0.05%</td> <td>0.14%</td> <td>0.05%</td> <td>0.17%</td> <td>0.00%</td> </tr> <tr> <td>Profanity</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> </tr> </tbody></table> ## Usage and limitations ### Intended usage Open Vision Language Models (VLMs) have a wide range of applications across various industries and domains. The following list of potential uses is not comprehensive. The purpose of this list is to provide contextual information about the possible use-cases that the model creators considered as part of model training and development. Fine-tune on specific vision-language task: * The pre-trained models can be fine-tuned on a wide range of vision-language tasks such as: image captioning, short video caption, visual question answering, text reading, object detection and object segmentation. * The pre-trained models can be fine-tuned for specific domains such as remote sensing question answering, visual questions from people who are blind, science question answering, describe UI element functionalities. * The pre-trained models can be fine-tuned for tasks with non-textual outputs such as bounding boxes or segmentation masks. Vision-language research: * The pre-trained models and fine-tuned models can serve as a foundation for researchers to experiment with VLM techniques, develop algorithms, and contribute to the advancement of the field. ### Ethical considerations and risks The development of vision-language models (VLMs) raises several ethical concerns. In creating an open model, we have carefully considered the following: * Bias and Fairness * VLMs trained on large-scale, real-world image-text data can reflect socio-cultural biases embedded in the training material. These models underwent careful scrutiny, input data pre-processing described and posterior evaluations reported in this card. * Misinformation and Misuse * VLMs can be misused to generate text that is false, misleading, or harmful. * Guidelines are provided for responsible use with the model, see the [Responsible Generative AI Toolkit](https://ai.google.dev/responsible). * Transparency and Accountability * This model card summarizes details on the models' architecture, capabilities, limitations, and evaluation processes. * A responsibly developed open model offers the opportunity to share innovation by making VLM technology accessible to developers and researchers across the AI ecosystem. Risks identified and mitigations: * **Perpetuation of biases:** It's encouraged to perform continuous monitoring (using evaluation metrics, human review) and the exploration of de-biasing techniques during model training, fine-tuning, and other use cases. * **Generation of harmful content:** Mechanisms and guidelines for content safety are essential. 
Developers are encouraged to exercise caution and implement appropriate content safety safeguards based on their specific product policies and application use cases. * **Misuse for malicious purposes:** Technical limitations and developer and end-user education can help mitigate against malicious applications of LLMs. Educational resources and reporting mechanisms for users to flag misuse are provided. Prohibited uses of Gemma models are outlined in the [Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy). * **Privacy violations:** Models were trained on data filtered to remove certain personal information and sensitive data. Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques. ### Limitations * Most limitations inherited from the underlying Gemma model still apply: * VLMs are better at tasks that can be framed with clear prompts and instructions. Open-ended or highly complex tasks might be challenging. * Natural language is inherently complex. VLMs might struggle to grasp subtle nuances, sarcasm, or figurative language. * VLMs generate responses based on information they learned from their training datasets, but they are not knowledge bases. They may generate incorrect or outdated factual statements. * VLMs rely on statistical patterns in language and images. They might lack the ability to apply common sense reasoning in certain situations. * PaliGemma was designed first and foremost to serve as a general pre-trained model for transfer to specialized tasks. Hence, its "out of the box" or "zero-shot" performance might lag behind models designed specifically for that. * PaliGemma is not a multi-turn chatbot. It is designed for a single round of image and text input. ## Citation ```bibtex @article{beyer2024paligemma, title={{PaliGemma: A versatile 3B VLM for transfer}}, author={Lucas Beyer* and Andreas Steiner* and André Susano Pinto* and Alexander Kolesnikov* and Xiao Wang* and Daniel Salz and Maxim Neumann and Ibrahim Alabdulmohsin and Michael Tschannen and Emanuele Bugliarello and Thomas Unterthiner and Daniel Keysers and Skanda Koppula and Fangyu Liu and Adam Grycner and Alexey Gritsenko and Neil Houlsby and Manoj Kumar and Keran Rong and Julian Eisenschlos and Rishabh Kabra and Matthias Bauer and Matko Bošnjak and Xi Chen and Matthias Minderer and Paul Voigtlaender and Ioana Bica and Ivana Balazevic and Joan Puigcerver and Pinelopi Papalampidi and Olivier Henaff and Xi Xiong and Radu Soricut and Jeremiah Harmsen and Xiaohua Zhai*}, year={2024}, journal={arXiv preprint arXiv:2407.07726} } ``` Find the paper [here](https://arxiv.org/abs/2407.07726).
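Addendum (not part of the original card): the task-prefix protocol described in the How to Use section amounts to swapping the prompt string in the snippets above. The prefixes come from the card itself ("caption", "detect", "segment"); the object names and the VQA prompt format are illustrative assumptions.

```python
# Reuses `processor`, `model`, and `image` from the snippets above.
# Task prefixes condition the model; object names here are illustrative.
prompts = [
    "caption en",                         # English captioning
    "detect car",                         # detection -> bounding-box tokens
    "segment car",                        # segmentation -> codeword tokens
    "answer en what color is the car?",   # VQA (prompt format assumed)
]
for prompt in prompts:
    model_inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
    generation = model.generate(**model_inputs, max_new_tokens=100, do_sample=False)
    decoded = processor.decode(generation[0][model_inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
    print(prompt, "->", decoded)
```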
stabilityai/stable-diffusion-xl-refiner-1.0
stabilityai
"2023-09-25T13:42:56Z"
390,857
1,712
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "image-to-image", "arxiv:2307.01952", "arxiv:2211.01324", "arxiv:2108.01073", "arxiv:2112.10752", "license:openrail++", "diffusers:StableDiffusionXLImg2ImgPipeline", "region:us" ]
image-to-image
"2023-07-26T07:38:01Z"
--- license: openrail++ tags: - stable-diffusion - image-to-image --- # SD-XL 1.0-refiner Model Card ![row01](01.png) ## Model ![pipeline](pipeline.png) [SDXL](https://arxiv.org/abs/2307.01952) consists of an [ensemble of experts](https://arxiv.org/abs/2211.01324) pipeline for latent diffusion: In a first step, the base model (available here: https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0) is used to generate (noisy) latents, which are then further processed with a refinement model specialized for the final denoising steps. Note that the base model can be used as a standalone module. Alternatively, we can use a two-stage pipeline as follows: First, the base model is used to generate latents of the desired output size. In the second step, we use a specialized high-resolution model and apply a technique called SDEdit (https://arxiv.org/abs/2108.01073, also known as "img2img") to the latents generated in the first step, using the same prompt. This technique is slightly slower than the first one, as it requires more function evaluations. Source code is available at https://github.com/Stability-AI/generative-models. ### Model Description - **Developed by:** Stability AI - **Model type:** Diffusion-based text-to-image generative model - **License:** [CreativeML Open RAIL++-M License](https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/blob/main/LICENSE.md) - **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses two fixed, pretrained text encoders ([OpenCLIP-ViT/G](https://github.com/mlfoundations/open_clip) and [CLIP-ViT/L](https://github.com/openai/CLIP/tree/main)). - **Resources for more information:** Check out our [GitHub Repository](https://github.com/Stability-AI/generative-models) and the [SDXL report on arXiv](https://arxiv.org/abs/2307.01952). ### Model Sources For research purposes, we recommend our `generative-models` GitHub repository (https://github.com/Stability-AI/generative-models), which implements the most popular diffusion frameworks (both training and inference) and for which new functionalities like distillation will be added over time. [Clipdrop](https://clipdrop.co/stable-diffusion) provides free SDXL inference. - **Repository:** https://github.com/Stability-AI/generative-models - **Demo:** https://clipdrop.co/stable-diffusion ## Evaluation ![comparison](comparison.png) The chart above evaluates user preference for SDXL (with and without refinement) over SDXL 0.9 and Stable Diffusion 1.5 and 2.1. The SDXL base model performs significantly better than the previous variants, and the model combined with the refinement module achieves the best overall performance. ### 🧨 Diffusers Make sure to upgrade diffusers to >= 0.18.0: ``` pip install diffusers --upgrade ``` In addition, make sure to install `transformers`, `safetensors`, `accelerate` as well as the invisible watermark: ``` pip install invisible_watermark transformers accelerate safetensors ``` You can then use the refiner to improve images; a sketch of the full two-stage (base + refiner) workflow appears at the end of this card. 
```py import torch from diffusers import StableDiffusionXLImg2ImgPipeline from diffusers.utils import load_image pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained( "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True ) pipe = pipe.to("cuda") url = "https://huggingface.co/datasets/patrickvonplaten/images/resolve/main/aa_xl/000000009.png" init_image = load_image(url).convert("RGB") prompt = "a photo of an astronaut riding a horse on mars" image = pipe(prompt, image=init_image).images[0] ``` When using `torch >= 2.0`, you can improve the inference speed by 20-30% with `torch.compile`. Simply wrap the UNet with `torch.compile` before running the pipeline: ```py pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) ``` If you are limited by GPU VRAM, you can enable *CPU offloading* by calling `pipe.enable_model_cpu_offload` instead of `.to("cuda")`: ```diff - pipe.to("cuda") + pipe.enable_model_cpu_offload() ``` For more advanced use cases, please have a look at [the docs](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/stable_diffusion_xl). ## Uses ### Direct Use The model is intended for research purposes only. Possible research areas and tasks include - Generation of artworks and use in design and other artistic processes. - Applications in educational or creative tools. - Research on generative models. - Safe deployment of models which have the potential to generate harmful content. - Probing and understanding the limitations and biases of generative models. Excluded uses are described below. ### Out-of-Scope Use The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model. ## Limitations and Bias ### Limitations - The model does not achieve perfect photorealism - The model cannot render legible text - The model struggles with more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere” - Faces and people in general may not be generated properly. - The autoencoding part of the model is lossy. ### Bias While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
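The promised two-stage sketch (not part of the original card), following the diffusers SDXL documentation: the base model produces latents for the first part of the denoising schedule, and the refiner finishes them. The 0.8 split and the step count are illustrative choices.

```py
import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16, variant="fp16", use_safetensors=True,
).to("cuda")
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    text_encoder_2=base.text_encoder_2,  # share components to save memory
    vae=base.vae,
    torch_dtype=torch.float16, variant="fp16", use_safetensors=True,
).to("cuda")

prompt = "a photo of an astronaut riding a horse on mars"

# The base model handles the first 80% of the denoising steps and hands off
# noisy latents; the refiner, specialized for the final steps, completes them.
latents = base(prompt, num_inference_steps=40, denoising_end=0.8, output_type="latent").images
image = refiner(prompt, num_inference_steps=40, denoising_start=0.8, image=latents).images[0]
image.save("astronaut.png")
```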
RunDiffusion/Juggernaut-XL-v6
RunDiffusion
"2024-03-11T20:08:41Z"
389,938
3
diffusers
[ "diffusers", "art", "people", "diffusion", "Cinematic", "Photography", "Landscape", "Interior", "Food", "Car", "Wildlife", "Architecture", "en", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
"2024-02-22T00:14:34Z"
--- language: - en license: creativeml-openrail-m library_name: diffusers tags: - art - people - diffusion - Cinematic - Photography - Landscape - Interior - Food - Car - Wildlife - Architecture thumbnail: https://imagedelivery.net/siANnpeNAc_S2q1M3-eDrA/a38aa9e8-e3cf-4d43-afbd-fd1de0896500/padthumb base_model: stabilityai/stable-diffusion-xl-base-1.0 --- # Juggernaut XL v6 + RunDiffusion Photo v1 Official ![juggernaut XL photo previews](https://imagedelivery.net/siANnpeNAc_S2q1M3-eDrA/a38aa9e8-e3cf-4d43-afbd-fd1de0896500/public) ![RunDiffusion Logo](https://imagedelivery.net/siANnpeNAc_S2q1M3-eDrA/ca2b388d-a835-490c-dec0-e764bee8d000/micro) ## Juggernaut v9 is here! [Juggernaut v9 + RunDiffusion Photo v2](https://huggingface.co/RunDiffusion/Juggernaut-XL-v9) This model is not permitted to be used behind API services. Please contact [juggernaut@rundiffusion.com](mailto:juggernaut@rundiffusion.com) for business inquiries, commercial licensing, custom models, and consultation. Juggernaut is available on the new Auto1111 Forge on [RunDiffusion](http://rundiffusion.com/?utm_source=huggingface&utm_medium=referral&utm_campaign=Kandoo) A big thanks for Version 6 goes to [RunDiffusion](http://rundiffusion.com/?utm_source=huggingface&utm_medium=referral&utm_campaign=Kandoo) ([Photo Model](https://rundiffusion.com/rundiffusion-photo/?utm_source=huggingface&utm_medium=referral&utm_campaign=Kandoo)) and [Adam](https://twitter.com/Colorblind_Adam), who diligently helped me test :) (Leave some love for them ;) ) For business inquiries, commercial licensing, custom models, and consultation, contact me at juggernaut@rundiffusion.com
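The card ships no usage code; here is a minimal, unofficial sketch that assumes the repository loads with diffusers' `StableDiffusionXLPipeline`, as its tags indicate. The prompt and sampler settings are illustrative only.

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "RunDiffusion/Juggernaut-XL-v6", torch_dtype=torch.float16
).to("cuda")

# Illustrative prompt and settings; tune steps and guidance for your use case
image = pipe(
    "cinematic photo of a lighthouse at dusk, dramatic sky",
    num_inference_steps=30,
    guidance_scale=7.0,
).images[0]
image.save("juggernaut_sample.png")
```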
lmsys/vicuna-7b-v1.5
lmsys
"2024-03-13T02:01:41Z"
389,100
305
transformers
[ "transformers", "pytorch", "llama", "text-generation", "arxiv:2307.09288", "arxiv:2306.05685", "license:llama2", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-07-29T04:42:33Z"
--- inference: false license: llama2 --- # Vicuna Model Card ## Model Details Vicuna is a chat assistant trained by fine-tuning Llama 2 on user-shared conversations collected from ShareGPT. - **Developed by:** [LMSYS](https://lmsys.org/) - **Model type:** An auto-regressive language model based on the transformer architecture - **License:** Llama 2 Community License Agreement - **Finetuned from model:** [Llama 2](https://arxiv.org/abs/2307.09288) ### Model Sources - **Repository:** https://github.com/lm-sys/FastChat - **Blog:** https://lmsys.org/blog/2023-03-30-vicuna/ - **Paper:** https://arxiv.org/abs/2306.05685 - **Demo:** https://chat.lmsys.org/ ## Uses The primary use of Vicuna is research on large language models and chatbots. The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence. ## How to Get Started with the Model - Command line interface: https://github.com/lm-sys/FastChat#vicuna-weights - APIs (OpenAI API, Huggingface API): https://github.com/lm-sys/FastChat/tree/main#api ## Training Details Vicuna v1.5 is fine-tuned from Llama 2 with supervised instruction fine-tuning. The training data is around 125K conversations collected from ShareGPT.com. See more details in the "Training Details of Vicuna Models" section in the appendix of this [paper](https://arxiv.org/pdf/2306.05685.pdf). ## Evaluation ![Evaluation Results](https://github.com/lm-sys/lm-sys.github.io/blob/main/public/images/webdata/vicuna_v1.5_eval.png?raw=true) Vicuna is evaluated with standard benchmarks, human preference, and LLM-as-a-judge. See more details in this [paper](https://arxiv.org/pdf/2306.05685.pdf) and [leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard). ## Difference between different versions of Vicuna See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md)
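An unofficial addendum to the How to Get Started section: a minimal transformers sketch. The system prompt follows the Vicuna v1.5 conversation template used by FastChat; treat the exact wording as an assumption and prefer the FastChat tooling linked above for production use.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lmsys/vicuna-7b-v1.5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Vicuna expects its own conversation template (see FastChat for the canonical form)
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: What is the capital of France? ASSISTANT:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```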
philschmid/bart-large-cnn-samsum
philschmid
"2022-12-23T19:48:57Z"
388,954
250
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "sagemaker", "summarization", "en", "dataset:samsum", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
"2022-03-02T23:29:05Z"
--- language: en license: mit tags: - sagemaker - bart - summarization datasets: - samsum widget: - text: "Jeff: Can I train a \U0001F917 Transformers model on Amazon SageMaker? \n\ Philipp: Sure you can use the new Hugging Face Deep Learning Container. \nJeff:\ \ ok.\nJeff: and how can I get started? \nJeff: where can I find documentation?\ \ \nPhilipp: ok, ok you can find everything here. https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face\n" model-index: - name: bart-large-cnn-samsum results: - task: type: summarization name: Summarization dataset: name: 'SAMSum Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization' type: samsum metrics: - type: rouge-1 value: 42.621 name: Validation ROUGE-1 - type: rouge-2 value: 21.9825 name: Validation ROUGE-2 - type: rouge-l value: 33.034 name: Validation ROUGE-L - type: rouge-1 value: 41.3174 name: Test ROUGE-1 - type: rouge-2 value: 20.8716 name: Test ROUGE-2 - type: rouge-l value: 32.1337 name: Test ROUGE-L - task: type: summarization name: Summarization dataset: name: samsum type: samsum config: samsum split: test metrics: - type: rouge value: 41.3282 name: ROUGE-1 verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZTYzNzZkZDUzOWQzNGYxYTJhNGE4YWYyZjA0NzMyOWUzMDNhMmVhYzY1YTM0ZTJhYjliNGE4MDZhMjhhYjRkYSIsInZlcnNpb24iOjF9.OOM6l3v5rJCndmUIJV-2SDh2NjbPo5IgQOSL-Ju1Gwbi1voL5amsDEDOelaqlUBE3n55KkUsMLZhyn66yWxZBQ - type: rouge value: 20.8755 name: ROUGE-2 verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMWZiODFiYWQzY2NmOTc5YjA3NTI0YzQ1MzQ0ODk2NjgyMmVlMjA5MjZiNTJkMGRmZGEzN2M3MDNkMjkxMDVhYSIsInZlcnNpb24iOjF9.b8cPk2-IL24La3Vd0hhtii4tRXujh5urAwy6IVeTWHwYfXaURyC2CcQOWtlOx5bdO5KACeaJFrFBCGgjk-VGCQ - type: rouge value: 32.1353 name: ROUGE-L verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYWNmYzdiYWQ2ZWRkYzRiMGMxNWUwODgwZTdkY2NjZTc1NWE5NTFiMzU0OTU1N2JjN2ExYWQ2NGZkNjk5OTc4YSIsInZlcnNpb24iOjF9.Fzv4p-TEVicljiCqsBJHK1GsnE_AwGqamVmxTPI0WBNSIhZEhliRGmIL_z1pDq6WOzv3GN2YUGvhowU7GxnyAQ - type: rouge value: 38.401 name: ROUGE-LSUM verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNGI4MWY0NWMxMmQ0ODQ5MDhiNDczMDAzYzJkODBiMzgzYWNkMWM2YTZkZDJmNWJiOGQ3MmNjMGViN2UzYWI2ZSIsInZlcnNpb24iOjF9.7lw3h5k5lJ7tYFLZGUtLyDabFYd00l6ByhmvkW4fykocBy9Blyin4tdw4Xps4DW-pmrdMLgidHxBWz5MrSx1Bw - type: loss value: 1.4297215938568115 name: loss verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzI0ZWNhNDM5YTViZDMyZGJjMDA1ZWFjYzNhOTdlOTFiNzhhMDBjNmM2MjA3ZmRkZjJjMjEyMGY3MzcwOTI2NyIsInZlcnNpb24iOjF9.oNaZsAtUDqGAqoZWJavlcW7PKx1AWsnkbhaQxadpOKk_u7ywJJabvTtzyx_DwEgZslgDETCf4MM-JKitZKjiDA - type: gen_len value: 60.0757 name: gen_len verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYTgwYWYwMDRkNTJkMDM5N2I2MWNmYzQ3OWM1NDJmODUyZGViMGE4ZTdkNmIwYWM2N2VjZDNmN2RiMDE4YTYyYiIsInZlcnNpb24iOjF9.PbXTcNYX_SW-BuRQEcqyc21M7uKrOMbffQSAK6k2GLzTVRrzZxsDC57ktKL68zRY8fSiRGsnknOwv-nAR6YBCQ --- ## `bart-large-cnn-samsum` > If you want to use the model, you should try the newer fine-tuned FLAN-T5 version [philschmid/flan-t5-base-samsum](https://huggingface.co/philschmid/flan-t5-base-samsum), which outscores this BART version by about `+6` ROUGE-1, achieving `47.24`. # TRY [philschmid/flan-t5-base-samsum](https://huggingface.co/philschmid/flan-t5-base-samsum) This model was trained using Amazon SageMaker and the new Hugging Face Deep Learning container. 
For more information look at: - [🤗 Transformers Documentation: Amazon SageMaker](https://huggingface.co/transformers/sagemaker.html) - [Example Notebooks](https://github.com/huggingface/notebooks/tree/master/sagemaker) - [Amazon SageMaker documentation for Hugging Face](https://docs.aws.amazon.com/sagemaker/latest/dg/hugging-face.html) - [Python SDK SageMaker documentation for Hugging Face](https://sagemaker.readthedocs.io/en/stable/frameworks/huggingface/index.html) - [Deep Learning Container](https://github.com/aws/deep-learning-containers/blob/master/available_images.md#huggingface-training-containers) ## Hyperparameters ```json { "dataset_name": "samsum", "do_eval": true, "do_predict": true, "do_train": true, "fp16": true, "learning_rate": 5e-05, "model_name_or_path": "facebook/bart-large-cnn", "num_train_epochs": 3, "output_dir": "/opt/ml/model", "per_device_eval_batch_size": 4, "per_device_train_batch_size": 4, "predict_with_generate": true, "seed": 7 } ``` ## Usage ```python from transformers import pipeline summarizer = pipeline("summarization", model="philschmid/bart-large-cnn-samsum") conversation = '''Jeff: Can I train a 🤗 Transformers model on Amazon SageMaker? Philipp: Sure you can use the new Hugging Face Deep Learning Container. Jeff: ok. Jeff: and how can I get started? Jeff: where can I find documentation? Philipp: ok, ok you can find everything here. https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face ''' summarizer(conversation) ``` ## Results | key | value | | --- | ----- | | eval_rouge1 | 42.621 | | eval_rouge2 | 21.9825 | | eval_rougeL | 33.034 | | eval_rougeLsum | 39.6783 | | test_rouge1 | 41.3174 | | test_rouge2 | 20.8716 | | test_rougeL | 32.1337 | | test_rougeLsum | 38.4149 |
mistralai/Mistral-7B-v0.1
mistralai
"2024-07-24T14:04:08Z"
388,796
3,439
transformers
[ "transformers", "pytorch", "safetensors", "mistral", "text-generation", "pretrained", "en", "arxiv:2310.06825", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2023-09-20T13:03:50Z"
--- language: - en license: apache-2.0 tags: - pretrained pipeline_tag: text-generation inference: parameters: temperature: 0.7 extra_gated_description: If you want to learn more about how we process your personal data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>. --- # Model Card for Mistral-7B-v0.1 The Mistral-7B-v0.1 Large Language Model (LLM) is a pretrained generative text model with 7 billion parameters. Mistral-7B-v0.1 outperforms Llama 2 13B on all benchmarks we tested. For full details of this model please read our [paper](https://arxiv.org/abs/2310.06825) and [release blog post](https://mistral.ai/news/announcing-mistral-7b/). ## Model Architecture Mistral-7B-v0.1 is a transformer model, with the following architecture choices: - Grouped-Query Attention - Sliding-Window Attention - Byte-fallback BPE tokenizer ## Troubleshooting - If you see the following error: ``` KeyError: 'mistral' ``` - Or: ``` NotImplementedError: Cannot copy out of meta tensor; no data! ``` Ensure you are utilizing a stable version of Transformers, 4.34.0 or newer. ## Notice Mistral 7B is a pretrained base model and therefore does not have any moderation mechanisms. ## The Mistral AI Team Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
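Addendum (not part of the original card): a minimal generation sketch with transformers. It assumes `transformers>=4.34.0` (see Troubleshooting above) and uses the card's suggested sampling temperature of 0.7; the prompt is illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Base model: plain text continuation, no chat template or moderation
inputs = tokenizer("My favourite condiment is", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=50, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```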
vikp/surya_layout3
vikp
"2024-07-12T15:32:03Z"
384,125
1
transformers
[ "transformers", "safetensors", "efficientvit", "license:cc-by-nc-sa-4.0", "endpoints_compatible", "region:us" ]
null
"2024-07-09T18:30:07Z"
--- library_name: transformers license: cc-by-nc-sa-4.0 --- Layout model for [surya](https://www.github.com/VikParuchuri/surya).
timm/efficientnet_b0.ra_in1k
timm
"2023-04-27T21:09:50Z"
383,198
3
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2110.00476", "arxiv:1905.11946", "license:apache-2.0", "region:us" ]
image-classification
"2022-12-12T23:52:52Z"
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-1k --- # Model card for efficientnet_b0.ra_in1k An EfficientNet image classification model. Trained on ImageNet-1k in `timm` using the recipe template described below. Recipe details: * RandAugment `RA` recipe. Inspired by and evolved from EfficientNet RandAugment recipes. Published as `B` recipe in [ResNet Strikes Back](https://arxiv.org/abs/2110.00476). * RMSProp (TF 1.0 behaviour) optimizer, EMA weight averaging * Step (exponential decay w/ staircase) LR schedule with warmup ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 5.3 - GMACs: 0.4 - Activations (M): 6.7 - Image size: 224 x 224 - **Papers:** - EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks: https://arxiv.org/abs/1905.11946 - ResNet strikes back: An improved training procedure in timm: https://arxiv.org/abs/2110.00476 - **Dataset:** ImageNet-1k - **Original:** https://github.com/huggingface/pytorch-image-models ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm import torch img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('efficientnet_b0.ra_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'efficientnet_b0.ra_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 16, 112, 112]) # torch.Size([1, 24, 56, 56]) # torch.Size([1, 40, 28, 28]) # torch.Size([1, 112, 14, 14]) # torch.Size([1, 320, 7, 7]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'efficientnet_b0.ra_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 1280, 7, 7) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) 
shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). ## Citation ```bibtex @inproceedings{tan2019efficientnet, title={Efficientnet: Rethinking model scaling for convolutional neural networks}, author={Tan, Mingxing and Le, Quoc}, booktitle={International conference on machine learning}, pages={6105--6114}, year={2019}, organization={PMLR} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ``` ```bibtex @inproceedings{wightman2021resnet, title={ResNet strikes back: An improved training procedure in timm}, author={Wightman, Ross and Touvron, Hugo and Jegou, Herve}, booktitle={NeurIPS 2021 Workshop on ImageNet: Past, Present, and Future} } ```
sentence-transformers/distilbert-base-nli-mean-tokens
sentence-transformers
"2024-11-05T16:42:12Z"
383,066
5
sentence-transformers
[ "sentence-transformers", "pytorch", "tf", "onnx", "safetensors", "openvino", "distilbert", "feature-extraction", "sentence-similarity", "transformers", "arxiv:1908.10084", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
"2022-03-02T23:29:05Z"
--- license: apache-2.0 library_name: sentence-transformers tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers pipeline_tag: feature-extraction --- **⚠️ This model is deprecated. Please don't use it as it produces sentence embeddings of low quality. You can find recommended sentence embedding models here: [SBERT.net - Pretrained Models](https://www.sbert.net/docs/pretrained_models.html)** # sentence-transformers/distilbert-base-nli-mean-tokens This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/distilbert-base-nli-mean-tokens') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/distilbert-base-nli-mean-tokens') model = AutoModel.from_pretrained('sentence-transformers/distilbert-base-nli-mean-tokens') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/distilbert-base-nli-mean-tokens) ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors This model was trained by [sentence-transformers](https://www.sbert.net/). 
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084): ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "http://arxiv.org/abs/1908.10084", } ```
TahaDouaji/detr-doc-table-detection
TahaDouaji
"2024-08-27T12:39:30Z"
382,609
50
transformers
[ "transformers", "pytorch", "safetensors", "detr", "object-detection", "arxiv:2005.12872", "arxiv:1910.09700", "base_model:facebook/detr-resnet-50", "base_model:finetune:facebook/detr-resnet-50", "license:apache-2.0", "endpoints_compatible", "region:us" ]
object-detection
"2022-03-11T15:55:14Z"
--- tags: - object-detection license: apache-2.0 base_model: facebook/detr-resnet-50 --- # Model Card for detr-doc-table-detection # Model Details detr-doc-table-detection is a model trained to detect both **Bordered** and **Borderless** tables in documents, based on [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50). - **Developed by:** Taha Douaji - **Shared by [Optional]:** Taha Douaji - **Model type:** Object Detection - **Language(s) (NLP):** More information needed - **License:** Apache 2.0 - **Parent Model:** [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) - **Resources for more information:** - [Model Demo Space](https://huggingface.co/spaces/trevbeers/pdf-table-extraction) - [Associated Paper](https://arxiv.org/abs/2005.12872) # Uses ## Direct Use This model can be used for the task of object detection. ## Out-of-Scope Use The model should not be used to intentionally create hostile or alienating environments for people. # Bias, Risks, and Limitations Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. ## Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. # Training Details ## Training Data The model was trained on the ICDAR2019 Table Dataset. # Environmental Impact Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). # Citation **BibTeX:** ```bibtex @article{DBLP:journals/corr/abs-2005-12872, author = {Nicolas Carion and Francisco Massa and Gabriel Synnaeve and Nicolas Usunier and Alexander Kirillov and Sergey Zagoruyko}, title = {End-to-End Object Detection with Transformers}, journal = {CoRR}, volume = {abs/2005.12872}, year = {2020}, url = {https://arxiv.org/abs/2005.12872}, archivePrefix = {arXiv}, eprint = {2005.12872}, timestamp = {Thu, 28 May 2020 17:38:09 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2005-12872.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` # Model Card Authors [optional] Taha Douaji in collaboration with Ezi Ozoani and the Hugging Face team # Model Card Contact More information needed # How to Get Started with the Model Use the code below to get started with the model. 
```python from transformers import DetrImageProcessor, DetrForObjectDetection import torch from PIL import Image # load the document image to run table detection on image = Image.open("IMAGE_PATH") processor = DetrImageProcessor.from_pretrained("TahaDouaji/detr-doc-table-detection") model = DetrForObjectDetection.from_pretrained("TahaDouaji/detr-doc-table-detection") inputs = processor(images=image, return_tensors="pt") outputs = model(**inputs) # convert outputs (bounding boxes and class logits) to COCO API # let's only keep detections with score > 0.9 target_sizes = torch.tensor([image.size[::-1]]) results = processor.post_process_object_detection(outputs, target_sizes=target_sizes, threshold=0.9)[0] for score, label, box in zip(results["scores"], results["labels"], results["boxes"]): box = [round(i, 2) for i in box.tolist()] print( f"Detected {model.config.id2label[label.item()]} with confidence " f"{round(score.item(), 3)} at location {box}" ) ```
timm/tf_mobilenetv3_small_minimal_100.in1k
timm
"2023-04-27T22:49:57Z"
381,087
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:1905.02244", "license:apache-2.0", "region:us" ]
image-classification
"2022-12-16T05:39:34Z"
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-1k --- # Model card for tf_mobilenetv3_small_minimal_100.in1k A MobileNet-v3 image classification model. Trained on ImageNet-1k in TensorFlow by paper authors, ported to PyTorch by Ross Wightman. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 2.0 - GMACs: 0.1 - Activations (M): 1.4 - Image size: 224 x 224 - **Papers:** - Searching for MobileNetV3: https://arxiv.org/abs/1905.02244 - **Dataset:** ImageNet-1k - **Original:** https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm import torch # needed for torch.topk below img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('tf_mobilenetv3_small_minimal_100.in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'tf_mobilenetv3_small_minimal_100.in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 16, 112, 112]) # torch.Size([1, 16, 56, 56]) # torch.Size([1, 24, 28, 28]) # torch.Size([1, 48, 14, 14]) # torch.Size([1, 576, 7, 7]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'tf_mobilenetv3_small_minimal_100.in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 576, 7, 7) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). 
## Citation ```bibtex @inproceedings{howard2019searching, title={Searching for mobilenetv3}, author={Howard, Andrew and Sandler, Mark and Chu, Grace and Chen, Liang-Chieh and Chen, Bo and Tan, Mingxing and Wang, Weijun and Zhu, Yukun and Pang, Ruoming and Vasudevan, Vijay and others}, booktitle={Proceedings of the IEEE/CVF international conference on computer vision}, pages={1314--1324}, year={2019} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
neulab/codebert-python
neulab
"2023-02-27T20:56:57Z"
379,461
24
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "arxiv:2302.05527", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2022-09-23T15:01:36Z"
This is a `microsoft/codebert-base-mlm` model, trained for 1,000,000 steps (with `batch_size=32`) on **Python** code from the `codeparrot/github-code-clean` dataset, on the masked-language-modeling task. It is intended to be used in CodeBERTScore: [https://github.com/neulab/code-bert-score](https://github.com/neulab/code-bert-score), but can also be used for any other task. For more information, see: [https://github.com/neulab/code-bert-score](https://github.com/neulab/code-bert-score) ## Citation If you use this model for research, please cite: ``` @article{zhou2023codebertscore, url = {https://arxiv.org/abs/2302.05527}, author = {Zhou, Shuyan and Alon, Uri and Agarwal, Sumit and Neubig, Graham}, title = {CodeBERTScore: Evaluating Code Generation with Pretrained Models of Code}, publisher = {arXiv}, year = {2023}, } ```
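## Usage sketch The card above ships no code snippet. The following is a minimal fill-mask sketch of ours, not from the authors; the example input and printed fields are illustrative only:

```python
from transformers import pipeline

# The model is a RoBERTa-style masked language model, so the mask token is <mask>.
fill_mask = pipeline("fill-mask", model="neulab/codebert-python")

code = "def add(a, b):\n    return a <mask> b"
for prediction in fill_mask(code):
    print(prediction["token_str"], round(prediction["score"], 3))
```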
smallcloudai/Refact-1_6B-fim
smallcloudai
"2023-11-09T07:09:31Z"
378,868
129
transformers
[ "transformers", "pytorch", "safetensors", "gpt_refact", "text-generation", "code", "custom_code", "en", "dataset:bigcode/the-stack-dedup", "dataset:rombodawg/2XUNCENSORED_MegaCodeTraining188k", "dataset:bigcode/commitpackft", "arxiv:2108.12409", "arxiv:1607.06450", "arxiv:1910.07467", "arxiv:1911.02150", "license:bigscience-openrail-m", "model-index", "autotrain_compatible", "region:us" ]
text-generation
"2023-08-29T15:48:36Z"
--- pipeline_tag: text-generation inference: true widget: - text: 'def print_hello_world():' example_title: Hello world group: Python license: bigscience-openrail-m pretrain-datasets: - books - arxiv - c4 - falcon-refinedweb - wiki - github-issues - stack_markdown - self-made dataset of permissive github code datasets: - bigcode/the-stack-dedup - rombodawg/2XUNCENSORED_MegaCodeTraining188k - bigcode/commitpackft metrics: - code_eval library_name: transformers tags: - code model-index: - name: Refact-1.6B results: - task: type: text-generation dataset: type: openai_humaneval name: HumanEval metrics: - name: pass@1 (T=0.01) type: pass@1 value: 32.0 verified: false - name: pass@1 (T=0.2) type: pass@1 value: 31.5 verified: false - name: pass@10 (T=0.8) type: pass@10 value: 53.0 verified: false - name: pass@100 (T=0.8) type: pass@100 value: 76.9 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalSynthesize Python metrics: - name: pass@1 (T=0.2) type: pass@1 value: 35.8 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalSynthesize JavaScript metrics: - name: pass@1 (T=0.2) type: pass@1 value: 31.6 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalSynthesize Java metrics: - name: pass@1 (T=0.2) type: pass@1 value: 29.1 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalSynthesize Go metrics: - name: pass@1 (T=0.2) type: pass@1 value: -1 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalSynthesize C++ metrics: - name: pass@1 (T=0.2) type: pass@1 value: 26.3 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalSynthesize Rust metrics: - name: pass@1 (T=0.2) type: pass@1 value: -1 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalSynthesize Average metrics: - name: pass@1 (T=0.2) type: pass@1 value: -1 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalFixTests Python metrics: - name: pass@1 (T=0.2) type: pass@1 value: 18.38 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalFixTests JavaScript metrics: - name: pass@1 (T=0.2) type: pass@1 value: 12.28 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalFixTests Java metrics: - name: pass@1 (T=0.2) type: pass@1 value: 15.12 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalFixTests Go metrics: - name: pass@1 (T=0.2) type: pass@1 value: -1 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalFixTests C++ metrics: - name: pass@1 (T=0.2) type: pass@1 value: 13.17 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalFixTests Rust metrics: - name: pass@1 (T=0.2) type: pass@1 value: 2.8 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalFixTests Average metrics: - name: pass@1 (T=0.2) type: pass@1 value: -1 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalFixDocs Python metrics: - name: pass@1 (T=0.2) type: pass@1 value: 26.92 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalFixDocs JavaScript metrics: - name: pass@1 
(T=0.2) type: pass@1 value: 26.85 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalFixDocs Java metrics: - name: pass@1 (T=0.2) type: pass@1 value: 30.76 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalFixDocs Go metrics: - name: pass@1 (T=0.2) type: pass@1 value: -1 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalFixDocs C++ metrics: - name: pass@1 (T=0.2) type: pass@1 value: 25.94 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalFixDocs Rust metrics: - name: pass@1 (T=0.2) type: pass@1 value: 8.44 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalFixDocs Average metrics: - name: pass@1 (T=0.2) type: pass@1 value: -1 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalExplain Python metrics: - name: pass@1 (T=0.2) type: pass@1 value: 26.46 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalExplain JavaScript metrics: - name: pass@1 (T=0.2) type: pass@1 value: 17.86 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalExplain Java metrics: - name: pass@1 (T=0.2) type: pass@1 value: 20.94 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalExplain Go metrics: - name: pass@1 (T=0.2) type: pass@1 value: -1 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalExplain C++ metrics: - name: pass@1 (T=0.2) type: pass@1 value: 18.78 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalExplain Rust metrics: - name: pass@1 (T=0.2) type: pass@1 value: -1 verified: false - task: type: text-generation dataset: type: bigcode/humanevalpack name: HumanEvalExplain Average metrics: - name: pass@1 (T=0.2) type: pass@1 value: -1 verified: false - task: type: text-generation dataset: type: mbpp name: MBPP metrics: - name: pass@1 (T=0.01) type: pass@1 value: 31.15 verified: false - task: type: text-generation dataset: type: ds1000 name: DS-1000 (Overall Completion) metrics: - name: pass@1 (T=0.2) type: pass@1 value: 10.1 verified: false - task: type: text-generation dataset: type: nuprl/MultiPL-E name: MultiPL-HumanEval (C++) metrics: - name: pass@1 (T=0.2) type: pass@1 value: 21.61 verified: false - task: type: text-generation dataset: type: nuprl/MultiPL-E name: MultiPL-HumanEval (C#) metrics: - name: pass@1 (T=0.2) type: pass@1 value: 13.91 verified: false - task: type: text-generation dataset: type: nuprl/MultiPL-E name: MultiPL-HumanEval (D) metrics: - name: pass@1 (T=0.2) type: pass@1 value: 9.5 verified: false - task: type: text-generation dataset: type: nuprl/MultiPL-E name: MultiPL-HumanEval (Go) metrics: - name: pass@1 (T=0.2) type: pass@1 value: 53.57 verified: false - task: type: text-generation dataset: type: nuprl/MultiPL-E name: MultiPL-HumanEval (Java) metrics: - name: pass@1 (T=0.2) type: pass@1 value: 21.58 verified: false - task: type: text-generation dataset: type: nuprl/MultiPL-E name: MultiPL-HumanEval (Julia) metrics: - name: pass@1 (T=0.2) type: pass@1 value: 13.75 verified: false - task: type: text-generation dataset: type: nuprl/MultiPL-E name: MultiPL-HumanEval (JavaScript) metrics: - name: pass@1 (T=0.2) type: pass@1 value: 26.88 verified: false - task: type: text-generation 
dataset: type: nuprl/MultiPL-E name: MultiPL-HumanEval (Lua) metrics: - name: pass@1 (T=0.2) type: pass@1 value: 15.26 verified: false - task: type: text-generation dataset: type: nuprl/MultiPL-E name: MultiPL-HumanEval (PHP) metrics: - name: pass@1 (T=0.2) type: pass@1 value: 23.04 verified: false - task: type: text-generation dataset: type: nuprl/MultiPL-E name: MultiPL-HumanEval (Perl) metrics: - name: pass@1 (T=0.2) type: pass@1 value: 12.1 verified: false - task: type: text-generation dataset: type: nuprl/MultiPL-E name: MultiPL-HumanEval (Python) metrics: - name: pass@1 (T=0.2) type: pass@1 value: 29.6 verified: false - task: type: text-generation dataset: type: nuprl/MultiPL-E name: MultiPL-HumanEval (R) metrics: - name: pass@1 (T=0.2) type: pass@1 value: 13.77 verified: false - task: type: text-generation dataset: type: nuprl/MultiPL-E name: MultiPL-HumanEval (Ruby) metrics: - name: pass@1 (T=0.2) type: pass@1 value: 12.68 verified: false - task: type: text-generation dataset: type: nuprl/MultiPL-E name: MultiPL-HumanEval (Racket) metrics: - name: pass@1 (T=0.2) type: pass@1 value: 4.29 verified: false - task: type: text-generation dataset: type: nuprl/MultiPL-E name: MultiPL-HumanEval (Rust) metrics: - name: pass@1 (T=0.2) type: pass@1 value: 19.54 verified: false - task: type: text-generation dataset: type: nuprl/MultiPL-E name: MultiPL-HumanEval (Scala) metrics: - name: pass@1 (T=0.2) type: pass@1 value: 18.33 verified: false - task: type: text-generation dataset: type: nuprl/MultiPL-E name: MultiPL-HumanEval (Bash) metrics: - name: pass@1 (T=0.2) type: pass@1 value: 5.7 verified: false - task: type: text-generation dataset: type: nuprl/MultiPL-E name: MultiPL-HumanEval (Swift) metrics: - name: pass@1 (T=0.2) type: pass@1 value: 17.68 verified: false - task: type: text-generation dataset: type: nuprl/MultiPL-E name: MultiPL-HumanEval (TypeScript) metrics: - name: pass@1 (T=0.2) type: pass@1 value: 25 verified: false language: - en --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/643a9dd0c5f633a7fa7e804a/HkB0QYV0BbmB3ktMugbZy.png) # Refact-1.6B Finally, the model we started training with our [blog post](https://refact.ai/blog/2023/applying-recent-innovations-to-train-model/) is ready 🎉 After fine-tuning on generated data, it beats Replit 3b, Stability Code 3b and many other models. It almost beats StarCoder ten times the size! Model | Size | HumanEval pass@1 | HumanEval pass@10 | ----------------------|---------------|--------------------|--------------------| DeciCoder-1b | 1b | 19.1% | | <b>Refact-1.6-fim</b> | <b>1.6b</b> | <b>32.0%</b> | <b>53.0%</b> | StableCode | 3b | 20.2% | 33.8% | ReplitCode v1 | 3b | 21.9% | | CodeGen2.5-multi | 7b | 28.4% | 47.5% | CodeLlama | 7b | 33.5% | 59.6% | StarCoder | 15b | 33.6% | | Likely, it's the best model for practical use in your IDE for code completion because it's smart and fast! You can start using it right now by downloading the [Refact plugin](https://refact.ai/). You can host the model yourself, too, using the [open source docker container](https://github.com/smallcloudai/refact). And it's multi-language (see MultiPL-HumanEval and other metrics below) and it works as a chat (see the section below). # It Works As a Chat The primary application of this model is code completion (infill) in multiple programming languages. But it works as a chat quite well. 
HumanEval results using the instruction-following (chat) format, against models specialized for chat only: Model | Size | pass@1 | pass@10 | -----------------------|--------|----------|----------| <b>Refact-1.6-fim</b> | 1.6b | 38.4% | 55.6% | StableCode-instruct | 3b | 26.9% | 36.2% | OctoGeeX | 6b | 44.7% | | CodeLlama-instruct | 7b | 34.8% | 64.3% | CodeGen2.5-instruct | 7b | 36.2% | 60.87% | CodeLlama-instruct | 13b | 42.7% | 71.6% | StarChat-β | 15b | 33.5% | | OctoCoder | 15b | 46.2% | | # Example Fill-in-the-middle uses special tokens to identify the prefix/middle/suffix parts of the input and output: ```python # pip install -q transformers from transformers import AutoModelForCausalLM, AutoTokenizer checkpoint = "smallcloudai/Refact-1_6B-fim" device = "cuda" # for GPU usage or "cpu" for CPU usage tokenizer = AutoTokenizer.from_pretrained(checkpoint) model = AutoModelForCausalLM.from_pretrained(checkpoint, trust_remote_code=True).to(device) prompt = '<fim_prefix>def print_hello_world():\n """<fim_suffix>\n print("Hello world!")<fim_middle>' inputs = tokenizer.encode(prompt, return_tensors="pt").to(device) outputs = model.generate(inputs, max_length=100, temperature=0.2) print("-"*80) print(tokenizer.decode(outputs[0])) ``` # Chat Format The same model works as chat (experimental). ```python prompt_template = "<empty_output>SYSTEM {system}\n" \ "<empty_output>USER {query}\n" \ "<empty_output>ASSISTANT" prompt = prompt_template.format(system="You are a programming assistant", query="How do I sort a list in Python?") ``` # Architecture As described in more detail in the blog post, we used: - [ALiBi](https://arxiv.org/abs/2108.12409) based attention - [LayerNorm](https://arxiv.org/abs/1607.06450v1) instead of [RMSNorm](https://arxiv.org/pdf/1910.07467.pdf) - [Multi Query Attention](https://arxiv.org/abs/1911.02150) We also used LiON, flash attention, and early dropout. None of it is so exotic that you can't run the model yourself -- in fact you can; see the example above. # Pretraining For the base model, we used our own dataset that contains code with permissive licenses only, and open text datasets. Filtering is the key to the success of this model: - We only used text in English - Only topics related to computer science - Applied heavy deduplication The text-to-code proportion was 50:50, and the model was trained for 1.2T tokens. We don't release the base model, because its Fill-in-the-Middle (FIM) capability likes to repeat itself too much, so its practical use is limited. But if you still want it, write us a message on Discord. # Finetuning We tested our hypothesis that chat data should boost base model performance in FIM and regular left-to-right code completion. We found that just 15% of open [code](https://huggingface.co/datasets/bigcode/commitpackft) [instruction-following](https://huggingface.co/datasets/rombodawg/2XUNCENSORED_MegaCodeTraining188k) datasets, which we filtered for quality, improves almost all metrics. Additionally, to improve FIM, we observed common failure modes and prepared a synthetic dataset based on [The Stack dedup v1.1](https://huggingface.co/datasets/bigcode/the-stack-dedup) to address them. There is a distribution shift between typical code on the internet and the code you write in your IDE. The former is likely finished, so the model tries to come up with a suggestion that makes the code complete. You are likely to have half-written code as you work on it, and there is no single addition that can repair it fully. 
In practice, the model needs a tendency to stop after a couple of lines are added, and sometimes to write nothing at all. We found that giving it empty completions, single-line completions, and multiline completions that end with a smaller text indent or at least a newline makes it much more usable. This data made up the remaining 85% of the finetune dataset. The final model is the result of several attempts to make it work as well as possible for code completion, and to perform well on a wide range of metrics. The best attempt took 40B tokens. # Limitations and Bias The Refact-1.6B model was trained on text in English. But it has seen a lot more languages in code comments. Its performance on non-English languages is lower, for sure. # Model Stats - **Architecture:** LLaMA-like model with multi-query attention - **Objectives:** Fill-in-the-Middle, Chat - **Tokens context:** 4096 - **Pretraining tokens:** 1.2T - **Finetuning tokens:** 40B - **Precision:** bfloat16 - **GPUs:** 64 NVIDIA A5000 - **Training time:** 28 days # License The model is licensed under the BigScience OpenRAIL-M v1 license agreement. # Citation If you are using this model, please give a link to this page.
Supabase/gte-small
Supabase
"2024-03-18T18:02:53Z"
377,366
66
transformers.js
[ "transformers.js", "pytorch", "onnx", "bert", "feature-extraction", "en", "license:mit", "region:us" ]
feature-extraction
"2023-08-01T17:50:33Z"
--- pipeline_tag: feature-extraction library_name: "transformers.js" language: - en license: mit --- _Fork of https://huggingface.co/thenlper/gte-small with ONNX weights to be compatible with Transformers.js. See [JavaScript usage](#javascript)._ --- # gte-small General Text Embeddings (GTE) model. The GTE models are trained by Alibaba DAMO Academy. They are mainly based on the BERT framework and currently offer three different sizes of models, including [GTE-large](https://huggingface.co/thenlper/gte-large), [GTE-base](https://huggingface.co/thenlper/gte-base), and [GTE-small](https://huggingface.co/thenlper/gte-small). The GTE models are trained on a large-scale corpus of relevance text pairs, covering a wide range of domains and scenarios. This enables the GTE models to be applied to various downstream tasks of text embeddings, including **information retrieval**, **semantic textual similarity**, **text reranking**, etc. ## Metrics The performance of the GTE models was compared with that of other popular text embedding models on the MTEB benchmark. For more detailed comparison results, please refer to the [MTEB leaderboard](https://huggingface.co/spaces/mteb/leaderboard). | Model Name | Model Size (GB) | Dimension | Sequence Length | Average (56) | Clustering (11) | Pair Classification (3) | Reranking (4) | Retrieval (15) | STS (10) | Summarization (1) | Classification (12) | |:----:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:| | [**gte-large**](https://huggingface.co/thenlper/gte-large) | 0.67 | 1024 | 512 | **63.13** | 46.84 | 85.00 | 59.13 | 52.22 | 83.35 | 31.66 | 73.33 | | [**gte-base**](https://huggingface.co/thenlper/gte-base) | 0.22 | 768 | 512 | **62.39** | 46.2 | 84.57 | 58.61 | 51.14 | 82.3 | 31.17 | 73.01 | | [e5-large-v2](https://huggingface.co/intfloat/e5-large-v2) | 1.34 | 1024| 512 | 62.25 | 44.49 | 86.03 | 56.61 | 50.56 | 82.05 | 30.19 | 75.24 | | [e5-base-v2](https://huggingface.co/intfloat/e5-base-v2) | 0.44 | 768 | 512 | 61.5 | 43.80 | 85.73 | 55.91 | 50.29 | 81.05 | 30.28 | 73.84 | | [**gte-small**](https://huggingface.co/thenlper/gte-small) | 0.07 | 384 | 512 | **61.36** | 44.89 | 83.54 | 57.7 | 49.46 | 82.07 | 30.42 | 72.31 | | [text-embedding-ada-002](https://platform.openai.com/docs/guides/embeddings) | - | 1536 | 8192 | 60.99 | 45.9 | 84.89 | 56.32 | 49.25 | 80.97 | 30.8 | 70.93 | | [e5-small-v2](https://huggingface.co/intfloat/e5-base-v2) | 0.13 | 384 | 512 | 59.93 | 39.92 | 84.67 | 54.32 | 49.04 | 80.39 | 31.16 | 72.94 | | [sentence-t5-xxl](https://huggingface.co/sentence-transformers/sentence-t5-xxl) | 9.73 | 768 | 512 | 59.51 | 43.72 | 85.06 | 56.42 | 42.24 | 82.63 | 30.08 | 73.42 | | [all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) | 0.44 | 768 | 514 | 57.78 | 43.69 | 83.04 | 59.36 | 43.81 | 80.28 | 27.49 | 65.07 | | [sgpt-bloom-7b1-msmarco](https://huggingface.co/bigscience/sgpt-bloom-7b1-msmarco) | 28.27 | 4096 | 2048 | 57.59 | 38.93 | 81.9 | 55.65 | 48.22 | 77.74 | 33.6 | 66.19 | | [all-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L12-v2) | 0.13 | 384 | 512 | 56.53 | 41.81 | 82.41 | 58.44 | 42.69 | 79.8 | 27.9 | 63.21 | | [all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) | 0.09 | 384 | 512 | 56.26 | 42.35 | 82.37 | 58.04 | 41.95 | 78.9 | 30.81 | 63.05 | | [contriever-base-msmarco](https://huggingface.co/nthakur/contriever-base-msmarco) | 0.44 | 768 | 512 | 56.00 | 41.1 | 82.54 | 53.14 | 41.88 | 76.51 | 30.36 | 66.68 | | 
[sentence-t5-base](https://huggingface.co/sentence-transformers/sentence-t5-base) | 0.22 | 768 | 512 | 55.27 | 40.21 | 85.18 | 53.09 | 33.63 | 81.14 | 31.39 | 69.81 | ## Usage This model can be used with both [Python](#python) and [JavaScript](#javascript). ### Python Use with [Transformers](https://huggingface.co/docs/transformers/index) and [PyTorch](https://pytorch.org/docs/stable/index.html): ```python import torch.nn.functional as F from torch import Tensor from transformers import AutoTokenizer, AutoModel def average_pool(last_hidden_states: Tensor, attention_mask: Tensor) -> Tensor: last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0) return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None] input_texts = [ "what is the capital of China?", "how to implement quick sort in python?", "Beijing", "sorting algorithms" ] tokenizer = AutoTokenizer.from_pretrained("Supabase/gte-small") model = AutoModel.from_pretrained("Supabase/gte-small") # Tokenize the input texts batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt') outputs = model(**batch_dict) embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask']) # (Optionally) normalize embeddings embeddings = F.normalize(embeddings, p=2, dim=1) scores = (embeddings[:1] @ embeddings[1:].T) * 100 print(scores.tolist()) ``` Use with [sentence-transformers](https://www.sbert.net/): ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim sentences = ['That is a happy person', 'That is a very happy person'] model = SentenceTransformer('Supabase/gte-small') embeddings = model.encode(sentences) print(cos_sim(embeddings[0], embeddings[1])) ``` ### JavaScript This model can be used with JavaScript via [Transformers.js](https://huggingface.co/docs/transformers.js/index). 
Use with [Deno](https://deno.land/manual/introduction) or [Supabase Edge Functions](https://supabase.com/docs/guides/functions): ```ts import { serve } from 'https://deno.land/std@0.168.0/http/server.ts' import { env, pipeline } from 'https://cdn.jsdelivr.net/npm/@xenova/transformers@2.5.0' // Configuration for Deno runtime env.useBrowserCache = false; env.allowLocalModels = false; const pipe = await pipeline( 'feature-extraction', 'Supabase/gte-small', ); serve(async (req) => { // Extract input string from JSON body const { input } = await req.json(); // Generate the embedding from the user input const output = await pipe(input, { pooling: 'mean', normalize: true, }); // Extract the embedding output const embedding = Array.from(output.data); // Return the embedding return new Response( JSON.stringify({ embedding }), { headers: { 'Content-Type': 'application/json' } } ); }); ``` Use within the browser ([JavaScript Modules](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Modules)): ```html <script type="module"> import { pipeline } from 'https://cdn.jsdelivr.net/npm/@xenova/transformers@2.5.0'; const pipe = await pipeline( 'feature-extraction', 'Supabase/gte-small', ); // Generate the embedding from text const output = await pipe('Hello world', { pooling: 'mean', normalize: true, }); // Extract the embedding output const embedding = Array.from(output.data); console.log(embedding); </script> ``` Use within [Node.js](https://nodejs.org/en/docs) or a web bundler ([Webpack](https://webpack.js.org/concepts/), etc): ```js import { pipeline } from '@xenova/transformers'; const pipe = await pipeline( 'feature-extraction', 'Supabase/gte-small', ); // Generate the embedding from text const output = await pipe('Hello world', { pooling: 'mean', normalize: true, }); // Extract the embedding output const embedding = Array.from(output.data); console.log(embedding); ``` ### Limitation This model exclusively caters to English texts, and any lengthy texts will be truncated to a maximum of 512 tokens.
facebook/nllb-200-distilled-600M
facebook
"2024-02-14T17:18:36Z"
377,311
514
transformers
[ "transformers", "pytorch", "m2m_100", "text2text-generation", "nllb", "translation", "ace", "acm", "acq", "aeb", "af", "ajp", "ak", "als", "am", "apc", "ar", "ars", "ary", "arz", "as", "ast", "awa", "ayr", "azb", "azj", "ba", "bm", "ban", "be", "bem", "bn", "bho", "bjn", "bo", "bs", "bug", "bg", "ca", "ceb", "cs", "cjk", "ckb", "crh", "cy", "da", "de", "dik", "dyu", "dz", "el", "en", "eo", "et", "eu", "ee", "fo", "fj", "fi", "fon", "fr", "fur", "fuv", "gaz", "gd", "ga", "gl", "gn", "gu", "ht", "ha", "he", "hi", "hne", "hr", "hu", "hy", "ig", "ilo", "id", "is", "it", "jv", "ja", "kab", "kac", "kam", "kn", "ks", "ka", "kk", "kbp", "kea", "khk", "km", "ki", "rw", "ky", "kmb", "kmr", "knc", "kg", "ko", "lo", "lij", "li", "ln", "lt", "lmo", "ltg", "lb", "lua", "lg", "luo", "lus", "lvs", "mag", "mai", "ml", "mar", "min", "mk", "mt", "mni", "mos", "mi", "my", "nl", "nn", "nb", "npi", "nso", "nus", "ny", "oc", "ory", "pag", "pa", "pap", "pbt", "pes", "plt", "pl", "pt", "prs", "quy", "ro", "rn", "ru", "sg", "sa", "sat", "scn", "shn", "si", "sk", "sl", "sm", "sn", "sd", "so", "st", "es", "sc", "sr", "ss", "su", "sv", "swh", "szl", "ta", "taq", "tt", "te", "tg", "tl", "th", "ti", "tpi", "tn", "ts", "tk", "tum", "tr", "tw", "tzm", "ug", "uk", "umb", "ur", "uzn", "vec", "vi", "war", "wo", "xh", "ydd", "yo", "yue", "zh", "zsm", "zu", "dataset:flores-200", "license:cc-by-nc-4.0", "autotrain_compatible", "region:us" ]
translation
"2022-07-08T09:43:57Z"
--- language: - ace - acm - acq - aeb - af - ajp - ak - als - am - apc - ar - ars - ary - arz - as - ast - awa - ayr - azb - azj - ba - bm - ban - be - bem - bn - bho - bjn - bo - bs - bug - bg - ca - ceb - cs - cjk - ckb - crh - cy - da - de - dik - dyu - dz - el - en - eo - et - eu - ee - fo - fj - fi - fon - fr - fur - fuv - gaz - gd - ga - gl - gn - gu - ht - ha - he - hi - hne - hr - hu - hy - ig - ilo - id - is - it - jv - ja - kab - kac - kam - kn - ks - ka - kk - kbp - kea - khk - km - ki - rw - ky - kmb - kmr - knc - kg - ko - lo - lij - li - ln - lt - lmo - ltg - lb - lua - lg - luo - lus - lvs - mag - mai - ml - mar - min - mk - mt - mni - mos - mi - my - nl - nn - nb - npi - nso - nus - ny - oc - ory - pag - pa - pap - pbt - pes - plt - pl - pt - prs - quy - ro - rn - ru - sg - sa - sat - scn - shn - si - sk - sl - sm - sn - sd - so - st - es - sc - sr - ss - su - sv - swh - szl - ta - taq - tt - te - tg - tl - th - ti - tpi - tn - ts - tk - tum - tr - tw - tzm - ug - uk - umb - ur - uzn - vec - vi - war - wo - xh - ydd - yo - yue - zh - zsm - zu language_details: "ace_Arab, ace_Latn, acm_Arab, acq_Arab, aeb_Arab, afr_Latn, ajp_Arab, aka_Latn, amh_Ethi, apc_Arab, arb_Arab, ars_Arab, ary_Arab, arz_Arab, asm_Beng, ast_Latn, awa_Deva, ayr_Latn, azb_Arab, azj_Latn, bak_Cyrl, bam_Latn, ban_Latn,bel_Cyrl, bem_Latn, ben_Beng, bho_Deva, bjn_Arab, bjn_Latn, bod_Tibt, bos_Latn, bug_Latn, bul_Cyrl, cat_Latn, ceb_Latn, ces_Latn, cjk_Latn, ckb_Arab, crh_Latn, cym_Latn, dan_Latn, deu_Latn, dik_Latn, dyu_Latn, dzo_Tibt, ell_Grek, eng_Latn, epo_Latn, est_Latn, eus_Latn, ewe_Latn, fao_Latn, pes_Arab, fij_Latn, fin_Latn, fon_Latn, fra_Latn, fur_Latn, fuv_Latn, gla_Latn, gle_Latn, glg_Latn, grn_Latn, guj_Gujr, hat_Latn, hau_Latn, heb_Hebr, hin_Deva, hne_Deva, hrv_Latn, hun_Latn, hye_Armn, ibo_Latn, ilo_Latn, ind_Latn, isl_Latn, ita_Latn, jav_Latn, jpn_Jpan, kab_Latn, kac_Latn, kam_Latn, kan_Knda, kas_Arab, kas_Deva, kat_Geor, knc_Arab, knc_Latn, kaz_Cyrl, kbp_Latn, kea_Latn, khm_Khmr, kik_Latn, kin_Latn, kir_Cyrl, kmb_Latn, kon_Latn, kor_Hang, kmr_Latn, lao_Laoo, lvs_Latn, lij_Latn, lim_Latn, lin_Latn, lit_Latn, lmo_Latn, ltg_Latn, ltz_Latn, lua_Latn, lug_Latn, luo_Latn, lus_Latn, mag_Deva, mai_Deva, mal_Mlym, mar_Deva, min_Latn, mkd_Cyrl, plt_Latn, mlt_Latn, mni_Beng, khk_Cyrl, mos_Latn, mri_Latn, zsm_Latn, mya_Mymr, nld_Latn, nno_Latn, nob_Latn, npi_Deva, nso_Latn, nus_Latn, nya_Latn, oci_Latn, gaz_Latn, ory_Orya, pag_Latn, pan_Guru, pap_Latn, pol_Latn, por_Latn, prs_Arab, pbt_Arab, quy_Latn, ron_Latn, run_Latn, rus_Cyrl, sag_Latn, san_Deva, sat_Beng, scn_Latn, shn_Mymr, sin_Sinh, slk_Latn, slv_Latn, smo_Latn, sna_Latn, snd_Arab, som_Latn, sot_Latn, spa_Latn, als_Latn, srd_Latn, srp_Cyrl, ssw_Latn, sun_Latn, swe_Latn, swh_Latn, szl_Latn, tam_Taml, tat_Cyrl, tel_Telu, tgk_Cyrl, tgl_Latn, tha_Thai, tir_Ethi, taq_Latn, taq_Tfng, tpi_Latn, tsn_Latn, tso_Latn, tuk_Latn, tum_Latn, tur_Latn, twi_Latn, tzm_Tfng, uig_Arab, ukr_Cyrl, umb_Latn, urd_Arab, uzn_Latn, vec_Latn, vie_Latn, war_Latn, wol_Latn, xho_Latn, ydd_Hebr, yor_Latn, yue_Hant, zho_Hans, zho_Hant, zul_Latn" pipeline_tag: translation tags: - nllb license: "cc-by-nc-4.0" datasets: - flores-200 metrics: - bleu - spbleu - chrf++ inference: false --- # NLLB-200 This is the model card of NLLB-200's distilled 600M variant. Here are the [metrics](https://tinyurl.com/nllb200densedst600mmetrics) for that particular checkpoint. - Information about training algorithms, parameters, fairness constraints or other applied approaches, and features. 
The exact training algorithm, data, and the strategies to handle data imbalances for high- and low-resource languages that were used to train NLLB-200 are described in the paper. - Paper or other resource for more information: NLLB Team et al., No Language Left Behind: Scaling Human-Centered Machine Translation, arXiv, 2022 - License: CC-BY-NC - Where to send questions or comments about the model: https://github.com/facebookresearch/fairseq/issues ## Intended Use - Primary intended uses: NLLB-200 is a machine translation model primarily intended for research in machine translation, especially for low-resource languages. It allows for single sentence translation among 200 languages. Information on how to use the model can be found in the Fairseq code repository along with the training code and references to evaluation and training data. - Primary intended users: Primary users are researchers and the machine translation research community. - Out-of-scope use cases: NLLB-200 is a research model and is not released for production deployment. NLLB-200 is trained on general domain text data and is not intended to be used with domain-specific texts, such as those from the medical or legal domains. The model is not intended to be used for document translation. The model was trained with input lengths not exceeding 512 tokens, therefore translating longer sequences might result in quality degradation. NLLB-200 translations cannot be used as certified translations. ## Metrics - Model performance measures: The NLLB-200 model was evaluated using the BLEU, spBLEU, and chrF++ metrics widely adopted by the machine translation community. Additionally, we performed human evaluation with the XSTS protocol and measured the toxicity of the generated translations. ## Evaluation Data - Datasets: The Flores-200 dataset is described in Section 4. - Motivation: We used Flores-200 as it provides full evaluation coverage of the languages in NLLB-200. - Preprocessing: Sentence-split raw text data was preprocessed using SentencePiece. The SentencePiece model is released along with NLLB-200. ## Training Data - We used parallel multilingual data from a variety of sources to train the model. We provide a detailed report on the data selection and construction process in Section 5 of the paper. We also used monolingual data constructed from Common Crawl. We provide more details in Section 5.2. ## Ethical Considerations - In this work, we took a reflexive approach in technological development to ensure that we prioritize human users and minimize risks that could be transferred to them. While we reflect on our ethical considerations throughout the article, here are some additional points to highlight. For one, many languages chosen for this study are low-resource languages, with a heavy emphasis on African languages. While quality translation could improve education and information access in many of these communities, such access could also make groups with lower levels of digital literacy more vulnerable to misinformation or online scams. The latter scenarios could arise if bad actors misappropriate our work for nefarious activities, which we conceive as an example of unintended use. Regarding data acquisition, the training data used for model development were mined from various publicly available sources on the web. Although we invested heavily in data cleaning, personally identifiable information may not be entirely eliminated. Finally, although we did our best to optimize for translation quality, mistranslations produced by the model could remain. 
Although the odds are low, this could have an adverse impact on those who rely on these translations to make important decisions (particularly when related to health and safety). ## Caveats and Recommendations - Our model has been tested on the Wikimedia domain with limited investigation of other domains supported in NLLB-MD. In addition, the supported languages may have variations that our model is not capturing. Users should make appropriate assessments. ## Carbon Footprint Details - The carbon dioxide (CO2e) estimate is reported in Section 8.8.
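## Example Usage The card ships no code; here is a minimal translation sketch of ours (not from the NLLB authors), assuming the FLORES-200 codes listed in `language_details` above (`eng_Latn` → `fra_Latn` in this example):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# src_lang selects the source FLORES-200 code; the target is forced at generation time.
tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M", src_lang="eng_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M")

inputs = tokenizer("NLLB-200 allows for single sentence translation among 200 languages.", return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("fra_Latn"),  # target language token
    max_length=128,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```

Note the 512-token input limit mentioned above; longer inputs should be split into sentences first.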
Systran/faster-whisper-base
Systran
"2023-11-23T11:02:28Z"
374,923
9
ctranslate2
[ "ctranslate2", "audio", "automatic-speech-recognition", "en", "zh", "de", "es", "ru", "ko", "fr", "ja", "pt", "tr", "pl", "ca", "nl", "ar", "sv", "it", "id", "hi", "fi", "vi", "he", "uk", "el", "ms", "cs", "ro", "da", "hu", "ta", "no", "th", "ur", "hr", "bg", "lt", "la", "mi", "ml", "cy", "sk", "te", "fa", "lv", "bn", "sr", "az", "sl", "kn", "et", "mk", "br", "eu", "is", "hy", "ne", "mn", "bs", "kk", "sq", "sw", "gl", "mr", "pa", "si", "km", "sn", "yo", "so", "af", "oc", "ka", "be", "tg", "sd", "gu", "am", "yi", "lo", "uz", "fo", "ht", "ps", "tk", "nn", "mt", "sa", "lb", "my", "bo", "tl", "mg", "as", "tt", "haw", "ln", "ha", "ba", "jw", "su", "license:mit", "region:us" ]
automatic-speech-recognition
"2023-11-23T09:52:40Z"
--- language: - en - zh - de - es - ru - ko - fr - ja - pt - tr - pl - ca - nl - ar - sv - it - id - hi - fi - vi - he - uk - el - ms - cs - ro - da - hu - ta - 'no' - th - ur - hr - bg - lt - la - mi - ml - cy - sk - te - fa - lv - bn - sr - az - sl - kn - et - mk - br - eu - is - hy - ne - mn - bs - kk - sq - sw - gl - mr - pa - si - km - sn - yo - so - af - oc - ka - be - tg - sd - gu - am - yi - lo - uz - fo - ht - ps - tk - nn - mt - sa - lb - my - bo - tl - mg - as - tt - haw - ln - ha - ba - jw - su tags: - audio - automatic-speech-recognition license: mit library_name: ctranslate2 --- # Whisper base model for CTranslate2 This repository contains the conversion of [openai/whisper-base](https://huggingface.co/openai/whisper-base) to the [CTranslate2](https://github.com/OpenNMT/CTranslate2) model format. This model can be used in CTranslate2 or projects based on CTranslate2 such as [faster-whisper](https://github.com/systran/faster-whisper). ## Example ```python from faster_whisper import WhisperModel model = WhisperModel("base") segments, info = model.transcribe("audio.mp3") for segment in segments: print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text)) ``` ## Conversion details The original model was converted with the following command: ``` ct2-transformers-converter --model openai/whisper-base --output_dir faster-whisper-base \ --copy_files tokenizer.json --quantization float16 ``` Note that the model weights are saved in FP16. This type can be changed when the model is loaded using the [`compute_type` option in CTranslate2](https://opennmt.net/CTranslate2/quantization.html). ## More information **For more information about the original model, see its [model card](https://huggingface.co/openai/whisper-base).**
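Following the `compute_type` note above, a small sketch of ours (not from the original card) that loads the stored FP16 weights under a different computation type, using faster-whisper's documented `device` and `compute_type` arguments:

```python
from faster_whisper import WhisperModel

# Load the FP16 weights as INT8 on CPU; CTranslate2 converts at load time.
model = WhisperModel("base", device="cpu", compute_type="int8")

segments, info = model.transcribe("audio.mp3")
print("Detected language:", info.language)
for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```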
echarlaix/tiny-random-mistral
echarlaix
"2023-10-06T09:06:13Z"
372,881
1
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2023-10-06T08:53:48Z"
--- license: apache-2.0 ---
google/vit-large-patch32-384
google
"2022-01-28T10:24:24Z"
371,567
14
transformers
[ "transformers", "pytorch", "tf", "jax", "vit", "image-classification", "vision", "dataset:imagenet", "dataset:imagenet-21k", "arxiv:2010.11929", "arxiv:2006.03677", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2022-03-02T23:29:05Z"
--- license: apache-2.0 tags: - image-classification - vision datasets: - imagenet - imagenet-21k --- # Vision Transformer (large-sized model) Vision Transformer (ViT) model pre-trained on ImageNet-21k (14 million images, 21,843 classes) at resolution 224x224, and fine-tuned on ImageNet 2012 (1 million images, 1,000 classes) at resolution 384x384. It was introduced in the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Dosovitskiy et al. and first released in [this repository](https://github.com/google-research/vision_transformer). However, the weights were converted from the [timm repository](https://github.com/rwightman/pytorch-image-models) by Ross Wightman, who already converted the weights from JAX to PyTorch. Credits go to him. Disclaimer: The team releasing ViT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. Next, the model was fine-tuned on ImageNet (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, at a higher resolution of 384x384. Images are presented to the model as a sequence of fixed-size patches (resolution 32x32), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder. By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=google/vit) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import ViTFeatureExtractor, ViTForImageClassification from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) feature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-large-patch32-384') model = ViTForImageClassification.from_pretrained('google/vit-large-patch32-384') inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # model predicts one of the 1000 ImageNet classes predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ``` Currently, both the feature extractor and model support PyTorch. Tensorflow and JAX/FLAX are coming soon, and the API of ViTFeatureExtractor might change. 
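To illustrate the feature-extraction use sketched in the model description (placing a classifier on top of the [CLS] token), here is a short example of ours, not from the original card, that pulls the [CLS] hidden state with `ViTModel` (the classification head is simply dropped at load time):

```python
from transformers import ViTFeatureExtractor, ViTModel
from PIL import Image
import requests

url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-large-patch32-384')
model = ViTModel.from_pretrained('google/vit-large-patch32-384')

inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)

# The last hidden state of the [CLS] token (position 0) is an image-level representation.
cls_embedding = outputs.last_hidden_state[:, 0]
print(cls_embedding.shape)  # (1, 1024) for the large variant
```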
## Training data The ViT model was pretrained on [ImageNet-21k](http://www.image-net.org/), a dataset consisting of 14 million images and 21k classes, and fine-tuned on [ImageNet](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes. ## Training procedure ### Preprocessing The exact details of preprocessing of images during training/validation can be found [here](https://github.com/google-research/vision_transformer/blob/master/vit_jax/input_pipeline.py). Images are resized/rescaled to the same resolution (224x224 during pre-training, 384x384 during fine-tuning) and normalized across the RGB channels with mean (0.5, 0.5, 0.5) and standard deviation (0.5, 0.5, 0.5). ### Pretraining The model was trained on TPUv3 hardware (8 cores). All model variants are trained with a batch size of 4096 and learning rate warmup of 10k steps. For ImageNet, the authors found it beneficial to additionally apply gradient clipping at global norm 1. Pre-training resolution is 224. ## Evaluation results For evaluation results on several image classification benchmarks, we refer to tables 2 and 5 of the original paper. Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance. ### BibTeX entry and citation info ```bibtex @misc{dosovitskiy2020image, title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale}, author={Alexey Dosovitskiy and Lucas Beyer and Alexander Kolesnikov and Dirk Weissenborn and Xiaohua Zhai and Thomas Unterthiner and Mostafa Dehghani and Matthias Minderer and Georg Heigold and Sylvain Gelly and Jakob Uszkoreit and Neil Houlsby}, year={2020}, eprint={2010.11929}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` ```bibtex @inproceedings{deng2009imagenet, title={Imagenet: A large-scale hierarchical image database}, author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li}, booktitle={2009 IEEE conference on computer vision and pattern recognition}, pages={248--255}, year={2009}, organization={IEEE} } ```
avsolatorio/GIST-large-Embedding-v0
avsolatorio
"2024-02-28T00:34:23Z"
367,499
11
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "mteb", "sentence-similarity", "en", "arxiv:2402.16829", "arxiv:2212.09741", "license:mit", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
"2024-02-14T18:26:25Z"
--- language: - en library_name: sentence-transformers license: mit pipeline_tag: sentence-similarity tags: - feature-extraction - mteb - sentence-similarity - sentence-transformers model-index: - name: GIST-large-Embedding-v0 results: - task: type: Classification dataset: type: mteb/amazon_counterfactual name: MTEB AmazonCounterfactualClassification (en) config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 75.5820895522388 - type: ap value: 38.32190121241783 - type: f1 value: 69.44777155231054 - task: type: Classification dataset: type: mteb/amazon_polarity name: MTEB AmazonPolarityClassification config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 93.40514999999998 - type: ap value: 90.2011565132406 - type: f1 value: 93.39486246843605 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (en) config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 49.05999999999999 - type: f1 value: 48.58702718571088 - task: type: Retrieval dataset: type: arguana name: MTEB ArguAna config: default split: test revision: None metrics: - type: map_at_1 value: 38.407000000000004 - type: map_at_10 value: 54.822 - type: map_at_100 value: 55.387 - type: map_at_1000 value: 55.388999999999996 - type: map_at_3 value: 50.308 - type: map_at_5 value: 53.199 - type: mrr_at_1 value: 39.900000000000006 - type: mrr_at_10 value: 55.385 - type: mrr_at_100 value: 55.936 - type: mrr_at_1000 value: 55.93900000000001 - type: mrr_at_3 value: 50.853 - type: mrr_at_5 value: 53.738 - type: ndcg_at_1 value: 38.407000000000004 - type: ndcg_at_10 value: 63.38 - type: ndcg_at_100 value: 65.52900000000001 - type: ndcg_at_1000 value: 65.58800000000001 - type: ndcg_at_3 value: 54.26 - type: ndcg_at_5 value: 59.488 - type: precision_at_1 value: 38.407000000000004 - type: precision_at_10 value: 9.04 - type: precision_at_100 value: 0.992 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 21.906 - type: precision_at_5 value: 15.690000000000001 - type: recall_at_1 value: 38.407000000000004 - type: recall_at_10 value: 90.398 - type: recall_at_100 value: 99.21799999999999 - type: recall_at_1000 value: 99.644 - type: recall_at_3 value: 65.718 - type: recall_at_5 value: 78.45 - task: type: Clustering dataset: type: mteb/arxiv-clustering-p2p name: MTEB ArxivClusteringP2P config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 48.49766333679089 - task: type: Clustering dataset: type: mteb/arxiv-clustering-s2s name: MTEB ArxivClusteringS2S config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 42.57731111438094 - task: type: Reranking dataset: type: mteb/askubuntudupquestions-reranking name: MTEB AskUbuntuDupQuestions config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 64.70120072857361 - type: mrr value: 77.86714593501297 - task: type: STS dataset: type: mteb/biosses-sts name: MTEB BIOSSES config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 90.73821860690765 - type: cos_sim_spearman value: 89.17070651383446 - type: euclidean_pearson value: 88.28303958293029 - type: euclidean_spearman value: 88.81889126856979 - type: manhattan_pearson value: 88.09080621828731 - type: manhattan_spearman 
value: 88.55924679817751 - task: type: Classification dataset: type: mteb/banking77 name: MTEB Banking77Classification config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 88.10064935064933 - type: f1 value: 88.08460758973867 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-p2p name: MTEB BiorxivClusteringP2P config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 39.338228337929976 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-s2s name: MTEB BiorxivClusteringS2S config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 36.179156232378226 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackAndroidRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 33.440999999999995 - type: map_at_10 value: 45.495000000000005 - type: map_at_100 value: 47.132000000000005 - type: map_at_1000 value: 47.253 - type: map_at_3 value: 41.766 - type: map_at_5 value: 43.873 - type: mrr_at_1 value: 40.772999999999996 - type: mrr_at_10 value: 51.627 - type: mrr_at_100 value: 52.364 - type: mrr_at_1000 value: 52.397000000000006 - type: mrr_at_3 value: 48.951 - type: mrr_at_5 value: 50.746 - type: ndcg_at_1 value: 40.772999999999996 - type: ndcg_at_10 value: 52.306 - type: ndcg_at_100 value: 57.753 - type: ndcg_at_1000 value: 59.36900000000001 - type: ndcg_at_3 value: 47.177 - type: ndcg_at_5 value: 49.71 - type: precision_at_1 value: 40.772999999999996 - type: precision_at_10 value: 10.129000000000001 - type: precision_at_100 value: 1.617 - type: precision_at_1000 value: 0.208 - type: precision_at_3 value: 22.985 - type: precision_at_5 value: 16.652 - type: recall_at_1 value: 33.440999999999995 - type: recall_at_10 value: 65.121 - type: recall_at_100 value: 87.55199999999999 - type: recall_at_1000 value: 97.41300000000001 - type: recall_at_3 value: 49.958999999999996 - type: recall_at_5 value: 57.14900000000001 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackEnglishRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 32.126 - type: map_at_10 value: 42.856 - type: map_at_100 value: 44.134 - type: map_at_1000 value: 44.274 - type: map_at_3 value: 39.594 - type: map_at_5 value: 41.504999999999995 - type: mrr_at_1 value: 40.127 - type: mrr_at_10 value: 48.736000000000004 - type: mrr_at_100 value: 49.303999999999995 - type: mrr_at_1000 value: 49.356 - type: mrr_at_3 value: 46.263 - type: mrr_at_5 value: 47.878 - type: ndcg_at_1 value: 40.127 - type: ndcg_at_10 value: 48.695 - type: ndcg_at_100 value: 52.846000000000004 - type: ndcg_at_1000 value: 54.964 - type: ndcg_at_3 value: 44.275 - type: ndcg_at_5 value: 46.54 - type: precision_at_1 value: 40.127 - type: precision_at_10 value: 9.229 - type: precision_at_100 value: 1.473 - type: precision_at_1000 value: 0.19499999999999998 - type: precision_at_3 value: 21.444 - type: precision_at_5 value: 15.389 - type: recall_at_1 value: 32.126 - type: recall_at_10 value: 58.971 - type: recall_at_100 value: 76.115 - type: recall_at_1000 value: 89.556 - type: recall_at_3 value: 45.891 - type: recall_at_5 value: 52.242 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGamingRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 41.312 - type: map_at_10 value: 54.510000000000005 - type: map_at_100 value: 
55.544000000000004 - type: map_at_1000 value: 55.593 - type: map_at_3 value: 50.859 - type: map_at_5 value: 52.839999999999996 - type: mrr_at_1 value: 47.147 - type: mrr_at_10 value: 57.678 - type: mrr_at_100 value: 58.287 - type: mrr_at_1000 value: 58.312 - type: mrr_at_3 value: 55.025999999999996 - type: mrr_at_5 value: 56.55 - type: ndcg_at_1 value: 47.147 - type: ndcg_at_10 value: 60.672000000000004 - type: ndcg_at_100 value: 64.411 - type: ndcg_at_1000 value: 65.35499999999999 - type: ndcg_at_3 value: 54.643 - type: ndcg_at_5 value: 57.461 - type: precision_at_1 value: 47.147 - type: precision_at_10 value: 9.881 - type: precision_at_100 value: 1.27 - type: precision_at_1000 value: 0.13799999999999998 - type: precision_at_3 value: 24.556 - type: precision_at_5 value: 16.814999999999998 - type: recall_at_1 value: 41.312 - type: recall_at_10 value: 75.62299999999999 - type: recall_at_100 value: 91.388 - type: recall_at_1000 value: 98.08 - type: recall_at_3 value: 59.40299999999999 - type: recall_at_5 value: 66.43900000000001 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGisRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 27.609 - type: map_at_10 value: 37.614 - type: map_at_100 value: 38.584 - type: map_at_1000 value: 38.652 - type: map_at_3 value: 34.731 - type: map_at_5 value: 36.308 - type: mrr_at_1 value: 29.944 - type: mrr_at_10 value: 39.829 - type: mrr_at_100 value: 40.659 - type: mrr_at_1000 value: 40.709 - type: mrr_at_3 value: 37.269000000000005 - type: mrr_at_5 value: 38.625 - type: ndcg_at_1 value: 29.944 - type: ndcg_at_10 value: 43.082 - type: ndcg_at_100 value: 47.857 - type: ndcg_at_1000 value: 49.612 - type: ndcg_at_3 value: 37.578 - type: ndcg_at_5 value: 40.135 - type: precision_at_1 value: 29.944 - type: precision_at_10 value: 6.678000000000001 - type: precision_at_100 value: 0.951 - type: precision_at_1000 value: 0.11399999999999999 - type: precision_at_3 value: 16.045 - type: precision_at_5 value: 11.073 - type: recall_at_1 value: 27.609 - type: recall_at_10 value: 57.718 - type: recall_at_100 value: 79.768 - type: recall_at_1000 value: 92.868 - type: recall_at_3 value: 42.876 - type: recall_at_5 value: 49.104 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackMathematicaRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 18.071 - type: map_at_10 value: 27.471 - type: map_at_100 value: 28.71 - type: map_at_1000 value: 28.833 - type: map_at_3 value: 24.698 - type: map_at_5 value: 26.461000000000002 - type: mrr_at_1 value: 22.387999999999998 - type: mrr_at_10 value: 32.522 - type: mrr_at_100 value: 33.393 - type: mrr_at_1000 value: 33.455 - type: mrr_at_3 value: 29.830000000000002 - type: mrr_at_5 value: 31.472 - type: ndcg_at_1 value: 22.387999999999998 - type: ndcg_at_10 value: 33.278999999999996 - type: ndcg_at_100 value: 39.043 - type: ndcg_at_1000 value: 41.763 - type: ndcg_at_3 value: 28.310999999999996 - type: ndcg_at_5 value: 31.007 - type: precision_at_1 value: 22.387999999999998 - type: precision_at_10 value: 6.157 - type: precision_at_100 value: 1.042 - type: precision_at_1000 value: 0.14200000000000002 - type: precision_at_3 value: 13.972000000000001 - type: precision_at_5 value: 10.274 - type: recall_at_1 value: 18.071 - type: recall_at_10 value: 46.025 - type: recall_at_100 value: 71.153 - type: recall_at_1000 value: 90.232 - type: recall_at_3 value: 32.311 - type: recall_at_5 value: 39.296 - task: type: Retrieval dataset: 
type: BeIR/cqadupstack name: MTEB CQADupstackPhysicsRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 30.813000000000002 - type: map_at_10 value: 42.594 - type: map_at_100 value: 43.949 - type: map_at_1000 value: 44.052 - type: map_at_3 value: 39.1 - type: map_at_5 value: 41.111 - type: mrr_at_1 value: 37.824999999999996 - type: mrr_at_10 value: 48.06 - type: mrr_at_100 value: 48.91 - type: mrr_at_1000 value: 48.946 - type: mrr_at_3 value: 45.509 - type: mrr_at_5 value: 47.073 - type: ndcg_at_1 value: 37.824999999999996 - type: ndcg_at_10 value: 48.882 - type: ndcg_at_100 value: 54.330999999999996 - type: ndcg_at_1000 value: 56.120999999999995 - type: ndcg_at_3 value: 43.529 - type: ndcg_at_5 value: 46.217999999999996 - type: precision_at_1 value: 37.824999999999996 - type: precision_at_10 value: 8.845 - type: precision_at_100 value: 1.34 - type: precision_at_1000 value: 0.168 - type: precision_at_3 value: 20.757 - type: precision_at_5 value: 14.802999999999999 - type: recall_at_1 value: 30.813000000000002 - type: recall_at_10 value: 61.895999999999994 - type: recall_at_100 value: 84.513 - type: recall_at_1000 value: 95.817 - type: recall_at_3 value: 47.099000000000004 - type: recall_at_5 value: 54.031 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackProgrammersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 25.735999999999997 - type: map_at_10 value: 36.799 - type: map_at_100 value: 38.246 - type: map_at_1000 value: 38.353 - type: map_at_3 value: 33.133 - type: map_at_5 value: 34.954 - type: mrr_at_1 value: 31.849 - type: mrr_at_10 value: 41.928 - type: mrr_at_100 value: 42.846000000000004 - type: mrr_at_1000 value: 42.894 - type: mrr_at_3 value: 39.117000000000004 - type: mrr_at_5 value: 40.521 - type: ndcg_at_1 value: 31.849 - type: ndcg_at_10 value: 43.143 - type: ndcg_at_100 value: 48.963 - type: ndcg_at_1000 value: 51.041000000000004 - type: ndcg_at_3 value: 37.218 - type: ndcg_at_5 value: 39.542 - type: precision_at_1 value: 31.849 - type: precision_at_10 value: 8.231 - type: precision_at_100 value: 1.277 - type: precision_at_1000 value: 0.164 - type: precision_at_3 value: 18.037 - type: precision_at_5 value: 12.945 - type: recall_at_1 value: 25.735999999999997 - type: recall_at_10 value: 56.735 - type: recall_at_100 value: 81.04 - type: recall_at_1000 value: 94.845 - type: recall_at_3 value: 40.239999999999995 - type: recall_at_5 value: 46.378 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 27.580333333333336 - type: map_at_10 value: 37.70558333333334 - type: map_at_100 value: 38.94941666666667 - type: map_at_1000 value: 39.062083333333334 - type: map_at_3 value: 34.63333333333334 - type: map_at_5 value: 36.35241666666666 - type: mrr_at_1 value: 32.64866666666667 - type: mrr_at_10 value: 42.018499999999996 - type: mrr_at_100 value: 42.83391666666666 - type: mrr_at_1000 value: 42.884166666666665 - type: mrr_at_3 value: 39.476499999999994 - type: mrr_at_5 value: 40.96983333333334 - type: ndcg_at_1 value: 32.64866666666667 - type: ndcg_at_10 value: 43.43866666666667 - type: ndcg_at_100 value: 48.569833333333335 - type: ndcg_at_1000 value: 50.6495 - type: ndcg_at_3 value: 38.327166666666656 - type: ndcg_at_5 value: 40.76941666666667 - type: precision_at_1 value: 32.64866666666667 - type: precision_at_10 value: 7.652333333333332 - type: precision_at_100 value: 
1.2066666666666666 - type: precision_at_1000 value: 0.15841666666666668 - type: precision_at_3 value: 17.75108333333333 - type: precision_at_5 value: 12.641916666666669 - type: recall_at_1 value: 27.580333333333336 - type: recall_at_10 value: 56.02591666666667 - type: recall_at_100 value: 78.317 - type: recall_at_1000 value: 92.52608333333332 - type: recall_at_3 value: 41.84283333333333 - type: recall_at_5 value: 48.105666666666664 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackStatsRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 27.876 - type: map_at_10 value: 34.521 - type: map_at_100 value: 35.581 - type: map_at_1000 value: 35.674 - type: map_at_3 value: 32.501000000000005 - type: map_at_5 value: 33.602 - type: mrr_at_1 value: 31.441999999999997 - type: mrr_at_10 value: 37.669999999999995 - type: mrr_at_100 value: 38.523 - type: mrr_at_1000 value: 38.59 - type: mrr_at_3 value: 35.762 - type: mrr_at_5 value: 36.812 - type: ndcg_at_1 value: 31.441999999999997 - type: ndcg_at_10 value: 38.46 - type: ndcg_at_100 value: 43.479 - type: ndcg_at_1000 value: 45.858 - type: ndcg_at_3 value: 34.668 - type: ndcg_at_5 value: 36.416 - type: precision_at_1 value: 31.441999999999997 - type: precision_at_10 value: 5.782 - type: precision_at_100 value: 0.91 - type: precision_at_1000 value: 0.11900000000000001 - type: precision_at_3 value: 14.417 - type: precision_at_5 value: 9.876999999999999 - type: recall_at_1 value: 27.876 - type: recall_at_10 value: 47.556 - type: recall_at_100 value: 70.39699999999999 - type: recall_at_1000 value: 87.969 - type: recall_at_3 value: 37.226 - type: recall_at_5 value: 41.43 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackTexRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 18.854000000000003 - type: map_at_10 value: 26.632 - type: map_at_100 value: 27.849 - type: map_at_1000 value: 27.977 - type: map_at_3 value: 24.089 - type: map_at_5 value: 25.477 - type: mrr_at_1 value: 22.987 - type: mrr_at_10 value: 30.781999999999996 - type: mrr_at_100 value: 31.746000000000002 - type: mrr_at_1000 value: 31.818 - type: mrr_at_3 value: 28.43 - type: mrr_at_5 value: 29.791 - type: ndcg_at_1 value: 22.987 - type: ndcg_at_10 value: 31.585 - type: ndcg_at_100 value: 37.32 - type: ndcg_at_1000 value: 40.072 - type: ndcg_at_3 value: 27.058 - type: ndcg_at_5 value: 29.137999999999998 - type: precision_at_1 value: 22.987 - type: precision_at_10 value: 5.76 - type: precision_at_100 value: 1.018 - type: precision_at_1000 value: 0.14400000000000002 - type: precision_at_3 value: 12.767000000000001 - type: precision_at_5 value: 9.257 - type: recall_at_1 value: 18.854000000000003 - type: recall_at_10 value: 42.349 - type: recall_at_100 value: 68.15299999999999 - type: recall_at_1000 value: 87.44 - type: recall_at_3 value: 29.715999999999998 - type: recall_at_5 value: 35.085 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackUnixRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 28.094 - type: map_at_10 value: 38.22 - type: map_at_100 value: 39.352 - type: map_at_1000 value: 39.452 - type: map_at_3 value: 35.339 - type: map_at_5 value: 36.78 - type: mrr_at_1 value: 33.022 - type: mrr_at_10 value: 42.466 - type: mrr_at_100 value: 43.3 - type: mrr_at_1000 value: 43.356 - type: mrr_at_3 value: 40.159 - type: mrr_at_5 value: 41.272999999999996 - type: ndcg_at_1 value: 33.022 - type: ndcg_at_10 value: 43.976 
- type: ndcg_at_100 value: 49.008 - type: ndcg_at_1000 value: 51.154999999999994 - type: ndcg_at_3 value: 38.891 - type: ndcg_at_5 value: 40.897 - type: precision_at_1 value: 33.022 - type: precision_at_10 value: 7.396999999999999 - type: precision_at_100 value: 1.1199999999999999 - type: precision_at_1000 value: 0.14200000000000002 - type: precision_at_3 value: 17.724 - type: precision_at_5 value: 12.239 - type: recall_at_1 value: 28.094 - type: recall_at_10 value: 57.162 - type: recall_at_100 value: 78.636 - type: recall_at_1000 value: 93.376 - type: recall_at_3 value: 43.328 - type: recall_at_5 value: 48.252 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWebmastersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 24.937 - type: map_at_10 value: 34.82 - type: map_at_100 value: 36.405 - type: map_at_1000 value: 36.626 - type: map_at_3 value: 31.548 - type: map_at_5 value: 33.355000000000004 - type: mrr_at_1 value: 30.435000000000002 - type: mrr_at_10 value: 39.946 - type: mrr_at_100 value: 40.873 - type: mrr_at_1000 value: 40.910000000000004 - type: mrr_at_3 value: 37.088 - type: mrr_at_5 value: 38.808 - type: ndcg_at_1 value: 30.435000000000002 - type: ndcg_at_10 value: 41.25 - type: ndcg_at_100 value: 47.229 - type: ndcg_at_1000 value: 49.395 - type: ndcg_at_3 value: 35.801 - type: ndcg_at_5 value: 38.457 - type: precision_at_1 value: 30.435000000000002 - type: precision_at_10 value: 8.083 - type: precision_at_100 value: 1.601 - type: precision_at_1000 value: 0.247 - type: precision_at_3 value: 17.061999999999998 - type: precision_at_5 value: 12.767000000000001 - type: recall_at_1 value: 24.937 - type: recall_at_10 value: 53.905 - type: recall_at_100 value: 80.607 - type: recall_at_1000 value: 93.728 - type: recall_at_3 value: 38.446000000000005 - type: recall_at_5 value: 45.188 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWordpressRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 22.095000000000002 - type: map_at_10 value: 30.935000000000002 - type: map_at_100 value: 31.907000000000004 - type: map_at_1000 value: 32.006 - type: map_at_3 value: 28.242 - type: map_at_5 value: 29.963 - type: mrr_at_1 value: 23.845 - type: mrr_at_10 value: 32.978 - type: mrr_at_100 value: 33.802 - type: mrr_at_1000 value: 33.867000000000004 - type: mrr_at_3 value: 30.314000000000004 - type: mrr_at_5 value: 32.089 - type: ndcg_at_1 value: 23.845 - type: ndcg_at_10 value: 35.934 - type: ndcg_at_100 value: 40.598 - type: ndcg_at_1000 value: 43.089 - type: ndcg_at_3 value: 30.776999999999997 - type: ndcg_at_5 value: 33.711999999999996 - type: precision_at_1 value: 23.845 - type: precision_at_10 value: 5.656 - type: precision_at_100 value: 0.861 - type: precision_at_1000 value: 0.12 - type: precision_at_3 value: 13.247 - type: precision_at_5 value: 9.612 - type: recall_at_1 value: 22.095000000000002 - type: recall_at_10 value: 49.25 - type: recall_at_100 value: 70.482 - type: recall_at_1000 value: 88.98899999999999 - type: recall_at_3 value: 35.619 - type: recall_at_5 value: 42.674 - task: type: Retrieval dataset: type: climate-fever name: MTEB ClimateFEVER config: default split: test revision: None metrics: - type: map_at_1 value: 14.154 - type: map_at_10 value: 24.654999999999998 - type: map_at_100 value: 26.723999999999997 - type: map_at_1000 value: 26.912000000000003 - type: map_at_3 value: 20.4 - type: map_at_5 value: 22.477 - type: mrr_at_1 value: 32.117000000000004 - 
type: mrr_at_10 value: 44.590999999999994 - type: mrr_at_100 value: 45.425 - type: mrr_at_1000 value: 45.456 - type: mrr_at_3 value: 41.281 - type: mrr_at_5 value: 43.219 - type: ndcg_at_1 value: 32.117000000000004 - type: ndcg_at_10 value: 33.994 - type: ndcg_at_100 value: 41.438 - type: ndcg_at_1000 value: 44.611000000000004 - type: ndcg_at_3 value: 27.816000000000003 - type: ndcg_at_5 value: 29.816 - type: precision_at_1 value: 32.117000000000004 - type: precision_at_10 value: 10.756 - type: precision_at_100 value: 1.8679999999999999 - type: precision_at_1000 value: 0.246 - type: precision_at_3 value: 20.803 - type: precision_at_5 value: 15.987000000000002 - type: recall_at_1 value: 14.154 - type: recall_at_10 value: 40.489999999999995 - type: recall_at_100 value: 65.635 - type: recall_at_1000 value: 83.276 - type: recall_at_3 value: 25.241000000000003 - type: recall_at_5 value: 31.211 - task: type: Retrieval dataset: type: dbpedia-entity name: MTEB DBPedia config: default split: test revision: None metrics: - type: map_at_1 value: 9.332 - type: map_at_10 value: 20.462 - type: map_at_100 value: 29.473 - type: map_at_1000 value: 31.215 - type: map_at_3 value: 14.466999999999999 - type: map_at_5 value: 16.922 - type: mrr_at_1 value: 69.5 - type: mrr_at_10 value: 77.039 - type: mrr_at_100 value: 77.265 - type: mrr_at_1000 value: 77.271 - type: mrr_at_3 value: 75.5 - type: mrr_at_5 value: 76.4 - type: ndcg_at_1 value: 57.125 - type: ndcg_at_10 value: 42.958 - type: ndcg_at_100 value: 48.396 - type: ndcg_at_1000 value: 55.897 - type: ndcg_at_3 value: 47.188 - type: ndcg_at_5 value: 44.376 - type: precision_at_1 value: 69.5 - type: precision_at_10 value: 34.5 - type: precision_at_100 value: 11.18 - type: precision_at_1000 value: 2.13 - type: precision_at_3 value: 51.083 - type: precision_at_5 value: 43.1 - type: recall_at_1 value: 9.332 - type: recall_at_10 value: 26.422 - type: recall_at_100 value: 56.098000000000006 - type: recall_at_1000 value: 79.66 - type: recall_at_3 value: 15.703 - type: recall_at_5 value: 19.644000000000002 - task: type: Classification dataset: type: mteb/emotion name: MTEB EmotionClassification config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 54.72 - type: f1 value: 49.67819606587526 - task: type: Retrieval dataset: type: fever name: MTEB FEVER config: default split: test revision: None metrics: - type: map_at_1 value: 74.97 - type: map_at_10 value: 82.956 - type: map_at_100 value: 83.193 - type: map_at_1000 value: 83.208 - type: map_at_3 value: 81.837 - type: map_at_5 value: 82.57 - type: mrr_at_1 value: 80.783 - type: mrr_at_10 value: 87.546 - type: mrr_at_100 value: 87.627 - type: mrr_at_1000 value: 87.63 - type: mrr_at_3 value: 86.79400000000001 - type: mrr_at_5 value: 87.32799999999999 - type: ndcg_at_1 value: 80.783 - type: ndcg_at_10 value: 86.54899999999999 - type: ndcg_at_100 value: 87.355 - type: ndcg_at_1000 value: 87.629 - type: ndcg_at_3 value: 84.82 - type: ndcg_at_5 value: 85.83800000000001 - type: precision_at_1 value: 80.783 - type: precision_at_10 value: 10.327 - type: precision_at_100 value: 1.094 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 32.218 - type: precision_at_5 value: 20.012 - type: recall_at_1 value: 74.97 - type: recall_at_10 value: 93.072 - type: recall_at_100 value: 96.218 - type: recall_at_1000 value: 97.991 - type: recall_at_3 value: 88.357 - type: recall_at_5 value: 90.983 - task: type: Retrieval dataset: type: fiqa name: MTEB 
FiQA2018 config: default split: test revision: None metrics: - type: map_at_1 value: 21.12 - type: map_at_10 value: 35.908 - type: map_at_100 value: 37.895 - type: map_at_1000 value: 38.068000000000005 - type: map_at_3 value: 31.189 - type: map_at_5 value: 33.908 - type: mrr_at_1 value: 42.901 - type: mrr_at_10 value: 52.578 - type: mrr_at_100 value: 53.308 - type: mrr_at_1000 value: 53.342 - type: mrr_at_3 value: 50.385999999999996 - type: mrr_at_5 value: 51.62799999999999 - type: ndcg_at_1 value: 42.901 - type: ndcg_at_10 value: 44.302 - type: ndcg_at_100 value: 51.132999999999996 - type: ndcg_at_1000 value: 53.848 - type: ndcg_at_3 value: 40.464 - type: ndcg_at_5 value: 41.743 - type: precision_at_1 value: 42.901 - type: precision_at_10 value: 12.423 - type: precision_at_100 value: 1.968 - type: precision_at_1000 value: 0.246 - type: precision_at_3 value: 27.622999999999998 - type: precision_at_5 value: 20.278 - type: recall_at_1 value: 21.12 - type: recall_at_10 value: 52.091 - type: recall_at_100 value: 77.062 - type: recall_at_1000 value: 93.082 - type: recall_at_3 value: 37.223 - type: recall_at_5 value: 43.826 - task: type: Retrieval dataset: type: hotpotqa name: MTEB HotpotQA config: default split: test revision: None metrics: - type: map_at_1 value: 38.940000000000005 - type: map_at_10 value: 62.239999999999995 - type: map_at_100 value: 63.141000000000005 - type: map_at_1000 value: 63.205999999999996 - type: map_at_3 value: 58.738 - type: map_at_5 value: 60.924 - type: mrr_at_1 value: 77.88000000000001 - type: mrr_at_10 value: 83.7 - type: mrr_at_100 value: 83.882 - type: mrr_at_1000 value: 83.889 - type: mrr_at_3 value: 82.748 - type: mrr_at_5 value: 83.381 - type: ndcg_at_1 value: 77.88000000000001 - type: ndcg_at_10 value: 70.462 - type: ndcg_at_100 value: 73.564 - type: ndcg_at_1000 value: 74.78099999999999 - type: ndcg_at_3 value: 65.524 - type: ndcg_at_5 value: 68.282 - type: precision_at_1 value: 77.88000000000001 - type: precision_at_10 value: 14.81 - type: precision_at_100 value: 1.7229999999999999 - type: precision_at_1000 value: 0.188 - type: precision_at_3 value: 42.083999999999996 - type: precision_at_5 value: 27.43 - type: recall_at_1 value: 38.940000000000005 - type: recall_at_10 value: 74.051 - type: recall_at_100 value: 86.158 - type: recall_at_1000 value: 94.146 - type: recall_at_3 value: 63.126000000000005 - type: recall_at_5 value: 68.575 - task: type: Classification dataset: type: mteb/imdb name: MTEB ImdbClassification config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 91.23440000000001 - type: ap value: 87.33490392265892 - type: f1 value: 91.21374626021836 - task: type: Retrieval dataset: type: msmarco name: MTEB MSMARCO config: default split: dev revision: None metrics: - type: map_at_1 value: 22.137999999999998 - type: map_at_10 value: 34.471000000000004 - type: map_at_100 value: 35.634 - type: map_at_1000 value: 35.685 - type: map_at_3 value: 30.587999999999997 - type: map_at_5 value: 32.812999999999995 - type: mrr_at_1 value: 22.736 - type: mrr_at_10 value: 35.092 - type: mrr_at_100 value: 36.193999999999996 - type: mrr_at_1000 value: 36.238 - type: mrr_at_3 value: 31.28 - type: mrr_at_5 value: 33.498 - type: ndcg_at_1 value: 22.736 - type: ndcg_at_10 value: 41.388999999999996 - type: ndcg_at_100 value: 46.967999999999996 - type: ndcg_at_1000 value: 48.178 - type: ndcg_at_3 value: 33.503 - type: ndcg_at_5 value: 37.484 - type: precision_at_1 value: 22.736 - type: precision_at_10 value: 6.54 - 
type: precision_at_100 value: 0.9339999999999999 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 14.249999999999998 - type: precision_at_5 value: 10.562000000000001 - type: recall_at_1 value: 22.137999999999998 - type: recall_at_10 value: 62.629999999999995 - type: recall_at_100 value: 88.375 - type: recall_at_1000 value: 97.529 - type: recall_at_3 value: 41.245 - type: recall_at_5 value: 50.808 - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (en) config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 95.25079799361606 - type: f1 value: 95.00726023695032 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (en) config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 78.23757409940721 - type: f1 value: 58.534958803195714 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (en) config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 76.20040349697378 - type: f1 value: 74.31261149784696 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (en) config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 79.35104236718227 - type: f1 value: 79.7373049864316 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-p2p name: MTEB MedrxivClusteringP2P config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 34.478828180753126 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-s2s name: MTEB MedrxivClusteringS2S config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 32.25696147904426 - task: type: Reranking dataset: type: mteb/mind_small name: MTEB MindSmallReranking config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 32.82488548405117 - type: mrr value: 34.066706809031096 - task: type: Retrieval dataset: type: nfcorpus name: MTEB NFCorpus config: default split: test revision: None metrics: - type: map_at_1 value: 6.557 - type: map_at_10 value: 15.055 - type: map_at_100 value: 19.575 - type: map_at_1000 value: 21.267 - type: map_at_3 value: 10.86 - type: map_at_5 value: 12.83 - type: mrr_at_1 value: 50.464 - type: mrr_at_10 value: 59.050999999999995 - type: mrr_at_100 value: 59.436 - type: mrr_at_1000 value: 59.476 - type: mrr_at_3 value: 56.811 - type: mrr_at_5 value: 58.08 - type: ndcg_at_1 value: 47.988 - type: ndcg_at_10 value: 38.645 - type: ndcg_at_100 value: 36.339 - type: ndcg_at_1000 value: 45.279 - type: ndcg_at_3 value: 43.35 - type: ndcg_at_5 value: 41.564 - type: precision_at_1 value: 49.845 - type: precision_at_10 value: 28.544999999999998 - type: precision_at_100 value: 9.322 - type: precision_at_1000 value: 2.258 - type: precision_at_3 value: 40.144000000000005 - type: precision_at_5 value: 35.913000000000004 - type: recall_at_1 value: 6.557 - type: recall_at_10 value: 19.5 - type: recall_at_100 value: 37.153999999999996 - type: recall_at_1000 value: 69.581 - type: recall_at_3 value: 12.133 - type: recall_at_5 value: 15.43 - task: type: Retrieval dataset: type: nq name: MTEB NQ config: default split: test revision: None metrics: - type: map_at_1 value: 31.740000000000002 - 
type: map_at_10 value: 48.150999999999996 - type: map_at_100 value: 49.125 - type: map_at_1000 value: 49.149 - type: map_at_3 value: 43.645 - type: map_at_5 value: 46.417 - type: mrr_at_1 value: 35.892 - type: mrr_at_10 value: 50.524 - type: mrr_at_100 value: 51.232 - type: mrr_at_1000 value: 51.24999999999999 - type: mrr_at_3 value: 46.852 - type: mrr_at_5 value: 49.146 - type: ndcg_at_1 value: 35.892 - type: ndcg_at_10 value: 56.08800000000001 - type: ndcg_at_100 value: 60.077000000000005 - type: ndcg_at_1000 value: 60.632 - type: ndcg_at_3 value: 47.765 - type: ndcg_at_5 value: 52.322 - type: precision_at_1 value: 35.892 - type: precision_at_10 value: 9.296 - type: precision_at_100 value: 1.154 - type: precision_at_1000 value: 0.12 - type: precision_at_3 value: 21.92 - type: precision_at_5 value: 15.781999999999998 - type: recall_at_1 value: 31.740000000000002 - type: recall_at_10 value: 77.725 - type: recall_at_100 value: 94.841 - type: recall_at_1000 value: 99.003 - type: recall_at_3 value: 56.407 - type: recall_at_5 value: 66.848 - task: type: Retrieval dataset: type: quora name: MTEB QuoraRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 71.429 - type: map_at_10 value: 85.42699999999999 - type: map_at_100 value: 86.063 - type: map_at_1000 value: 86.077 - type: map_at_3 value: 82.573 - type: map_at_5 value: 84.371 - type: mrr_at_1 value: 82.34 - type: mrr_at_10 value: 88.247 - type: mrr_at_100 value: 88.357 - type: mrr_at_1000 value: 88.357 - type: mrr_at_3 value: 87.38 - type: mrr_at_5 value: 87.981 - type: ndcg_at_1 value: 82.34 - type: ndcg_at_10 value: 88.979 - type: ndcg_at_100 value: 90.18599999999999 - type: ndcg_at_1000 value: 90.254 - type: ndcg_at_3 value: 86.378 - type: ndcg_at_5 value: 87.821 - type: precision_at_1 value: 82.34 - type: precision_at_10 value: 13.482 - type: precision_at_100 value: 1.537 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 37.852999999999994 - type: precision_at_5 value: 24.798000000000002 - type: recall_at_1 value: 71.429 - type: recall_at_10 value: 95.64099999999999 - type: recall_at_100 value: 99.723 - type: recall_at_1000 value: 99.98 - type: recall_at_3 value: 88.011 - type: recall_at_5 value: 92.246 - task: type: Clustering dataset: type: mteb/reddit-clustering name: MTEB RedditClustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 60.62148584103299 - task: type: Clustering dataset: type: mteb/reddit-clustering-p2p name: MTEB RedditClusteringP2P config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 63.2923987272903 - task: type: Retrieval dataset: type: scidocs name: MTEB SCIDOCS config: default split: test revision: None metrics: - type: map_at_1 value: 5.128 - type: map_at_10 value: 14.63 - type: map_at_100 value: 17.285 - type: map_at_1000 value: 17.676 - type: map_at_3 value: 9.993 - type: map_at_5 value: 12.286999999999999 - type: mrr_at_1 value: 25.4 - type: mrr_at_10 value: 38.423 - type: mrr_at_100 value: 39.497 - type: mrr_at_1000 value: 39.531 - type: mrr_at_3 value: 34.9 - type: mrr_at_5 value: 37.01 - type: ndcg_at_1 value: 25.4 - type: ndcg_at_10 value: 24.062 - type: ndcg_at_100 value: 33.823 - type: ndcg_at_1000 value: 39.663 - type: ndcg_at_3 value: 22.246 - type: ndcg_at_5 value: 19.761 - type: precision_at_1 value: 25.4 - type: precision_at_10 value: 12.85 - type: precision_at_100 value: 2.71 - type: precision_at_1000 value: 
0.41000000000000003 - type: precision_at_3 value: 21.4 - type: precision_at_5 value: 17.86 - type: recall_at_1 value: 5.128 - type: recall_at_10 value: 26.06 - type: recall_at_100 value: 54.993 - type: recall_at_1000 value: 83.165 - type: recall_at_3 value: 13.003 - type: recall_at_5 value: 18.117 - task: type: STS dataset: type: mteb/sickr-sts name: MTEB SICK-R config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 87.5466779326323 - type: cos_sim_spearman value: 82.79782085421951 - type: euclidean_pearson value: 84.76929982677339 - type: euclidean_spearman value: 82.51802536005597 - type: manhattan_pearson value: 84.76736312526177 - type: manhattan_spearman value: 82.50799656335593 - task: type: STS dataset: type: mteb/sts12-sts name: MTEB STS12 config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 86.40486308108694 - type: cos_sim_spearman value: 77.12670500926937 - type: euclidean_pearson value: 85.23836845503847 - type: euclidean_spearman value: 78.41475117006176 - type: manhattan_pearson value: 85.24302039610805 - type: manhattan_spearman value: 78.4053162562707 - task: type: STS dataset: type: mteb/sts13-sts name: MTEB STS13 config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 88.83570289087565 - type: cos_sim_spearman value: 89.28563503553643 - type: euclidean_pearson value: 87.77516003996445 - type: euclidean_spearman value: 88.8656149534085 - type: manhattan_pearson value: 87.75568872417946 - type: manhattan_spearman value: 88.80445489340585 - task: type: STS dataset: type: mteb/sts14-sts name: MTEB STS14 config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 86.776406555485 - type: cos_sim_spearman value: 83.8288465070091 - type: euclidean_pearson value: 85.37827999808123 - type: euclidean_spearman value: 84.11079529992739 - type: manhattan_pearson value: 85.35336495689121 - type: manhattan_spearman value: 84.08618492649347 - task: type: STS dataset: type: mteb/sts15-sts name: MTEB STS15 config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 88.57644404820684 - type: cos_sim_spearman value: 89.69728364350713 - type: euclidean_pearson value: 88.28202320389443 - type: euclidean_spearman value: 88.9560567319321 - type: manhattan_pearson value: 88.29461100044172 - type: manhattan_spearman value: 88.96030920678558 - task: type: STS dataset: type: mteb/sts16-sts name: MTEB STS16 config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 85.05211938460621 - type: cos_sim_spearman value: 86.43413865667489 - type: euclidean_pearson value: 85.62760689259562 - type: euclidean_spearman value: 86.28867831982394 - type: manhattan_pearson value: 85.60828879163458 - type: manhattan_spearman value: 86.27823731462473 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (en-en) config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 90.00254140466377 - type: cos_sim_spearman value: 89.66118745178284 - type: euclidean_pearson value: 89.46985446236553 - type: euclidean_spearman value: 88.92649032371526 - type: manhattan_pearson value: 89.49600028180247 - type: manhattan_spearman value: 88.86948431519099 - task: type: STS dataset: type: 
mteb/sts22-crosslingual-sts name: MTEB STS22 (en) config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 68.93578321067938 - type: cos_sim_spearman value: 69.60639595839257 - type: euclidean_pearson value: 70.33485090574897 - type: euclidean_spearman value: 69.03380379185452 - type: manhattan_pearson value: 70.42097254943839 - type: manhattan_spearman value: 69.25296348304255 - task: type: STS dataset: type: mteb/stsbenchmark-sts name: MTEB STSBenchmark config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 87.29588700755069 - type: cos_sim_spearman value: 88.30389489193672 - type: euclidean_pearson value: 87.60349838180346 - type: euclidean_spearman value: 87.91041868311692 - type: manhattan_pearson value: 87.59373630607907 - type: manhattan_spearman value: 87.88690174001724 - task: type: Reranking dataset: type: mteb/scidocs-reranking name: MTEB SciDocsRR config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 87.8030655700857 - type: mrr value: 96.3950637234951 - task: type: Retrieval dataset: type: scifact name: MTEB SciFact config: default split: test revision: None metrics: - type: map_at_1 value: 60.028000000000006 - type: map_at_10 value: 69.855 - type: map_at_100 value: 70.257 - type: map_at_1000 value: 70.283 - type: map_at_3 value: 66.769 - type: map_at_5 value: 68.679 - type: mrr_at_1 value: 62.666999999999994 - type: mrr_at_10 value: 70.717 - type: mrr_at_100 value: 71.00800000000001 - type: mrr_at_1000 value: 71.033 - type: mrr_at_3 value: 68.389 - type: mrr_at_5 value: 69.939 - type: ndcg_at_1 value: 62.666999999999994 - type: ndcg_at_10 value: 74.715 - type: ndcg_at_100 value: 76.364 - type: ndcg_at_1000 value: 76.89399999999999 - type: ndcg_at_3 value: 69.383 - type: ndcg_at_5 value: 72.322 - type: precision_at_1 value: 62.666999999999994 - type: precision_at_10 value: 10.067 - type: precision_at_100 value: 1.09 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 27.111 - type: precision_at_5 value: 18.267 - type: recall_at_1 value: 60.028000000000006 - type: recall_at_10 value: 88.822 - type: recall_at_100 value: 96.167 - type: recall_at_1000 value: 100.0 - type: recall_at_3 value: 74.367 - type: recall_at_5 value: 81.661 - task: type: PairClassification dataset: type: mteb/sprintduplicatequestions-pairclassification name: MTEB SprintDuplicateQuestions config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.84554455445544 - type: cos_sim_ap value: 96.54482863244152 - type: cos_sim_f1 value: 92.13709677419355 - type: cos_sim_precision value: 92.88617886178862 - type: cos_sim_recall value: 91.4 - type: dot_accuracy value: 99.76039603960396 - type: dot_ap value: 93.20115278887057 - type: dot_f1 value: 87.92079207920793 - type: dot_precision value: 87.05882352941177 - type: dot_recall value: 88.8 - type: euclidean_accuracy value: 99.84950495049505 - type: euclidean_ap value: 96.53268343961348 - type: euclidean_f1 value: 92.23697650663942 - type: euclidean_precision value: 94.258872651357 - type: euclidean_recall value: 90.3 - type: manhattan_accuracy value: 99.85346534653465 - type: manhattan_ap value: 96.54495433438355 - type: manhattan_f1 value: 92.51012145748987 - type: manhattan_precision value: 93.64754098360656 - type: manhattan_recall value: 91.4 - type: max_accuracy value: 99.85346534653465 - type: 
max_ap value: 96.54495433438355 - type: max_f1 value: 92.51012145748987 - task: type: Clustering dataset: type: mteb/stackexchange-clustering name: MTEB StackExchangeClustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 66.46940443952006 - task: type: Clustering dataset: type: mteb/stackexchange-clustering-p2p name: MTEB StackExchangeClusteringP2P config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 36.396194493841584 - task: type: Reranking dataset: type: mteb/stackoverflowdupquestions-reranking name: MTEB StackOverflowDupQuestions config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 54.881717673695555 - type: mrr value: 55.73439224174519 - task: type: Summarization dataset: type: mteb/summeval name: MTEB SummEval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 31.438177268254087 - type: cos_sim_spearman value: 30.96177698848688 - type: dot_pearson value: 30.513850376431435 - type: dot_spearman value: 29.932421046509706 - task: type: Retrieval dataset: type: trec-covid name: MTEB TRECCOVID config: default split: test revision: None metrics: - type: map_at_1 value: 0.21 - type: map_at_10 value: 1.727 - type: map_at_100 value: 9.881 - type: map_at_1000 value: 24.245 - type: map_at_3 value: 0.615 - type: map_at_5 value: 0.966 - type: mrr_at_1 value: 78.0 - type: mrr_at_10 value: 87.333 - type: mrr_at_100 value: 87.333 - type: mrr_at_1000 value: 87.333 - type: mrr_at_3 value: 86.333 - type: mrr_at_5 value: 87.333 - type: ndcg_at_1 value: 74.0 - type: ndcg_at_10 value: 69.12700000000001 - type: ndcg_at_100 value: 53.893 - type: ndcg_at_1000 value: 49.639 - type: ndcg_at_3 value: 74.654 - type: ndcg_at_5 value: 73.232 - type: precision_at_1 value: 78.0 - type: precision_at_10 value: 72.8 - type: precision_at_100 value: 55.42 - type: precision_at_1000 value: 21.73 - type: precision_at_3 value: 79.333 - type: precision_at_5 value: 77.2 - type: recall_at_1 value: 0.21 - type: recall_at_10 value: 1.9709999999999999 - type: recall_at_100 value: 13.555 - type: recall_at_1000 value: 46.961999999999996 - type: recall_at_3 value: 0.66 - type: recall_at_5 value: 1.052 - task: type: Retrieval dataset: type: webis-touche2020 name: MTEB Touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 2.456 - type: map_at_10 value: 9.426 - type: map_at_100 value: 16.066 - type: map_at_1000 value: 17.652 - type: map_at_3 value: 5.2459999999999996 - type: map_at_5 value: 6.5360000000000005 - type: mrr_at_1 value: 34.694 - type: mrr_at_10 value: 47.666 - type: mrr_at_100 value: 48.681999999999995 - type: mrr_at_1000 value: 48.681999999999995 - type: mrr_at_3 value: 43.878 - type: mrr_at_5 value: 46.224 - type: ndcg_at_1 value: 31.633 - type: ndcg_at_10 value: 23.454 - type: ndcg_at_100 value: 36.616 - type: ndcg_at_1000 value: 48.596000000000004 - type: ndcg_at_3 value: 28.267999999999997 - type: ndcg_at_5 value: 25.630999999999997 - type: precision_at_1 value: 34.694 - type: precision_at_10 value: 20.204 - type: precision_at_100 value: 7.754999999999999 - type: precision_at_1000 value: 1.5709999999999997 - type: precision_at_3 value: 29.252 - type: precision_at_5 value: 24.898 - type: recall_at_1 value: 2.456 - type: recall_at_10 value: 14.951 - type: recall_at_100 value: 48.399 - type: recall_at_1000 value: 85.077 - type: recall_at_3 
value: 6.1370000000000005 - type: recall_at_5 value: 8.671 - task: type: Classification dataset: type: mteb/toxic_conversations_50k name: MTEB ToxicConversationsClassification config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 71.86240000000001 - type: ap value: 14.678570078747494 - type: f1 value: 55.295967793934445 - task: type: Classification dataset: type: mteb/tweet_sentiment_extraction name: MTEB TweetSentimentExtractionClassification config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 59.17374080362195 - type: f1 value: 59.54410874861454 - task: type: Clustering dataset: type: mteb/twentynewsgroups-clustering name: MTEB TwentyNewsgroupsClustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 51.91227822485289 - task: type: PairClassification dataset: type: mteb/twittersemeval2015-pairclassification name: MTEB TwitterSemEval2015 config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 87.12523097097217 - type: cos_sim_ap value: 77.59606075943269 - type: cos_sim_f1 value: 71.11395646606915 - type: cos_sim_precision value: 69.07960199004975 - type: cos_sim_recall value: 73.27176781002639 - type: dot_accuracy value: 84.68736961316088 - type: dot_ap value: 68.47167450741459 - type: dot_f1 value: 64.42152354914874 - type: dot_precision value: 60.887949260042284 - type: dot_recall value: 68.3905013192612 - type: euclidean_accuracy value: 86.88084878106932 - type: euclidean_ap value: 77.27351204978599 - type: euclidean_f1 value: 70.99179716629381 - type: euclidean_precision value: 67.10526315789474 - type: euclidean_recall value: 75.35620052770449 - type: manhattan_accuracy value: 86.83316445133218 - type: manhattan_ap value: 77.21835357308716 - type: manhattan_f1 value: 71.05587004676349 - type: manhattan_precision value: 66.58210332103322 - type: manhattan_recall value: 76.17414248021109 - type: max_accuracy value: 87.12523097097217 - type: max_ap value: 77.59606075943269 - type: max_f1 value: 71.11395646606915 - task: type: PairClassification dataset: type: mteb/twitterurlcorpus-pairclassification name: MTEB TwitterURLCorpus config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 88.97232894787906 - type: cos_sim_ap value: 85.9613736469497 - type: cos_sim_f1 value: 78.40216655382532 - type: cos_sim_precision value: 72.97512437810946 - type: cos_sim_recall value: 84.70126270403449 - type: dot_accuracy value: 88.04866689952264 - type: dot_ap value: 83.15465089499936 - type: dot_f1 value: 76.32698287879329 - type: dot_precision value: 71.23223697378077 - type: dot_recall value: 82.20665229442562 - type: euclidean_accuracy value: 88.67543757519307 - type: euclidean_ap value: 85.4524355531532 - type: euclidean_f1 value: 77.78729106950081 - type: euclidean_precision value: 75.3009009009009 - type: euclidean_recall value: 80.44348629504158 - type: manhattan_accuracy value: 88.65991384328792 - type: manhattan_ap value: 85.43109069046837 - type: manhattan_f1 value: 77.72639551396425 - type: manhattan_precision value: 73.73402417962004 - type: manhattan_recall value: 82.17585463504774 - type: max_accuracy value: 88.97232894787906 - type: max_ap value: 85.9613736469497 - type: max_f1 value: 78.40216655382532 --- <h1 align="center">GIST Large Embedding v0</h1> *GISTEmbed: Guided In-sample 
Selection of Training Negatives for Text Embedding Fine-tuning*

The model is fine-tuned on top of the [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) using the [MEDI dataset](https://github.com/xlang-ai/instructor-embedding.git) augmented with mined triplets from the [MTEB Classification](https://huggingface.co/mteb) training dataset (excluding data from the Amazon Polarity Classification task).

The model does not require any instruction for generating embeddings. This means that queries for retrieval tasks can be directly encoded without crafting instructions.

Technical paper: [GISTEmbed: Guided In-sample Selection of Training Negatives for Text Embedding Fine-tuning](https://arxiv.org/abs/2402.16829)

# Data

The dataset used is a compilation of the MEDI and MTEB Classification training datasets. Third-party datasets may be subject to additional terms and conditions under their associated licenses. A HuggingFace Dataset version of the compiled dataset, and the specific revision used to train the model, is available:

- Dataset: [avsolatorio/medi-data-mteb_avs_triplets](https://huggingface.co/datasets/avsolatorio/medi-data-mteb_avs_triplets)
- Revision: 238a0499b6e6b690cc64ea56fde8461daa8341bb

The dataset contains a `task_type` key, which can be used to select only the MTEB classification tasks (prefixed with `mteb_`), as in the sketch below. The **MEDI Dataset** is published in the following paper: [One Embedder, Any Task: Instruction-Finetuned Text Embeddings](https://arxiv.org/abs/2212.09741).
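For reference, a minimal sketch of this filtering, assuming the 🤗 `datasets` library and that the compiled triplets ship as a single `train` split (an assumption, not stated in this card):

```python
from datasets import load_dataset

# Load the compiled MEDI + MTEB dataset at the exact revision used for training.
data = load_dataset(
    "avsolatorio/medi-data-mteb_avs_triplets",
    revision="238a0499b6e6b690cc64ea56fde8461daa8341bb",
    split="train",  # assumption: the compiled triplets are published as one split
)

# Keep only the mined MTEB classification triplets (task_type prefixed "mteb_").
mteb_only = data.filter(lambda row: row["task_type"].startswith("mteb_"))
```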
The MTEB Benchmark results of the GIST embedding model, compared with the base model, suggest that the fine-tuning dataset has perturbed the model considerably, resulting in significant improvements on certain tasks while degrading performance on others. The retrieval performance on the TRECCOVID task is of note. The fine-tuning dataset does not contain significant knowledge about COVID-19, which could have caused the observed performance degradation. We found some evidence, detailed in the paper, that the thematic coverage of the fine-tuning data can affect downstream performance.

# Usage

The model can be easily loaded using the Sentence Transformers library.

```python
import torch.nn.functional as F

from sentence_transformers import SentenceTransformer

revision = None  # Replace with the specific revision to ensure reproducibility if the model is updated.

model = SentenceTransformer("avsolatorio/GIST-large-Embedding-v0", revision=revision)

texts = [
    "Illustration of the REaLTabFormer model. The left block shows the non-relational tabular data model using GPT-2 with a causal LM head. In contrast, the right block shows how a relational dataset's child table is modeled using a sequence-to-sequence (Seq2Seq) model. The Seq2Seq model uses the observations in the parent table to condition the generation of the observations in the child table. The trained GPT-2 model on the parent table, with weights frozen, is also used as the encoder in the Seq2Seq model.",
    "Predicting human mobility holds significant practical value, with applications ranging from enhancing disaster risk planning to simulating epidemic spread. In this paper, we present the GeoFormer, a decoder-only transformer model adapted from the GPT architecture to forecast human mobility.",
    "As the economies of Southeast Asia continue adopting digital technologies, policy makers increasingly ask how to prepare the workforce for emerging labor demands. However, little is known about the skills that workers need to adapt to these changes",
]

# Compute embeddings
embeddings = model.encode(texts, convert_to_tensor=True)

# Compute cosine-similarity for each pair of sentences
scores = F.cosine_similarity(embeddings.unsqueeze(1), embeddings.unsqueeze(0), dim=-1)

print(scores.cpu().numpy())
```

# Training Parameters

Below are the training parameters used to fine-tune the model:

```
Epochs = 40
Warmup ratio = 0.1
Learning rate = 5e-6
Batch size = 16
Checkpoint step = 171000
Contrastive loss temperature = 0.01
```

# Evaluation

The model was evaluated using the [MTEB Evaluation](https://huggingface.co/mteb) suite.

# Citation

Please cite our work if you use GISTEmbed or the datasets we published in your projects or research. 🤗

```
@article{solatorio2024gistembed,
    title={GISTEmbed: Guided In-sample Selection of Training Negatives for Text Embedding Fine-tuning},
    author={Aivin V. Solatorio},
    journal={arXiv preprint arXiv:2402.16829},
    year={2024},
    URL={https://arxiv.org/abs/2402.16829},
    eprint={2402.16829},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
```

# Acknowledgements

This work is supported by the "KCP IV - Exploring Data Use in the Development Economics Literature using Large Language Models (AI and LLMs)" project funded by the [Knowledge for Change Program (KCP)](https://www.worldbank.org/en/programs/knowledge-for-change) of the World Bank - RA-P503405-RESE-TF0C3444.

The findings, interpretations, and conclusions expressed in this material are entirely those of the authors. They do not necessarily represent the views of the International Bank for Reconstruction and Development/World Bank and its affiliated organizations, or those of the Executive Directors of the World Bank or the governments they represent.
stabilityai/stable-diffusion-2-depth
stabilityai
"2023-07-05T16:19:06Z"
366,848
382
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "arxiv:2112.10752", "arxiv:2202.00512", "arxiv:1910.09700", "license:openrail++", "diffusers:StableDiffusionDepth2ImgPipeline", "region:us" ]
null
"2022-11-23T17:41:46Z"
---
license: openrail++
tags:
- stable-diffusion
inference: false
---

# Stable Diffusion v2 Model Card

This model card focuses on the model associated with the Stable Diffusion v2 model, available [here](https://github.com/Stability-AI/stablediffusion).

This `stable-diffusion-2-depth` model is resumed from [stable-diffusion-2-base](https://huggingface.co/stabilityai/stable-diffusion-2-base) (`512-base-ema.ckpt`) and finetuned for 200k steps. An extra input channel was added to process the (relative) depth prediction produced by [MiDaS](https://github.com/isl-org/MiDaS) (`dpt_hybrid`), which is used as an additional conditioning.

![image](https://huggingface.co/stabilityai/stable-diffusion-2-depth/resolve/main/depth2image.png)

- Use it with the [`stablediffusion`](https://github.com/Stability-AI/stablediffusion) repository: download the `512-depth-ema.ckpt` [here](https://huggingface.co/stabilityai/stable-diffusion-2-depth/resolve/main/512-depth-ema.ckpt).
- Use it with 🧨 [`diffusers`](#examples)

## Model Details

- **Developed by:** Robin Rombach, Patrick Esser
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** [CreativeML Open RAIL++-M License](https://huggingface.co/stabilityai/stable-diffusion-2/blob/main/LICENSE-MODEL)
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([OpenCLIP-ViT/H](https://github.com/mlfoundations/open_clip)).
- **Resources for more information:** [GitHub Repository](https://github.com/Stability-AI/).
- **Cite as:**

      @InProceedings{Rombach_2022_CVPR,
          author    = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
          title     = {High-Resolution Image Synthesis With Latent Diffusion Models},
          booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
          month     = {June},
          year      = {2022},
          pages     = {10684-10695}
      }

## Examples

Using the [🤗's Diffusers library](https://github.com/huggingface/diffusers) to run Stable Diffusion 2 in a simple and efficient manner.

```bash
pip install -U git+https://github.com/huggingface/transformers.git
pip install diffusers transformers accelerate scipy safetensors
```

Running the pipeline (if you don't swap the scheduler it will run with the default scheduler; an optional swap to `EulerDiscreteScheduler` is sketched after the notes below):

```python
import torch
import requests
from PIL import Image
from diffusers import StableDiffusionDepth2ImgPipeline

pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-depth",
    torch_dtype=torch.float16,
).to("cuda")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
init_image = Image.open(requests.get(url, stream=True).raw)
prompt = "two tigers"
n_prompt = "bad, deformed, ugly, bad anatomy"
image = pipe(prompt=prompt, image=init_image, negative_prompt=n_prompt, strength=0.7).images[0]
```

**Notes**:
- Despite not being a dependency, we highly recommend installing [xformers](https://github.com/facebookresearch/xformers) for memory-efficient attention (better performance)
- If you have low GPU RAM available, make sure to add `pipe.enable_attention_slicing()` after sending the pipeline to `cuda` for less VRAM usage (at the cost of speed)
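A short sketch of both options, reusing the `pipe` object from the example above (any scheduler supported by diffusers should work here; `EulerDiscreteScheduler` is just the one named in this card):

```python
from diffusers import EulerDiscreteScheduler

# Swap the pipeline's default scheduler for EulerDiscreteScheduler.
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)

# Trade some speed for lower VRAM usage on small GPUs.
pipe.enable_attention_slicing()
```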
# Uses

## Direct Use

The model is intended for research purposes only. Possible research areas and tasks include

- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.

Excluded uses are described below.

### Misuse, Malicious Use, and Out-of-Scope Use

_Note: This section is originally taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), was used for Stable Diffusion v1, but applies in the same way to Stable Diffusion v2_.

The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.

#### Out-of-Scope Use

The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.

#### Misuse and Malicious Use

Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:

- Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Impersonating individuals without their consent.
- Sexual content without consent of the people who might see it.
- Mis- and disinformation
- Representations of egregious violence and gore
- Sharing of copyrighted or licensed material in violation of its terms of use.
- Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.

## Limitations and Bias

### Limitations

- The model does not achieve perfect photorealism
- The model cannot render legible text
- The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”
- Faces and people in general may not be generated properly.
- The model was trained mainly with English captions and will not work as well in other languages.
- The autoencoding part of the model is lossy
- The model was trained on a subset of the large-scale dataset [LAION-5B](https://laion.ai/blog/laion-5b/), which contains adult, violent and sexual content. To partially mitigate this, we have filtered the dataset using LAION's NSFW detector (see Training section).

### Bias

While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases. Stable Diffusion v2 was primarily trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/), which consists of images that are limited to English descriptions. Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for. This affects the overall output of the model, as white and western cultures are often set as the default. Further, the ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts. Stable Diffusion v2 mirrors and exacerbates biases to such a degree that viewer discretion must be advised irrespective of the input or its intent.
## Training

**Training Data**

The model developers used the following dataset for training the model:

- LAION-5B and subsets (details below). The training data is further filtered using LAION's NSFW detector, with a "p_unsafe" score of 0.1 (conservative). For more details, please refer to LAION-5B's [NeurIPS 2022](https://openreview.net/forum?id=M3Y74vmsMcY) paper and reviewer discussions on the topic.

**Training Procedure**

Stable Diffusion v2 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training,

- Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4
- Text prompts are encoded through the OpenCLIP-ViT/H text-encoder.
- The output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention.
- The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet. We also use the so-called _v-objective_, see https://arxiv.org/abs/2202.00512.

We currently provide the following checkpoints:

- `512-base-ema.ckpt`: 550k steps at resolution `256x256` on a subset of [LAION-5B](https://laion.ai/blog/laion-5b/) filtered for explicit pornographic material, using the [LAION-NSFW classifier](https://github.com/LAION-AI/CLIP-based-NSFW-Detector) with `punsafe=0.1` and an [aesthetic score](https://github.com/christophschuhmann/improved-aesthetic-predictor) >= `4.5`. 850k steps at resolution `512x512` on the same dataset with resolution `>= 512x512`.
- `768-v-ema.ckpt`: Resumed from `512-base-ema.ckpt` and trained for 150k steps using a [v-objective](https://arxiv.org/abs/2202.00512) on the same dataset. Resumed for another 140k steps on a `768x768` subset of our dataset.
- `512-depth-ema.ckpt`: Resumed from `512-base-ema.ckpt` and finetuned for 200k steps. Added an extra input channel to process the (relative) depth prediction produced by [MiDaS](https://github.com/isl-org/MiDaS) (`dpt_hybrid`) which is used as an additional conditioning. The additional input channels of the U-Net which process this extra information were zero-initialized (see the sketch after this list).
- `512-inpainting-ema.ckpt`: Resumed from `512-base-ema.ckpt` and trained for another 200k steps. Follows the mask-generation strategy presented in [LAMA](https://github.com/saic-mdal/lama) which, in combination with the latent VAE representations of the masked image, is used as an additional conditioning. The additional input channels of the U-Net which process this extra information were zero-initialized. The same strategy was used to train the [1.5-inpainting checkpoint](https://github.com/saic-mdal/lama).
- `x4-upscaling-ema.ckpt`: Trained for 1.25M steps on a 10M subset of LAION containing images `>2048x2048`. The model was trained on crops of size `512x512` and is a text-guided [latent upscaling diffusion model](https://arxiv.org/abs/2112.10752). In addition to the textual input, it receives a `noise_level` as an input parameter, which can be used to add noise to the low-resolution input according to a [predefined diffusion schedule](configs/stable-diffusion/x4-upscaling.yaml).
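The zero-initialization trick can be pictured with a small, hypothetical PyTorch sketch; the layer shapes below are illustrative only and are not taken from the actual U-Net code:

```python
import torch
from torch import nn

# Stand-in for the pretrained U-Net input convolution (4 latent channels in).
old_conv = nn.Conv2d(4, 320, kernel_size=3, padding=1)

# Widened convolution: 4 latent channels + 1 extra conditioning channel.
new_conv = nn.Conv2d(5, 320, kernel_size=3, padding=1)

with torch.no_grad():
    new_conv.weight.zero_()                    # the extra channel starts as a no-op
    new_conv.weight[:, :4] = old_conv.weight   # reuse the pretrained weights
    new_conv.bias.copy_(old_conv.bias)
```

Because the new channel's weights start at zero, the widened model initially behaves exactly like the pretrained one, and the conditioning influence is learned during finetuning.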
- **Hardware:** 32 x 8 x A100 GPUs
- **Optimizer:** AdamW
- **Gradient Accumulations**: 1
- **Batch:** 32 x 8 x 2 x 4 = 2048
- **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant

## Evaluation Results

Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0) and 50 DDIM sampling steps show the relative improvements of the checkpoints:

![pareto](model-variants.jpg)

Evaluated using 50 DDIM steps and 10000 random prompts from the COCO2017 validation set at 512x512 resolution. Not optimized for FID scores.

## Environmental Impact

**Stable Diffusion v1** **Estimated Emissions**
Based on that information, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.

- **Hardware Type:** A100 PCIe 40GB
- **Hours used:** 200,000
- **Cloud Provider:** AWS
- **Compute Region:** US-east
- **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 15,000 kg CO2 eq. (a back-of-the-envelope consistency check appears after the citation below)

## Citation

@InProceedings{Rombach_2022_CVPR,
    author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
    title = {High-Resolution Image Synthesis With Latent Diffusion Models},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month = {June},
    year = {2022},
    pages = {10684-10695}
}

*This model card was written by: Robin Rombach, Patrick Esser and David Ha and is based on the [Stable Diffusion v1](https://github.com/CompVis/stable-diffusion/blob/main/Stable_Diffusion_v1_Model_Card.md) and [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*
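The stated 15,000 kg CO2 eq. above is consistent with the A100 PCIe's nominal 250 W draw and an assumed grid intensity of about 0.3 kg CO2 eq./kWh (the intensity figure is an assumption chosen to match; the calculator may use different regional factors):

$$
200{,}000\ \text{h} \times 0.25\ \text{kW} \times 0.3\ \tfrac{\text{kg CO}_2\,\text{eq}}{\text{kWh}} = 15{,}000\ \text{kg CO}_2\,\text{eq}.
$$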
cross-encoder/stsb-distilroberta-base
cross-encoder
"2021-08-05T08:41:53Z"
366,803
4
transformers
[ "transformers", "pytorch", "jax", "roberta", "text-classification", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2022-03-02T23:29:05Z"
---
license: apache-2.0
---

# Cross-Encoder for Semantic Textual Similarity

This model was trained using the [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class.

## Training Data

This model was trained on the [STS benchmark dataset](http://ixa2.si.ehu.eus/stswiki/index.php/STSbenchmark). The model predicts a score between 0 and 1 indicating the semantic similarity of two sentences.

## Usage and Performance

Pre-trained models can be used like this:

```python
from sentence_transformers import CrossEncoder

model = CrossEncoder('cross-encoder/stsb-distilroberta-base')
scores = model.predict([('Sentence 1', 'Sentence 2'), ('Sentence 3', 'Sentence 4')])
```

The model will predict scores for the pairs `('Sentence 1', 'Sentence 2')` and `('Sentence 3', 'Sentence 4')`.

You can also use this model without sentence_transformers, using only the Transformers ``AutoModelForSequenceClassification`` class, as sketched below.
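A minimal sketch of that plain-Transformers route, assuming the checkpoint exposes a single regression logit (as sentence-transformers cross-encoders trained on STSb typically do); the final sigmoid that squashes the logit into [0, 1] is part of that assumption:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "cross-encoder/stsb-distilroberta-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# each pair is tokenized together, so the cross-encoder attends across both sentences
features = tokenizer(
    ["Sentence 1", "Sentence 3"],
    ["Sentence 2", "Sentence 4"],
    padding=True, truncation=True, return_tensors="pt",
)
with torch.no_grad():
    logits = model(**features).logits.squeeze(-1)
scores = torch.sigmoid(logits)  # assumption: map the single logit into [0, 1]
print(scores)
```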
cointegrated/LaBSE-en-ru
cointegrated
"2024-03-28T13:59:30Z"
365,299
43
transformers
[ "transformers", "pytorch", "tf", "safetensors", "bert", "pretraining", "feature-extraction", "embeddings", "sentence-similarity", "ru", "en", "arxiv:2007.01852", "endpoints_compatible", "region:us" ]
feature-extraction
"2022-03-02T23:29:05Z"
---
language: ["ru", "en"]
tags:
- feature-extraction
- embeddings
- sentence-similarity
---

# LaBSE for English and Russian

This is a truncated version of [sentence-transformers/LaBSE](https://huggingface.co/sentence-transformers/LaBSE), which is, in turn, a port of [LaBSE](https://tfhub.dev/google/LaBSE/1) by Google. The current model has only English and Russian tokens left in the vocabulary. Thus, the vocabulary is 10% of the original, and the number of parameters in the whole model is 27% of the original, without any loss in the quality of English and Russian embeddings.

To get the sentence embeddings, you can use the following code:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("cointegrated/LaBSE-en-ru")
model = AutoModel.from_pretrained("cointegrated/LaBSE-en-ru")

sentences = ["Hello World", "Привет Мир"]
encoded_input = tokenizer(sentences, padding=True, truncation=True, max_length=64, return_tensors='pt')
with torch.no_grad():
    model_output = model(**encoded_input)
embeddings = model_output.pooler_output
embeddings = torch.nn.functional.normalize(embeddings)  # L2-normalize, so dot products equal cosine similarities
print(embeddings)
```

The model has been truncated in [this notebook](https://colab.research.google.com/drive/1dnPRn0-ugj3vZgSpyCC9sgslM2SuSfHy?usp=sharing). You can adapt it for other languages (like [EIStakovskii/LaBSE-fr-de](https://huggingface.co/EIStakovskii/LaBSE-fr-de)), models, or datasets.

## Reference:

Fangxiaoyu Feng, Yinfei Yang, Daniel Cer, Naveen Arivazhagan, Wei Wang. [Language-agnostic BERT Sentence Embedding](https://arxiv.org/abs/2007.01852). July 2020

License: [https://tfhub.dev/google/LaBSE/1](https://tfhub.dev/google/LaBSE/1)
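Because the embeddings in the snippet above are L2-normalized, cosine similarity reduces to a plain dot product; continuing that snippet, a minimal follow-up sketch:

```python
# pairwise cosine similarities; the off-diagonal entries compare
# "Hello World" with its Russian translation "Привет Мир"
similarity = embeddings @ embeddings.T
print(similarity)
```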
peft-internal-testing/tiny-clip-text-2
peft-internal-testing
"2024-03-06T10:52:08Z"
363,970
1
transformers
[ "transformers", "pytorch", "safetensors", "clip_text_model", "endpoints_compatible", "region:us" ]
null
"2023-09-20T16:25:32Z"
Entry not found
pdelobelle/robbert-v2-dutch-ner
pdelobelle
"2022-08-01T14:49:07Z"
363,145
2
transformers
[ "transformers", "pytorch", "jax", "roberta", "token-classification", "Dutch", "Flemish", "RoBERTa", "RobBERT", "nl", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2022-03-02T23:29:05Z"
---
language: "nl"
thumbnail: "https://github.com/iPieter/RobBERT/raw/master/res/robbert_logo.png"
tags:
- Dutch
- Flemish
- RoBERTa
- RobBERT
license: mit
datasets:
- oscar
- oscar (NL)
- dbrd
- lassy-ud
- europarl-mono
- conll2002
widget:
- text: "Mijn naam is RobBERT en ik ben een taalmodel van de KU Leuven."
---

<p align="center">
<img src="https://github.com/iPieter/RobBERT/raw/master/res/robbert_logo_with_name.png" alt="RobBERT: A Dutch RoBERTa-based Language Model" width="75%">
</p>

# RobBERT: Dutch RoBERTa-based Language Model.

[RobBERT](https://github.com/iPieter/RobBERT) is the state-of-the-art Dutch BERT model. It is a large pre-trained general Dutch language model that can be fine-tuned on a given dataset to perform any text classification, regression or token-tagging task. As such, it has been successfully used by many [researchers](https://scholar.google.com/scholar?oi=bibs&hl=en&cites=7180110604335112086) and [practitioners](https://huggingface.co/models?search=robbert) for achieving state-of-the-art performance for a wide range of Dutch natural language processing tasks.
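Since this particular checkpoint carries a token-classification (NER) head, a minimal inference sketch with the transformers pipeline, reusing the widget sentence from the card; the aggregation strategy is a convenience assumption, not something the card specifies:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="pdelobelle/robbert-v2-dutch-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)
print(ner("Mijn naam is RobBERT en ik ben een taalmodel van de KU Leuven."))
```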
TechxGenus/starcoder2-15b-GPTQ
TechxGenus
"2024-03-22T17:04:34Z"
362,803
2
transformers
[ "transformers", "safetensors", "starcoder2", "text-generation", "code", "dataset:bigcode/the-stack-v2-train", "arxiv:2305.13245", "arxiv:2205.14135", "arxiv:2004.05150", "arxiv:2207.14255", "arxiv:2402.19173", "license:bigcode-openrail-m", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "gptq", "region:us" ]
text-generation
"2024-03-22T14:47:54Z"
---
pipeline_tag: text-generation
inference:
  parameters:
    temperature: 0.2
    top_p: 0.95
widget:
- text: 'def print_hello_world():'
  example_title: Hello world
  group: Python
datasets:
- bigcode/the-stack-v2-train
license: bigcode-openrail-m
library_name: transformers
tags:
- code
model-index:
- name: starcoder2-15b
  results:
  - task:
      type: text-generation
    dataset:
      name: CruxEval-I
      type: cruxeval-i
    metrics:
    - type: pass@1
      value: 48.1
  - task:
      type: text-generation
    dataset:
      name: DS-1000
      type: ds-1000
    metrics:
    - type: pass@1
      value: 33.8
  - task:
      type: text-generation
    dataset:
      name: GSM8K (PAL)
      type: gsm8k-pal
    metrics:
    - type: accuracy
      value: 65.1
  - task:
      type: text-generation
    dataset:
      name: HumanEval+
      type: humanevalplus
    metrics:
    - type: pass@1
      value: 37.8
  - task:
      type: text-generation
    dataset:
      name: HumanEval
      type: humaneval
    metrics:
    - type: pass@1
      value: 46.3
  - task:
      type: text-generation
    dataset:
      name: RepoBench-v1.1
      type: repobench-v1.1
    metrics:
    - type: edit-similarity
      value: 74.08
---

# StarCoder2

<center>
<img src="https://huggingface.co/datasets/bigcode/admin_private/resolve/main/starcoder2_banner.png" alt="SC2" width="900" height="600">
</center>

GPTQ-quantized version of the starcoder2-15b model.

---

## Table of Contents

1. [Model Summary](#model-summary)
2. [Use](#use)
3. [Limitations](#limitations)
4. [Training](#training)
5. [License](#license)
6. [Citation](#citation)

## Model Summary

StarCoder2-15B is a 15B parameter model trained on 600+ programming languages from [The Stack v2](https://huggingface.co/datasets/bigcode/the-stack-v2-train), with opt-out requests excluded. The model uses [Grouped Query Attention](https://arxiv.org/abs/2305.13245), [a context window of 16,384 tokens](https://arxiv.org/abs/2205.14135) with [a sliding window attention of 4,096 tokens](https://arxiv.org/abs/2004.05150v2), and was trained using the [Fill-in-the-Middle objective](https://arxiv.org/abs/2207.14255) on 4+ trillion tokens. The model was trained with the [NVIDIA NeMo™ Framework](https://www.nvidia.com/en-us/ai-data-science/generative-ai/nemo-framework/) using the [NVIDIA Eos Supercomputer](https://blogs.nvidia.com/blog/eos/) built with [NVIDIA DGX H100](https://www.nvidia.com/en-us/data-center/dgx-h100/) systems.

- **Project Website:** [bigcode-project.org](https://www.bigcode-project.org)
- **Paper:** [Link](https://huggingface.co/papers/2402.19173)
- **Point of Contact:** [contact@bigcode-project.org](mailto:contact@bigcode-project.org)
- **Languages:** 600+ Programming languages

## Use

### Intended use

The model was trained on GitHub code as well as additional selected data sources such as Arxiv and Wikipedia. As such it is _not_ an instruction model, and commands like "Write a function that computes the square root." do not work well.

### Generation

Here are some examples to get started with the model. You can find a script for fine-tuning in StarCoder2's [GitHub repository](https://github.com/bigcode-project/starcoder2).
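Note that this repository hosts a GPTQ checkpoint rather than the original weights, so loading it presumably goes through the GPTQ integration in `transformers` (via `optimum` and `auto-gptq`); a minimal sketch under that assumption, before the generic starcoder2 examples below:

```python
# pip install optimum auto-gptq  (assumed prerequisites for GPTQ loading)
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "TechxGenus/starcoder2-15b-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# the quantization config ships with the checkpoint; device_map="auto"
# places the quantized layers on the available GPUs
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")

inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to(model.device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```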
First, make sure to install `transformers` from source:

```bash
pip install git+https://github.com/huggingface/transformers.git
```

#### Running the model on CPU/GPU/multi GPU

* _Using full precision_

```python
# pip install git+https://github.com/huggingface/transformers.git
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigcode/starcoder2-15b"
device = "cuda"  # for GPU usage or "cpu" for CPU usage

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# for multiple GPUs install accelerate and do `model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")`
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)

inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```

* _Using `torch.bfloat16`_

```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

checkpoint = "bigcode/starcoder2-15b"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

# for fp16 use `torch_dtype=torch.float16` instead
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto", torch_dtype=torch.bfloat16)

inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to("cuda")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```

```bash
>>> print(f"Memory footprint: {model.get_memory_footprint() / 1e6:.2f} MB")
Memory footprint: 32251.33 MB
```

#### Quantized Versions through `bitsandbytes`

* _Using 8-bit precision (int8)_

```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

# to use 4bit use `load_in_4bit=True` instead
quantization_config = BitsAndBytesConfig(load_in_8bit=True)

checkpoint = "bigcode/starcoder2-15b"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, quantization_config=quantization_config)

inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to("cuda")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```

```bash
# load_in_8bit
>>> print(f"Memory footprint: {model.get_memory_footprint() / 1e6:.2f} MB")
Memory footprint: 16900.18 MB

# load_in_4bit
>>> print(f"Memory footprint: {model.get_memory_footprint() / 1e6:.2f} MB")
Memory footprint: 9224.60 MB
```

### Attribution & Other Requirements

The model's pretraining dataset was filtered to include only permissively licensed code and code with no license. Nevertheless, the model can generate source code verbatim from the dataset. The code's license might require attribution and/or other specific requirements that must be respected. We provide a [search index](https://huggingface.co/spaces/bigcode/search-v2) that lets you search through the pretraining data to identify where generated code came from and apply the proper attribution to your code.

# Limitations

The model has been trained on source code from 600+ programming languages. The predominant natural language in the source data is English, although other languages are also present. As such, the model can generate code snippets given some context, but the generated code is not guaranteed to work as intended. It can be inefficient and may contain bugs or exploits. See [the paper](https://huggingface.co/papers/2402.19173) for an in-depth discussion of the model limitations.
# Training

## Model

- **Architecture:** Transformer decoder with grouped-query and sliding window attention and Fill-in-the-Middle objective
- **Pretraining steps:** 1 million
- **Pretraining tokens:** 4+ trillion
- **Precision:** bfloat16

## Hardware

- **GPUs:** 1024 x H100

## Software

- **Framework:** [NeMo Framework](https://www.nvidia.com/en-us/ai-data-science/generative-ai/nemo-framework/)
- **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch)

# License

The model is licensed under the BigCode OpenRAIL-M v1 license agreement. You can find the full agreement [here](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement).

# Citation

```bash
@misc{lozhkov2024starcoder,
      title={StarCoder 2 and The Stack v2: The Next Generation},
      author={Anton Lozhkov and Raymond Li and Loubna Ben Allal and Federico Cassano and Joel Lamy-Poirier and Nouamane Tazi and Ao Tang and Dmytro Pykhtar and Jiawei Liu and Yuxiang Wei and Tianyang Liu and Max Tian and Denis Kocetkov and Arthur Zucker and Younes Belkada and Zijian Wang and Qian Liu and Dmitry Abulkhanov and Indraneil Paul and Zhuang Li and Wen-Ding Li and Megan Risdal and Jia Li and Jian Zhu and Terry Yue Zhuo and Evgenii Zheltonozhskii and Nii Osae Osae Dade and Wenhao Yu and Lucas Krauß and Naman Jain and Yixuan Su and Xuanli He and Manan Dey and Edoardo Abati and Yekun Chai and Niklas Muennighoff and Xiangru Tang and Muhtasham Oblokulov and Christopher Akiki and Marc Marone and Chenghao Mou and Mayank Mishra and Alex Gu and Binyuan Hui and Tri Dao and Armel Zebaze and Olivier Dehaene and Nicolas Patry and Canwen Xu and Julian McAuley and Han Hu and Torsten Scholak and Sebastien Paquet and Jennifer Robinson and Carolyn Jane Anderson and Nicolas Chapados and Mostofa Patwary and Nima Tajbakhsh and Yacine Jernite and Carlos Muñoz Ferrandis and Lingming Zhang and Sean Hughes and Thomas Wolf and Arjun Guha and Leandro von Werra and Harm de Vries},
      year={2024},
      eprint={2402.19173},
      archivePrefix={arXiv},
      primaryClass={cs.SE}
}
```
facebook/mask2former-swin-large-cityscapes-semantic
facebook
"2023-09-07T15:38:57Z"
358,112
18
transformers
[ "transformers", "pytorch", "safetensors", "mask2former", "vision", "image-segmentation", "dataset:coco", "arxiv:2112.01527", "arxiv:2107.06278", "license:other", "endpoints_compatible", "region:us" ]
image-segmentation
"2023-01-05T00:18:47Z"
---
license: other
tags:
- vision
- image-segmentation
datasets:
- coco
widget:
- src: http://images.cocodataset.org/val2017/000000039769.jpg
  example_title: Cats
- src: http://images.cocodataset.org/val2017/000000039770.jpg
  example_title: Castle
---

# Mask2Former

Mask2Former model trained on Cityscapes semantic segmentation (large-sized version, Swin backbone). It was introduced in the paper [Masked-attention Mask Transformer for Universal Image Segmentation](https://arxiv.org/abs/2112.01527) and first released in [this repository](https://github.com/facebookresearch/Mask2Former/).

Disclaimer: The team releasing Mask2Former did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

Mask2Former addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation. Mask2Former outperforms the previous SOTA, [MaskFormer](https://arxiv.org/abs/2107.06278), both in terms of performance and efficiency by (i) replacing the pixel decoder with a more advanced multi-scale deformable attention Transformer, (ii) adopting a Transformer decoder with masked attention to boost performance without introducing additional computation, and (iii) improving training efficiency by calculating the loss on subsampled points instead of whole masks.

![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/mask2former_architecture.png)

## Intended uses & limitations

You can use this particular checkpoint for semantic segmentation. See the [model hub](https://huggingface.co/models?search=mask2former) to look for other fine-tuned versions on a task that interests you.

### How to use

Here is how to use this model:

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation

# load Mask2Former fine-tuned on Cityscapes semantic segmentation
processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-large-cityscapes-semantic")
model = Mask2FormerForUniversalSegmentation.from_pretrained("facebook/mask2former-swin-large-cityscapes-semantic")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# model predicts class_queries_logits of shape `(batch_size, num_queries)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits

# you can pass them to processor for postprocessing
predicted_semantic_map = processor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
# we refer to the demo notebooks for visualization (see "Resources" section in the Mask2Former docs)
```

For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/mask2former).
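As a small follow-up to the snippet above, the predicted semantic map can be decoded into Cityscapes class names via the model config (assuming the usual `id2label` mapping is present on this checkpoint):

```python
import numpy as np

seg = predicted_semantic_map.cpu().numpy()  # (height, width) array of class ids
present_ids = np.unique(seg)
print([model.config.id2label[int(i)] for i in present_ids])  # e.g. road, car, sky, ...
```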
TaylorAI/bge-micro-v2
TaylorAI
"2024-06-06T22:44:08Z"
357,767
35
sentence-transformers
[ "sentence-transformers", "pytorch", "onnx", "safetensors", "bert", "feature-extraction", "sentence-similarity", "transformers", "mteb", "license:mit", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
"2023-10-11T05:55:09Z"
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers - mteb model-index: - name: bge_micro results: - task: type: Classification dataset: type: mteb/amazon_counterfactual name: MTEB AmazonCounterfactualClassification (en) config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 67.76119402985074 - type: ap value: 29.637849284211114 - type: f1 value: 61.31181187111905 - task: type: Classification dataset: type: mteb/amazon_polarity name: MTEB AmazonPolarityClassification config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 79.7547 - type: ap value: 74.21401629809145 - type: f1 value: 79.65319615433783 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (en) config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 37.452000000000005 - type: f1 value: 37.0245198854966 - task: type: Retrieval dataset: type: arguana name: MTEB ArguAna config: default split: test revision: None metrics: - type: map_at_1 value: 31.152 - type: map_at_10 value: 46.702 - type: map_at_100 value: 47.563 - type: map_at_1000 value: 47.567 - type: map_at_3 value: 42.058 - type: map_at_5 value: 44.608 - type: mrr_at_1 value: 32.006 - type: mrr_at_10 value: 47.064 - type: mrr_at_100 value: 47.910000000000004 - type: mrr_at_1000 value: 47.915 - type: mrr_at_3 value: 42.283 - type: mrr_at_5 value: 44.968 - type: ndcg_at_1 value: 31.152 - type: ndcg_at_10 value: 55.308 - type: ndcg_at_100 value: 58.965 - type: ndcg_at_1000 value: 59.067 - type: ndcg_at_3 value: 45.698 - type: ndcg_at_5 value: 50.296 - type: precision_at_1 value: 31.152 - type: precision_at_10 value: 8.279 - type: precision_at_100 value: 0.987 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 18.753 - type: precision_at_5 value: 13.485 - type: recall_at_1 value: 31.152 - type: recall_at_10 value: 82.788 - type: recall_at_100 value: 98.72 - type: recall_at_1000 value: 99.502 - type: recall_at_3 value: 56.259 - type: recall_at_5 value: 67.425 - task: type: Clustering dataset: type: mteb/arxiv-clustering-p2p name: MTEB ArxivClusteringP2P config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 44.52692241938116 - task: type: Clustering dataset: type: mteb/arxiv-clustering-s2s name: MTEB ArxivClusteringS2S config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 33.245710292773595 - task: type: Reranking dataset: type: mteb/askubuntudupquestions-reranking name: MTEB AskUbuntuDupQuestions config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 58.08493637155168 - type: mrr value: 71.94378490084861 - task: type: STS dataset: type: mteb/biosses-sts name: MTEB BIOSSES config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 84.1602804378326 - type: cos_sim_spearman value: 82.92478106365587 - type: euclidean_pearson value: 82.27930167277077 - type: euclidean_spearman value: 82.18560759458093 - type: manhattan_pearson value: 82.34277425888187 - type: manhattan_spearman value: 81.72776583704467 - task: type: Classification dataset: type: mteb/banking77 name: MTEB Banking77Classification config: default split: test revision: 
0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 81.17207792207792 - type: f1 value: 81.09893836310513 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-p2p name: MTEB BiorxivClusteringP2P config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 36.109308463095516 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-s2s name: MTEB BiorxivClusteringS2S config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 28.06048212317168 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackAndroidRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 28.233999999999998 - type: map_at_10 value: 38.092999999999996 - type: map_at_100 value: 39.473 - type: map_at_1000 value: 39.614 - type: map_at_3 value: 34.839 - type: map_at_5 value: 36.523 - type: mrr_at_1 value: 35.193000000000005 - type: mrr_at_10 value: 44.089 - type: mrr_at_100 value: 44.927 - type: mrr_at_1000 value: 44.988 - type: mrr_at_3 value: 41.559000000000005 - type: mrr_at_5 value: 43.162 - type: ndcg_at_1 value: 35.193000000000005 - type: ndcg_at_10 value: 44.04 - type: ndcg_at_100 value: 49.262 - type: ndcg_at_1000 value: 51.847 - type: ndcg_at_3 value: 39.248 - type: ndcg_at_5 value: 41.298 - type: precision_at_1 value: 35.193000000000005 - type: precision_at_10 value: 8.555 - type: precision_at_100 value: 1.3820000000000001 - type: precision_at_1000 value: 0.189 - type: precision_at_3 value: 19.123 - type: precision_at_5 value: 13.648 - type: recall_at_1 value: 28.233999999999998 - type: recall_at_10 value: 55.094 - type: recall_at_100 value: 76.85300000000001 - type: recall_at_1000 value: 94.163 - type: recall_at_3 value: 40.782000000000004 - type: recall_at_5 value: 46.796 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackEnglishRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 21.538 - type: map_at_10 value: 28.449 - type: map_at_100 value: 29.471000000000004 - type: map_at_1000 value: 29.599999999999998 - type: map_at_3 value: 26.371 - type: map_at_5 value: 27.58 - type: mrr_at_1 value: 26.815 - type: mrr_at_10 value: 33.331 - type: mrr_at_100 value: 34.114 - type: mrr_at_1000 value: 34.182 - type: mrr_at_3 value: 31.561 - type: mrr_at_5 value: 32.608 - type: ndcg_at_1 value: 26.815 - type: ndcg_at_10 value: 32.67 - type: ndcg_at_100 value: 37.039 - type: ndcg_at_1000 value: 39.769 - type: ndcg_at_3 value: 29.523 - type: ndcg_at_5 value: 31.048 - type: precision_at_1 value: 26.815 - type: precision_at_10 value: 5.955 - type: precision_at_100 value: 1.02 - type: precision_at_1000 value: 0.152 - type: precision_at_3 value: 14.033999999999999 - type: precision_at_5 value: 9.911 - type: recall_at_1 value: 21.538 - type: recall_at_10 value: 40.186 - type: recall_at_100 value: 58.948 - type: recall_at_1000 value: 77.158 - type: recall_at_3 value: 30.951 - type: recall_at_5 value: 35.276 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGamingRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 35.211999999999996 - type: map_at_10 value: 46.562 - type: map_at_100 value: 47.579 - type: map_at_1000 value: 47.646 - type: map_at_3 value: 43.485 - type: map_at_5 value: 45.206 - type: mrr_at_1 value: 40.627 - type: mrr_at_10 value: 49.928 - type: mrr_at_100 value: 50.647 - type: mrr_at_1000 value: 50.685 - 
type: mrr_at_3 value: 47.513 - type: mrr_at_5 value: 48.958 - type: ndcg_at_1 value: 40.627 - type: ndcg_at_10 value: 52.217 - type: ndcg_at_100 value: 56.423 - type: ndcg_at_1000 value: 57.821999999999996 - type: ndcg_at_3 value: 46.949000000000005 - type: ndcg_at_5 value: 49.534 - type: precision_at_1 value: 40.627 - type: precision_at_10 value: 8.476 - type: precision_at_100 value: 1.15 - type: precision_at_1000 value: 0.132 - type: precision_at_3 value: 21.003 - type: precision_at_5 value: 14.469999999999999 - type: recall_at_1 value: 35.211999999999996 - type: recall_at_10 value: 65.692 - type: recall_at_100 value: 84.011 - type: recall_at_1000 value: 94.03099999999999 - type: recall_at_3 value: 51.404 - type: recall_at_5 value: 57.882 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGisRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 22.09 - type: map_at_10 value: 29.516 - type: map_at_100 value: 30.462 - type: map_at_1000 value: 30.56 - type: map_at_3 value: 26.945000000000004 - type: map_at_5 value: 28.421999999999997 - type: mrr_at_1 value: 23.616 - type: mrr_at_10 value: 31.221 - type: mrr_at_100 value: 32.057 - type: mrr_at_1000 value: 32.137 - type: mrr_at_3 value: 28.738000000000003 - type: mrr_at_5 value: 30.156 - type: ndcg_at_1 value: 23.616 - type: ndcg_at_10 value: 33.97 - type: ndcg_at_100 value: 38.806000000000004 - type: ndcg_at_1000 value: 41.393 - type: ndcg_at_3 value: 28.908 - type: ndcg_at_5 value: 31.433 - type: precision_at_1 value: 23.616 - type: precision_at_10 value: 5.299 - type: precision_at_100 value: 0.812 - type: precision_at_1000 value: 0.107 - type: precision_at_3 value: 12.015 - type: precision_at_5 value: 8.701 - type: recall_at_1 value: 22.09 - type: recall_at_10 value: 46.089999999999996 - type: recall_at_100 value: 68.729 - type: recall_at_1000 value: 88.435 - type: recall_at_3 value: 32.584999999999994 - type: recall_at_5 value: 38.550000000000004 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackMathematicaRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 15.469 - type: map_at_10 value: 22.436 - type: map_at_100 value: 23.465 - type: map_at_1000 value: 23.608999999999998 - type: map_at_3 value: 19.716 - type: map_at_5 value: 21.182000000000002 - type: mrr_at_1 value: 18.905 - type: mrr_at_10 value: 26.55 - type: mrr_at_100 value: 27.46 - type: mrr_at_1000 value: 27.553 - type: mrr_at_3 value: 23.921999999999997 - type: mrr_at_5 value: 25.302999999999997 - type: ndcg_at_1 value: 18.905 - type: ndcg_at_10 value: 27.437 - type: ndcg_at_100 value: 32.555 - type: ndcg_at_1000 value: 35.885 - type: ndcg_at_3 value: 22.439 - type: ndcg_at_5 value: 24.666 - type: precision_at_1 value: 18.905 - type: precision_at_10 value: 5.2490000000000006 - type: precision_at_100 value: 0.889 - type: precision_at_1000 value: 0.131 - type: precision_at_3 value: 10.862 - type: precision_at_5 value: 8.085 - type: recall_at_1 value: 15.469 - type: recall_at_10 value: 38.706 - type: recall_at_100 value: 61.242 - type: recall_at_1000 value: 84.84 - type: recall_at_3 value: 24.973 - type: recall_at_5 value: 30.603 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackPhysicsRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 24.918000000000003 - type: map_at_10 value: 34.296 - type: map_at_100 value: 35.632000000000005 - type: map_at_1000 value: 35.748999999999995 - type: map_at_3 value: 
31.304 - type: map_at_5 value: 33.166000000000004 - type: mrr_at_1 value: 30.703000000000003 - type: mrr_at_10 value: 39.655 - type: mrr_at_100 value: 40.569 - type: mrr_at_1000 value: 40.621 - type: mrr_at_3 value: 37.023 - type: mrr_at_5 value: 38.664 - type: ndcg_at_1 value: 30.703000000000003 - type: ndcg_at_10 value: 39.897 - type: ndcg_at_100 value: 45.777 - type: ndcg_at_1000 value: 48.082 - type: ndcg_at_3 value: 35.122 - type: ndcg_at_5 value: 37.691 - type: precision_at_1 value: 30.703000000000003 - type: precision_at_10 value: 7.305000000000001 - type: precision_at_100 value: 1.208 - type: precision_at_1000 value: 0.159 - type: precision_at_3 value: 16.811 - type: precision_at_5 value: 12.203999999999999 - type: recall_at_1 value: 24.918000000000003 - type: recall_at_10 value: 51.31 - type: recall_at_100 value: 76.534 - type: recall_at_1000 value: 91.911 - type: recall_at_3 value: 37.855 - type: recall_at_5 value: 44.493 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackProgrammersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 22.416 - type: map_at_10 value: 30.474 - type: map_at_100 value: 31.759999999999998 - type: map_at_1000 value: 31.891000000000002 - type: map_at_3 value: 27.728 - type: map_at_5 value: 29.247 - type: mrr_at_1 value: 28.881 - type: mrr_at_10 value: 36.418 - type: mrr_at_100 value: 37.347 - type: mrr_at_1000 value: 37.415 - type: mrr_at_3 value: 33.942 - type: mrr_at_5 value: 35.386 - type: ndcg_at_1 value: 28.881 - type: ndcg_at_10 value: 35.812 - type: ndcg_at_100 value: 41.574 - type: ndcg_at_1000 value: 44.289 - type: ndcg_at_3 value: 31.239 - type: ndcg_at_5 value: 33.302 - type: precision_at_1 value: 28.881 - type: precision_at_10 value: 6.598 - type: precision_at_100 value: 1.1079999999999999 - type: precision_at_1000 value: 0.151 - type: precision_at_3 value: 14.954 - type: precision_at_5 value: 10.776 - type: recall_at_1 value: 22.416 - type: recall_at_10 value: 46.243 - type: recall_at_100 value: 71.352 - type: recall_at_1000 value: 90.034 - type: recall_at_3 value: 32.873000000000005 - type: recall_at_5 value: 38.632 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 22.528166666666667 - type: map_at_10 value: 30.317833333333333 - type: map_at_100 value: 31.44108333333333 - type: map_at_1000 value: 31.566666666666666 - type: map_at_3 value: 27.84425 - type: map_at_5 value: 29.233333333333334 - type: mrr_at_1 value: 26.75733333333333 - type: mrr_at_10 value: 34.24425 - type: mrr_at_100 value: 35.11375 - type: mrr_at_1000 value: 35.184333333333335 - type: mrr_at_3 value: 32.01225 - type: mrr_at_5 value: 33.31225 - type: ndcg_at_1 value: 26.75733333333333 - type: ndcg_at_10 value: 35.072583333333334 - type: ndcg_at_100 value: 40.13358333333334 - type: ndcg_at_1000 value: 42.81825 - type: ndcg_at_3 value: 30.79275000000001 - type: ndcg_at_5 value: 32.822 - type: precision_at_1 value: 26.75733333333333 - type: precision_at_10 value: 6.128083333333334 - type: precision_at_100 value: 1.019 - type: precision_at_1000 value: 0.14391666666666664 - type: precision_at_3 value: 14.129916666666665 - type: precision_at_5 value: 10.087416666666668 - type: recall_at_1 value: 22.528166666666667 - type: recall_at_10 value: 45.38341666666667 - type: recall_at_100 value: 67.81791666666668 - type: recall_at_1000 value: 86.71716666666666 - type: recall_at_3 value: 33.38741666666667 - type: 
recall_at_5 value: 38.62041666666667 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackStatsRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 21.975 - type: map_at_10 value: 28.144999999999996 - type: map_at_100 value: 28.994999999999997 - type: map_at_1000 value: 29.086000000000002 - type: map_at_3 value: 25.968999999999998 - type: map_at_5 value: 27.321 - type: mrr_at_1 value: 25 - type: mrr_at_10 value: 30.822 - type: mrr_at_100 value: 31.647 - type: mrr_at_1000 value: 31.712 - type: mrr_at_3 value: 28.860000000000003 - type: mrr_at_5 value: 30.041 - type: ndcg_at_1 value: 25 - type: ndcg_at_10 value: 31.929999999999996 - type: ndcg_at_100 value: 36.258 - type: ndcg_at_1000 value: 38.682 - type: ndcg_at_3 value: 27.972 - type: ndcg_at_5 value: 30.089 - type: precision_at_1 value: 25 - type: precision_at_10 value: 4.923 - type: precision_at_100 value: 0.767 - type: precision_at_1000 value: 0.106 - type: precision_at_3 value: 11.860999999999999 - type: precision_at_5 value: 8.466 - type: recall_at_1 value: 21.975 - type: recall_at_10 value: 41.102 - type: recall_at_100 value: 60.866 - type: recall_at_1000 value: 78.781 - type: recall_at_3 value: 30.268 - type: recall_at_5 value: 35.552 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackTexRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 15.845999999999998 - type: map_at_10 value: 21.861 - type: map_at_100 value: 22.798 - type: map_at_1000 value: 22.925 - type: map_at_3 value: 19.922 - type: map_at_5 value: 21.054000000000002 - type: mrr_at_1 value: 19.098000000000003 - type: mrr_at_10 value: 25.397 - type: mrr_at_100 value: 26.246000000000002 - type: mrr_at_1000 value: 26.33 - type: mrr_at_3 value: 23.469 - type: mrr_at_5 value: 24.646 - type: ndcg_at_1 value: 19.098000000000003 - type: ndcg_at_10 value: 25.807999999999996 - type: ndcg_at_100 value: 30.445 - type: ndcg_at_1000 value: 33.666000000000004 - type: ndcg_at_3 value: 22.292 - type: ndcg_at_5 value: 24.075 - type: precision_at_1 value: 19.098000000000003 - type: precision_at_10 value: 4.58 - type: precision_at_100 value: 0.8099999999999999 - type: precision_at_1000 value: 0.126 - type: precision_at_3 value: 10.346 - type: precision_at_5 value: 7.542999999999999 - type: recall_at_1 value: 15.845999999999998 - type: recall_at_10 value: 34.172999999999995 - type: recall_at_100 value: 55.24099999999999 - type: recall_at_1000 value: 78.644 - type: recall_at_3 value: 24.401 - type: recall_at_5 value: 28.938000000000002 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackUnixRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 22.974 - type: map_at_10 value: 30.108 - type: map_at_100 value: 31.208000000000002 - type: map_at_1000 value: 31.330999999999996 - type: map_at_3 value: 27.889999999999997 - type: map_at_5 value: 29.023 - type: mrr_at_1 value: 26.493 - type: mrr_at_10 value: 33.726 - type: mrr_at_100 value: 34.622 - type: mrr_at_1000 value: 34.703 - type: mrr_at_3 value: 31.575999999999997 - type: mrr_at_5 value: 32.690999999999995 - type: ndcg_at_1 value: 26.493 - type: ndcg_at_10 value: 34.664 - type: ndcg_at_100 value: 39.725 - type: ndcg_at_1000 value: 42.648 - type: ndcg_at_3 value: 30.447999999999997 - type: ndcg_at_5 value: 32.145 - type: precision_at_1 value: 26.493 - type: precision_at_10 value: 5.7090000000000005 - type: precision_at_100 value: 0.9199999999999999 - type: 
precision_at_1000 value: 0.129 - type: precision_at_3 value: 13.464 - type: precision_at_5 value: 9.384 - type: recall_at_1 value: 22.974 - type: recall_at_10 value: 45.097 - type: recall_at_100 value: 66.908 - type: recall_at_1000 value: 87.495 - type: recall_at_3 value: 33.338 - type: recall_at_5 value: 37.499 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWebmastersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 22.408 - type: map_at_10 value: 29.580000000000002 - type: map_at_100 value: 31.145 - type: map_at_1000 value: 31.369000000000003 - type: map_at_3 value: 27.634999999999998 - type: map_at_5 value: 28.766000000000002 - type: mrr_at_1 value: 27.272999999999996 - type: mrr_at_10 value: 33.93 - type: mrr_at_100 value: 34.963 - type: mrr_at_1000 value: 35.031 - type: mrr_at_3 value: 32.016 - type: mrr_at_5 value: 33.221000000000004 - type: ndcg_at_1 value: 27.272999999999996 - type: ndcg_at_10 value: 33.993 - type: ndcg_at_100 value: 40.333999999999996 - type: ndcg_at_1000 value: 43.361 - type: ndcg_at_3 value: 30.918 - type: ndcg_at_5 value: 32.552 - type: precision_at_1 value: 27.272999999999996 - type: precision_at_10 value: 6.285 - type: precision_at_100 value: 1.389 - type: precision_at_1000 value: 0.232 - type: precision_at_3 value: 14.427000000000001 - type: precision_at_5 value: 10.356 - type: recall_at_1 value: 22.408 - type: recall_at_10 value: 41.318 - type: recall_at_100 value: 70.539 - type: recall_at_1000 value: 90.197 - type: recall_at_3 value: 32.513 - type: recall_at_5 value: 37 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWordpressRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 17.258000000000003 - type: map_at_10 value: 24.294 - type: map_at_100 value: 25.305 - type: map_at_1000 value: 25.419999999999998 - type: map_at_3 value: 22.326999999999998 - type: map_at_5 value: 23.31 - type: mrr_at_1 value: 18.484 - type: mrr_at_10 value: 25.863999999999997 - type: mrr_at_100 value: 26.766000000000002 - type: mrr_at_1000 value: 26.855 - type: mrr_at_3 value: 23.968 - type: mrr_at_5 value: 24.911 - type: ndcg_at_1 value: 18.484 - type: ndcg_at_10 value: 28.433000000000003 - type: ndcg_at_100 value: 33.405 - type: ndcg_at_1000 value: 36.375 - type: ndcg_at_3 value: 24.455 - type: ndcg_at_5 value: 26.031 - type: precision_at_1 value: 18.484 - type: precision_at_10 value: 4.603 - type: precision_at_100 value: 0.773 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 10.659 - type: precision_at_5 value: 7.505000000000001 - type: recall_at_1 value: 17.258000000000003 - type: recall_at_10 value: 39.589999999999996 - type: recall_at_100 value: 62.592000000000006 - type: recall_at_1000 value: 84.917 - type: recall_at_3 value: 28.706 - type: recall_at_5 value: 32.224000000000004 - task: type: Retrieval dataset: type: climate-fever name: MTEB ClimateFEVER config: default split: test revision: None metrics: - type: map_at_1 value: 10.578999999999999 - type: map_at_10 value: 17.642 - type: map_at_100 value: 19.451 - type: map_at_1000 value: 19.647000000000002 - type: map_at_3 value: 14.618 - type: map_at_5 value: 16.145 - type: mrr_at_1 value: 23.322000000000003 - type: mrr_at_10 value: 34.204 - type: mrr_at_100 value: 35.185 - type: mrr_at_1000 value: 35.235 - type: mrr_at_3 value: 30.847 - type: mrr_at_5 value: 32.824 - type: ndcg_at_1 value: 23.322000000000003 - type: ndcg_at_10 value: 25.352999999999998 - type: 
ndcg_at_100 value: 32.574 - type: ndcg_at_1000 value: 36.073 - type: ndcg_at_3 value: 20.318 - type: ndcg_at_5 value: 22.111 - type: precision_at_1 value: 23.322000000000003 - type: precision_at_10 value: 8.02 - type: precision_at_100 value: 1.5730000000000002 - type: precision_at_1000 value: 0.22200000000000003 - type: precision_at_3 value: 15.049000000000001 - type: precision_at_5 value: 11.87 - type: recall_at_1 value: 10.578999999999999 - type: recall_at_10 value: 30.964999999999996 - type: recall_at_100 value: 55.986000000000004 - type: recall_at_1000 value: 75.565 - type: recall_at_3 value: 18.686 - type: recall_at_5 value: 23.629 - task: type: Retrieval dataset: type: dbpedia-entity name: MTEB DBPedia config: default split: test revision: None metrics: - type: map_at_1 value: 7.327 - type: map_at_10 value: 14.904 - type: map_at_100 value: 20.29 - type: map_at_1000 value: 21.42 - type: map_at_3 value: 10.911 - type: map_at_5 value: 12.791 - type: mrr_at_1 value: 57.25 - type: mrr_at_10 value: 66.62700000000001 - type: mrr_at_100 value: 67.035 - type: mrr_at_1000 value: 67.052 - type: mrr_at_3 value: 64.833 - type: mrr_at_5 value: 65.908 - type: ndcg_at_1 value: 43.75 - type: ndcg_at_10 value: 32.246 - type: ndcg_at_100 value: 35.774 - type: ndcg_at_1000 value: 42.872 - type: ndcg_at_3 value: 36.64 - type: ndcg_at_5 value: 34.487 - type: precision_at_1 value: 57.25 - type: precision_at_10 value: 25.924999999999997 - type: precision_at_100 value: 7.670000000000001 - type: precision_at_1000 value: 1.599 - type: precision_at_3 value: 41.167 - type: precision_at_5 value: 34.65 - type: recall_at_1 value: 7.327 - type: recall_at_10 value: 19.625 - type: recall_at_100 value: 41.601 - type: recall_at_1000 value: 65.117 - type: recall_at_3 value: 12.308 - type: recall_at_5 value: 15.437999999999999 - task: type: Classification dataset: type: mteb/emotion name: MTEB EmotionClassification config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 44.53 - type: f1 value: 39.39884255816736 - task: type: Retrieval dataset: type: fever name: MTEB FEVER config: default split: test revision: None metrics: - type: map_at_1 value: 58.913000000000004 - type: map_at_10 value: 69.592 - type: map_at_100 value: 69.95599999999999 - type: map_at_1000 value: 69.973 - type: map_at_3 value: 67.716 - type: map_at_5 value: 68.899 - type: mrr_at_1 value: 63.561 - type: mrr_at_10 value: 74.2 - type: mrr_at_100 value: 74.468 - type: mrr_at_1000 value: 74.47500000000001 - type: mrr_at_3 value: 72.442 - type: mrr_at_5 value: 73.58 - type: ndcg_at_1 value: 63.561 - type: ndcg_at_10 value: 74.988 - type: ndcg_at_100 value: 76.52799999999999 - type: ndcg_at_1000 value: 76.88000000000001 - type: ndcg_at_3 value: 71.455 - type: ndcg_at_5 value: 73.42699999999999 - type: precision_at_1 value: 63.561 - type: precision_at_10 value: 9.547 - type: precision_at_100 value: 1.044 - type: precision_at_1000 value: 0.109 - type: precision_at_3 value: 28.143 - type: precision_at_5 value: 18.008 - type: recall_at_1 value: 58.913000000000004 - type: recall_at_10 value: 87.18 - type: recall_at_100 value: 93.852 - type: recall_at_1000 value: 96.256 - type: recall_at_3 value: 77.55199999999999 - type: recall_at_5 value: 82.42399999999999 - task: type: Retrieval dataset: type: fiqa name: MTEB FiQA2018 config: default split: test revision: None metrics: - type: map_at_1 value: 11.761000000000001 - type: map_at_10 value: 19.564999999999998 - type: map_at_100 value: 21.099 - type: map_at_1000 
value: 21.288999999999998 - type: map_at_3 value: 16.683999999999997 - type: map_at_5 value: 18.307000000000002 - type: mrr_at_1 value: 23.302 - type: mrr_at_10 value: 30.979 - type: mrr_at_100 value: 32.121 - type: mrr_at_1000 value: 32.186 - type: mrr_at_3 value: 28.549000000000003 - type: mrr_at_5 value: 30.038999999999998 - type: ndcg_at_1 value: 23.302 - type: ndcg_at_10 value: 25.592 - type: ndcg_at_100 value: 32.416 - type: ndcg_at_1000 value: 36.277 - type: ndcg_at_3 value: 22.151 - type: ndcg_at_5 value: 23.483999999999998 - type: precision_at_1 value: 23.302 - type: precision_at_10 value: 7.377000000000001 - type: precision_at_100 value: 1.415 - type: precision_at_1000 value: 0.212 - type: precision_at_3 value: 14.712 - type: precision_at_5 value: 11.358 - type: recall_at_1 value: 11.761000000000001 - type: recall_at_10 value: 31.696 - type: recall_at_100 value: 58.01500000000001 - type: recall_at_1000 value: 81.572 - type: recall_at_3 value: 20.742 - type: recall_at_5 value: 25.707 - task: type: Retrieval dataset: type: hotpotqa name: MTEB HotpotQA config: default split: test revision: None metrics: - type: map_at_1 value: 32.275 - type: map_at_10 value: 44.712 - type: map_at_100 value: 45.621 - type: map_at_1000 value: 45.698 - type: map_at_3 value: 42.016999999999996 - type: map_at_5 value: 43.659 - type: mrr_at_1 value: 64.551 - type: mrr_at_10 value: 71.58099999999999 - type: mrr_at_100 value: 71.952 - type: mrr_at_1000 value: 71.96900000000001 - type: mrr_at_3 value: 70.236 - type: mrr_at_5 value: 71.051 - type: ndcg_at_1 value: 64.551 - type: ndcg_at_10 value: 53.913999999999994 - type: ndcg_at_100 value: 57.421 - type: ndcg_at_1000 value: 59.06 - type: ndcg_at_3 value: 49.716 - type: ndcg_at_5 value: 51.971999999999994 - type: precision_at_1 value: 64.551 - type: precision_at_10 value: 11.110000000000001 - type: precision_at_100 value: 1.388 - type: precision_at_1000 value: 0.161 - type: precision_at_3 value: 30.822 - type: precision_at_5 value: 20.273 - type: recall_at_1 value: 32.275 - type: recall_at_10 value: 55.55 - type: recall_at_100 value: 69.38600000000001 - type: recall_at_1000 value: 80.35799999999999 - type: recall_at_3 value: 46.232 - type: recall_at_5 value: 50.682 - task: type: Classification dataset: type: mteb/imdb name: MTEB ImdbClassification config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 76.4604 - type: ap value: 70.40498168422701 - type: f1 value: 76.38572688476046 - task: type: Retrieval dataset: type: msmarco name: MTEB MSMARCO config: default split: dev revision: None metrics: - type: map_at_1 value: 15.065999999999999 - type: map_at_10 value: 25.058000000000003 - type: map_at_100 value: 26.268 - type: map_at_1000 value: 26.344 - type: map_at_3 value: 21.626 - type: map_at_5 value: 23.513 - type: mrr_at_1 value: 15.501000000000001 - type: mrr_at_10 value: 25.548 - type: mrr_at_100 value: 26.723000000000003 - type: mrr_at_1000 value: 26.793 - type: mrr_at_3 value: 22.142 - type: mrr_at_5 value: 24.024 - type: ndcg_at_1 value: 15.501000000000001 - type: ndcg_at_10 value: 31.008000000000003 - type: ndcg_at_100 value: 37.08 - type: ndcg_at_1000 value: 39.102 - type: ndcg_at_3 value: 23.921999999999997 - type: ndcg_at_5 value: 27.307 - type: precision_at_1 value: 15.501000000000001 - type: precision_at_10 value: 5.155 - type: precision_at_100 value: 0.822 - type: precision_at_1000 value: 0.099 - type: precision_at_3 value: 10.363 - type: precision_at_5 value: 7.917000000000001 - type: 
recall_at_1 value: 15.065999999999999 - type: recall_at_10 value: 49.507 - type: recall_at_100 value: 78.118 - type: recall_at_1000 value: 93.881 - type: recall_at_3 value: 30.075000000000003 - type: recall_at_5 value: 38.222 - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (en) config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 90.6703146374829 - type: f1 value: 90.1258004293966 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (en) config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 68.29229366165072 - type: f1 value: 50.016194478997875 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (en) config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 68.57767316745124 - type: f1 value: 67.16194062146954 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (en) config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.92064559515804 - type: f1 value: 73.6680729569968 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-p2p name: MTEB MedrxivClusteringP2P config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 31.56335607367883 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-s2s name: MTEB MedrxivClusteringS2S config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 28.131807833734268 - task: type: Reranking dataset: type: mteb/mind_small name: MTEB MindSmallReranking config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 31.07390328719844 - type: mrr value: 32.117370992867905 - task: type: Retrieval dataset: type: nfcorpus name: MTEB NFCorpus config: default split: test revision: None metrics: - type: map_at_1 value: 5.274 - type: map_at_10 value: 11.489 - type: map_at_100 value: 14.518 - type: map_at_1000 value: 15.914 - type: map_at_3 value: 8.399 - type: map_at_5 value: 9.889000000000001 - type: mrr_at_1 value: 42.724000000000004 - type: mrr_at_10 value: 51.486 - type: mrr_at_100 value: 51.941 - type: mrr_at_1000 value: 51.99 - type: mrr_at_3 value: 49.278 - type: mrr_at_5 value: 50.485 - type: ndcg_at_1 value: 39.938 - type: ndcg_at_10 value: 31.862000000000002 - type: ndcg_at_100 value: 29.235 - type: ndcg_at_1000 value: 37.802 - type: ndcg_at_3 value: 35.754999999999995 - type: ndcg_at_5 value: 34.447 - type: precision_at_1 value: 42.105 - type: precision_at_10 value: 23.901 - type: precision_at_100 value: 7.715 - type: precision_at_1000 value: 2.045 - type: precision_at_3 value: 33.437 - type: precision_at_5 value: 29.782999999999998 - type: recall_at_1 value: 5.274 - type: recall_at_10 value: 15.351 - type: recall_at_100 value: 29.791 - type: recall_at_1000 value: 60.722 - type: recall_at_3 value: 9.411 - type: recall_at_5 value: 12.171999999999999 - task: type: Retrieval dataset: type: nq name: MTEB NQ config: default split: test revision: None metrics: - type: map_at_1 value: 16.099 - type: map_at_10 value: 27.913 - type: map_at_100 value: 29.281000000000002 - type: map_at_1000 value: 29.343999999999998 - type: map_at_3 value: 23.791 - type: map_at_5 value: 26.049 - type: 
mrr_at_1 value: 18.337 - type: mrr_at_10 value: 29.953999999999997 - type: mrr_at_100 value: 31.080999999999996 - type: mrr_at_1000 value: 31.130000000000003 - type: mrr_at_3 value: 26.168000000000003 - type: mrr_at_5 value: 28.277 - type: ndcg_at_1 value: 18.308 - type: ndcg_at_10 value: 34.938 - type: ndcg_at_100 value: 41.125 - type: ndcg_at_1000 value: 42.708 - type: ndcg_at_3 value: 26.805 - type: ndcg_at_5 value: 30.686999999999998 - type: precision_at_1 value: 18.308 - type: precision_at_10 value: 6.476999999999999 - type: precision_at_100 value: 0.9939999999999999 - type: precision_at_1000 value: 0.11399999999999999 - type: precision_at_3 value: 12.784999999999998 - type: precision_at_5 value: 9.878 - type: recall_at_1 value: 16.099 - type: recall_at_10 value: 54.63 - type: recall_at_100 value: 82.24900000000001 - type: recall_at_1000 value: 94.242 - type: recall_at_3 value: 33.174 - type: recall_at_5 value: 42.164 - task: type: Retrieval dataset: type: quora name: MTEB QuoraRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 67.947 - type: map_at_10 value: 81.499 - type: map_at_100 value: 82.17 - type: map_at_1000 value: 82.194 - type: map_at_3 value: 78.567 - type: map_at_5 value: 80.34400000000001 - type: mrr_at_1 value: 78.18 - type: mrr_at_10 value: 85.05 - type: mrr_at_100 value: 85.179 - type: mrr_at_1000 value: 85.181 - type: mrr_at_3 value: 83.91 - type: mrr_at_5 value: 84.638 - type: ndcg_at_1 value: 78.2 - type: ndcg_at_10 value: 85.715 - type: ndcg_at_100 value: 87.2 - type: ndcg_at_1000 value: 87.39 - type: ndcg_at_3 value: 82.572 - type: ndcg_at_5 value: 84.176 - type: precision_at_1 value: 78.2 - type: precision_at_10 value: 12.973 - type: precision_at_100 value: 1.5010000000000001 - type: precision_at_1000 value: 0.156 - type: precision_at_3 value: 35.949999999999996 - type: precision_at_5 value: 23.62 - type: recall_at_1 value: 67.947 - type: recall_at_10 value: 93.804 - type: recall_at_100 value: 98.971 - type: recall_at_1000 value: 99.91600000000001 - type: recall_at_3 value: 84.75399999999999 - type: recall_at_5 value: 89.32 - task: type: Clustering dataset: type: mteb/reddit-clustering name: MTEB RedditClustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 45.457201684255104 - task: type: Clustering dataset: type: mteb/reddit-clustering-p2p name: MTEB RedditClusteringP2P config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 55.162226937477875 - task: type: Retrieval dataset: type: scidocs name: MTEB SCIDOCS config: default split: test revision: None metrics: - type: map_at_1 value: 4.173 - type: map_at_10 value: 10.463000000000001 - type: map_at_100 value: 12.278 - type: map_at_1000 value: 12.572 - type: map_at_3 value: 7.528 - type: map_at_5 value: 8.863 - type: mrr_at_1 value: 20.599999999999998 - type: mrr_at_10 value: 30.422 - type: mrr_at_100 value: 31.6 - type: mrr_at_1000 value: 31.663000000000004 - type: mrr_at_3 value: 27.400000000000002 - type: mrr_at_5 value: 29.065 - type: ndcg_at_1 value: 20.599999999999998 - type: ndcg_at_10 value: 17.687 - type: ndcg_at_100 value: 25.172 - type: ndcg_at_1000 value: 30.617 - type: ndcg_at_3 value: 16.81 - type: ndcg_at_5 value: 14.499 - type: precision_at_1 value: 20.599999999999998 - type: precision_at_10 value: 9.17 - type: precision_at_100 value: 2.004 - type: precision_at_1000 value: 0.332 - type: precision_at_3 value: 15.6 - type: precision_at_5 
value: 12.58 - type: recall_at_1 value: 4.173 - type: recall_at_10 value: 18.575 - type: recall_at_100 value: 40.692 - type: recall_at_1000 value: 67.467 - type: recall_at_3 value: 9.488000000000001 - type: recall_at_5 value: 12.738 - task: type: STS dataset: type: mteb/sickr-sts name: MTEB SICK-R config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 81.12603499315416 - type: cos_sim_spearman value: 73.62060290948378 - type: euclidean_pearson value: 78.14083565781135 - type: euclidean_spearman value: 73.16840437541543 - type: manhattan_pearson value: 77.92017261109734 - type: manhattan_spearman value: 72.8805059949965 - task: type: STS dataset: type: mteb/sts12-sts name: MTEB STS12 config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 79.75955377133172 - type: cos_sim_spearman value: 71.8872633964069 - type: euclidean_pearson value: 76.31922068538256 - type: euclidean_spearman value: 70.86449661855376 - type: manhattan_pearson value: 76.47852229730407 - type: manhattan_spearman value: 70.99367421984789 - task: type: STS dataset: type: mteb/sts13-sts name: MTEB STS13 config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 78.80762722908158 - type: cos_sim_spearman value: 79.84588978756372 - type: euclidean_pearson value: 79.8216849781164 - type: euclidean_spearman value: 80.22647061695481 - type: manhattan_pearson value: 79.56604194112572 - type: manhattan_spearman value: 79.96495189862462 - task: type: STS dataset: type: mteb/sts14-sts name: MTEB STS14 config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 80.1012718092742 - type: cos_sim_spearman value: 76.86011381793661 - type: euclidean_pearson value: 79.94426039862019 - type: euclidean_spearman value: 77.36751135465131 - type: manhattan_pearson value: 79.87959373304288 - type: manhattan_spearman value: 77.37717129004746 - task: type: STS dataset: type: mteb/sts15-sts name: MTEB STS15 config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 83.90618420346104 - type: cos_sim_spearman value: 84.77290791243722 - type: euclidean_pearson value: 84.64732258073293 - type: euclidean_spearman value: 85.21053649543357 - type: manhattan_pearson value: 84.61616883522647 - type: manhattan_spearman value: 85.19803126766931 - task: type: STS dataset: type: mteb/sts16-sts name: MTEB STS16 config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 80.52192114059063 - type: cos_sim_spearman value: 81.9103244827937 - type: euclidean_pearson value: 80.99375176138985 - type: euclidean_spearman value: 81.540250641079 - type: manhattan_pearson value: 80.84979573396426 - type: manhattan_spearman value: 81.3742591621492 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (en-en) config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 85.82166001234197 - type: cos_sim_spearman value: 86.81857495659123 - type: euclidean_pearson value: 85.72798403202849 - type: euclidean_spearman value: 85.70482438950965 - type: manhattan_pearson value: 85.51579093130357 - type: manhattan_spearman value: 85.41233705379751 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (en) config: en split: 
test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 64.48071151079803 - type: cos_sim_spearman value: 65.37838108084044 - type: euclidean_pearson value: 64.67378947096257 - type: euclidean_spearman value: 65.39187147219869 - type: manhattan_pearson value: 65.35487466133208 - type: manhattan_spearman value: 65.51328499442272 - task: type: STS dataset: type: mteb/stsbenchmark-sts name: MTEB STSBenchmark config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 82.64702367823314 - type: cos_sim_spearman value: 82.49732953181818 - type: euclidean_pearson value: 83.05996062475664 - type: euclidean_spearman value: 82.28159546751176 - type: manhattan_pearson value: 82.98305503664952 - type: manhattan_spearman value: 82.18405771943928 - task: type: Reranking dataset: type: mteb/scidocs-reranking name: MTEB SciDocsRR config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 78.5744649318696 - type: mrr value: 93.35386291268645 - task: type: Retrieval dataset: type: scifact name: MTEB SciFact config: default split: test revision: None metrics: - type: map_at_1 value: 52.093999999999994 - type: map_at_10 value: 61.646 - type: map_at_100 value: 62.197 - type: map_at_1000 value: 62.22800000000001 - type: map_at_3 value: 58.411 - type: map_at_5 value: 60.585 - type: mrr_at_1 value: 55.00000000000001 - type: mrr_at_10 value: 62.690999999999995 - type: mrr_at_100 value: 63.139 - type: mrr_at_1000 value: 63.166999999999994 - type: mrr_at_3 value: 60.111000000000004 - type: mrr_at_5 value: 61.778 - type: ndcg_at_1 value: 55.00000000000001 - type: ndcg_at_10 value: 66.271 - type: ndcg_at_100 value: 68.879 - type: ndcg_at_1000 value: 69.722 - type: ndcg_at_3 value: 60.672000000000004 - type: ndcg_at_5 value: 63.929 - type: precision_at_1 value: 55.00000000000001 - type: precision_at_10 value: 9 - type: precision_at_100 value: 1.043 - type: precision_at_1000 value: 0.11100000000000002 - type: precision_at_3 value: 23.555999999999997 - type: precision_at_5 value: 16.2 - type: recall_at_1 value: 52.093999999999994 - type: recall_at_10 value: 79.567 - type: recall_at_100 value: 91.60000000000001 - type: recall_at_1000 value: 98.333 - type: recall_at_3 value: 64.633 - type: recall_at_5 value: 72.68299999999999 - task: type: PairClassification dataset: type: mteb/sprintduplicatequestions-pairclassification name: MTEB SprintDuplicateQuestions config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.83267326732673 - type: cos_sim_ap value: 95.77995366495178 - type: cos_sim_f1 value: 91.51180311401306 - type: cos_sim_precision value: 91.92734611503532 - type: cos_sim_recall value: 91.10000000000001 - type: dot_accuracy value: 99.63366336633663 - type: dot_ap value: 88.53996286967461 - type: dot_f1 value: 81.06537530266343 - type: dot_precision value: 78.59154929577464 - type: dot_recall value: 83.7 - type: euclidean_accuracy value: 99.82376237623762 - type: euclidean_ap value: 95.53192209281187 - type: euclidean_f1 value: 91.19683481701286 - type: euclidean_precision value: 90.21526418786692 - type: euclidean_recall value: 92.2 - type: manhattan_accuracy value: 99.82376237623762 - type: manhattan_ap value: 95.55642082191741 - type: manhattan_f1 value: 91.16186693147964 - type: manhattan_precision value: 90.53254437869822 - type: manhattan_recall value: 91.8 - type: max_accuracy value: 99.83267326732673 
- type: max_ap value: 95.77995366495178 - type: max_f1 value: 91.51180311401306 - task: type: Clustering dataset: type: mteb/stackexchange-clustering name: MTEB StackExchangeClustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 54.508462134213474 - task: type: Clustering dataset: type: mteb/stackexchange-clustering-p2p name: MTEB StackExchangeClusteringP2P config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 34.06549765184959 - task: type: Reranking dataset: type: mteb/stackoverflowdupquestions-reranking name: MTEB StackOverflowDupQuestions config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 49.43129549466616 - type: mrr value: 50.20613169510227 - task: type: Summarization dataset: type: mteb/summeval name: MTEB SummEval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 30.069516173193044 - type: cos_sim_spearman value: 29.872498354017353 - type: dot_pearson value: 28.80761257516063 - type: dot_spearman value: 28.397422678527708 - task: type: Retrieval dataset: type: trec-covid name: MTEB TRECCOVID config: default split: test revision: None metrics: - type: map_at_1 value: 0.169 - type: map_at_10 value: 1.208 - type: map_at_100 value: 5.925 - type: map_at_1000 value: 14.427000000000001 - type: map_at_3 value: 0.457 - type: map_at_5 value: 0.716 - type: mrr_at_1 value: 64 - type: mrr_at_10 value: 74.075 - type: mrr_at_100 value: 74.303 - type: mrr_at_1000 value: 74.303 - type: mrr_at_3 value: 71 - type: mrr_at_5 value: 72.89999999999999 - type: ndcg_at_1 value: 57.99999999999999 - type: ndcg_at_10 value: 50.376 - type: ndcg_at_100 value: 38.582 - type: ndcg_at_1000 value: 35.663 - type: ndcg_at_3 value: 55.592 - type: ndcg_at_5 value: 53.647999999999996 - type: precision_at_1 value: 64 - type: precision_at_10 value: 53.2 - type: precision_at_100 value: 39.6 - type: precision_at_1000 value: 16.218 - type: precision_at_3 value: 59.333000000000006 - type: precision_at_5 value: 57.599999999999994 - type: recall_at_1 value: 0.169 - type: recall_at_10 value: 1.423 - type: recall_at_100 value: 9.049999999999999 - type: recall_at_1000 value: 34.056999999999995 - type: recall_at_3 value: 0.48700000000000004 - type: recall_at_5 value: 0.792 - task: type: Retrieval dataset: type: webis-touche2020 name: MTEB Touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 1.319 - type: map_at_10 value: 7.112 - type: map_at_100 value: 12.588 - type: map_at_1000 value: 14.056 - type: map_at_3 value: 2.8049999999999997 - type: map_at_5 value: 4.68 - type: mrr_at_1 value: 18.367 - type: mrr_at_10 value: 33.94 - type: mrr_at_100 value: 35.193000000000005 - type: mrr_at_1000 value: 35.193000000000005 - type: mrr_at_3 value: 29.932 - type: mrr_at_5 value: 32.279 - type: ndcg_at_1 value: 15.306000000000001 - type: ndcg_at_10 value: 18.096 - type: ndcg_at_100 value: 30.512 - type: ndcg_at_1000 value: 42.148 - type: ndcg_at_3 value: 17.034 - type: ndcg_at_5 value: 18.509 - type: precision_at_1 value: 18.367 - type: precision_at_10 value: 18.776 - type: precision_at_100 value: 7.02 - type: precision_at_1000 value: 1.467 - type: precision_at_3 value: 19.048000000000002 - type: precision_at_5 value: 22.041 - type: recall_at_1 value: 1.319 - type: recall_at_10 value: 13.748 - type: recall_at_100 value: 43.972 - type: recall_at_1000 value: 79.557 
- type: recall_at_3 value: 4.042 - type: recall_at_5 value: 7.742 - task: type: Classification dataset: type: mteb/toxic_conversations_50k name: MTEB ToxicConversationsClassification config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 70.2282 - type: ap value: 13.995763859570426 - type: f1 value: 54.08126256731344 - task: type: Classification dataset: type: mteb/tweet_sentiment_extraction name: MTEB TweetSentimentExtractionClassification config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 57.64006791171477 - type: f1 value: 57.95841320748957 - task: type: Clustering dataset: type: mteb/twentynewsgroups-clustering name: MTEB TwentyNewsgroupsClustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 40.19267841788564 - task: type: PairClassification dataset: type: mteb/twittersemeval2015-pairclassification name: MTEB TwitterSemEval2015 config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 83.96614412588663 - type: cos_sim_ap value: 67.75985678572738 - type: cos_sim_f1 value: 64.04661542276222 - type: cos_sim_precision value: 60.406922357343305 - type: cos_sim_recall value: 68.15303430079156 - type: dot_accuracy value: 79.5732252488526 - type: dot_ap value: 51.30562107572645 - type: dot_f1 value: 53.120759837177744 - type: dot_precision value: 46.478037198258804 - type: dot_recall value: 61.97889182058047 - type: euclidean_accuracy value: 84.00786791440663 - type: euclidean_ap value: 67.58930214486998 - type: euclidean_f1 value: 64.424821579775 - type: euclidean_precision value: 59.4817958454322 - type: euclidean_recall value: 70.26385224274406 - type: manhattan_accuracy value: 83.87673600762949 - type: manhattan_ap value: 67.4250981523309 - type: manhattan_f1 value: 64.10286658015808 - type: manhattan_precision value: 57.96885001066781 - type: manhattan_recall value: 71.68865435356201 - type: max_accuracy value: 84.00786791440663 - type: max_ap value: 67.75985678572738 - type: max_f1 value: 64.424821579775 - task: type: PairClassification dataset: type: mteb/twitterurlcorpus-pairclassification name: MTEB TwitterURLCorpus config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 88.41347459929368 - type: cos_sim_ap value: 84.89261930113058 - type: cos_sim_f1 value: 77.13677607258877 - type: cos_sim_precision value: 74.88581164358733 - type: cos_sim_recall value: 79.52725592854944 - type: dot_accuracy value: 86.32359219156285 - type: dot_ap value: 79.29794992131094 - type: dot_f1 value: 72.84356337679777 - type: dot_precision value: 67.31761478675462 - type: dot_recall value: 79.35786880197105 - type: euclidean_accuracy value: 88.33585593976791 - type: euclidean_ap value: 84.73257641312746 - type: euclidean_f1 value: 76.83529582788195 - type: euclidean_precision value: 72.76294052863436 - type: euclidean_recall value: 81.3905143209116 - type: manhattan_accuracy value: 88.3086894089339 - type: manhattan_ap value: 84.66304891729399 - type: manhattan_f1 value: 76.8181650632165 - type: manhattan_precision value: 73.6864436744219 - type: manhattan_recall value: 80.22790267939637 - type: max_accuracy value: 88.41347459929368 - type: max_ap value: 84.89261930113058 - type: max_f1 value: 77.13677607258877 license: mit --- # bge-micro-v2 This is a [sentence-transformers](https://www.SBERT.net) 
model: It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search. It was distilled in a two-step training process (bge-micro was step 1) from `BAAI/bge-small-en-v1.5`.

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

# '{MODEL_NAME}' is an unfilled template placeholder in this card;
# substitute this model's Hugging Face Hub ID.
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```

## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch

# Mean pooling - take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
# ('{MODEL_NAME}' is the unfilled placeholder; substitute this model's Hub ID)
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```

## Evaluation Results

<!--- Describe how your model was evaluated -->

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})

## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```

## Citing & Authors

<!--- Describe where people can find more information -->
tner/roberta-large-ontonotes5
tner
"2022-09-26T14:12:05Z"
356,401
16
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "dataset:tner/ontonotes5", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2022-08-12T10:33:41Z"
---
datasets:
- tner/ontonotes5
metrics:
- f1
- precision
- recall
model-index:
- name: tner/roberta-large-ontonotes5
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: tner/ontonotes5
      type: tner/ontonotes5
      args: tner/ontonotes5
    metrics:
    - name: F1
      type: f1
      value: 0.908632361399938
    - name: Precision
      type: precision
      value: 0.905148095909732
    - name: Recall
      type: recall
      value: 0.9121435551212579
    - name: F1 (macro)
      type: f1_macro
      value: 0.8265477704565624
    - name: Precision (macro)
      type: precision_macro
      value: 0.8170668848546687
    - name: Recall (macro)
      type: recall_macro
      value: 0.8387672780349001
    - name: F1 (entity span)
      type: f1_entity_span
      value: 0.9284544931640193
    - name: Precision (entity span)
      type: precision_entity_span
      value: 0.9248942172073342
    - name: Recall (entity span)
      type: recall_entity_span
      value: 0.9320422848005685
pipeline_tag: token-classification
widget:
- text: "Jacob Collier is a Grammy awarded artist from England."
  example_title: "NER Example 1"
---

# tner/roberta-large-ontonotes5

This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the [tner/ontonotes5](https://huggingface.co/datasets/tner/ontonotes5) dataset. Model fine-tuning is done via [T-NER](https://github.com/asahi417/tner)'s hyper-parameter search (see the repository for more detail). It achieves the following results on the test set:

- F1 (micro): 0.908632361399938
- Precision (micro): 0.905148095909732
- Recall (micro): 0.9121435551212579
- F1 (macro): 0.8265477704565624
- Precision (macro): 0.8170668848546687
- Recall (macro): 0.8387672780349001

The per-entity breakdown of the F1 scores on the test set is below:

- cardinal_number: 0.8605277329025309
- date: 0.872996300863132
- event: 0.7424242424242424
- facility: 0.7732342007434945
- geopolitical_area: 0.9687148323205043
- group: 0.9470588235294117
- language: 0.7499999999999999
- law: 0.6666666666666666
- location: 0.7593582887700535
- money: 0.901098901098901
- ordinal_number: 0.85785536159601
- organization: 0.9227360841872057
- percent: 0.9171428571428571
- person: 0.9556004036326943
- product: 0.7857142857142858
- quantity: 0.7945205479452055
- time: 0.6870588235294116
- work_of_art: 0.7151515151515151

For the F1 scores, the confidence intervals are obtained by bootstrap as below:

- F1 (micro):
    - 90%: [0.9039454247544766, 0.9128956119702822]
    - 95%: [0.9030263216115454, 0.9138350859566045]
- F1 (macro):
    - 90%: [0.9039454247544766, 0.9128956119702822]
    - 95%: [0.9030263216115454, 0.9138350859566045]

Full evaluation can be found at [metric file of NER](https://huggingface.co/tner/roberta-large-ontonotes5/raw/main/eval/metric.json) and [metric file of entity span](https://huggingface.co/tner/roberta-large-ontonotes5/raw/main/eval/metric_span.json).

### Usage

This model can be used through the [tner library](https://github.com/asahi417/tner). Install the library via pip:

```shell
pip install tner
```

and activate the model as below.

```python
from tner import TransformersNER
model = TransformersNER("tner/roberta-large-ontonotes5")
model.predict(["Jacob Collier is a Grammy awarded English artist from London"])
```

It can also be used via the transformers library, but this is not recommended, as the CRF layer is not supported at the moment.
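If you nevertheless want a quick plain-`transformers` check, here is a minimal sketch; since it bypasses the CRF layer, the decoded entity spans may differ slightly from the tner output above.

```python
from transformers import pipeline

# Token-classification pipeline without CRF decoding; spans may differ
# slightly from tner's CRF-decoded predictions.
ner = pipeline(
    "token-classification",
    model="tner/roberta-large-ontonotes5",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)
print(ner("Jacob Collier is a Grammy awarded artist from England."))
```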
### Training hyperparameters

The following hyperparameters were used during training:

- dataset: ['tner/ontonotes5']
- dataset_split: train
- dataset_name: None
- local_dataset: None
- model: roberta-large
- crf: True
- max_length: 128
- epoch: 15
- batch_size: 64
- lr: 1e-05
- random_seed: 42
- gradient_accumulation_steps: 1
- weight_decay: None
- lr_warmup_step_ratio: 0.1
- max_grad_norm: 10.0

The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/tner/roberta-large-ontonotes5/raw/main/trainer_config.json).

### Reference

If you use any resource from T-NER, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).

```
@inproceedings{ushio-camacho-collados-2021-ner,
    title = "{T}-{NER}: An All-Round Python Library for Transformer-based Named Entity Recognition",
    author = "Ushio, Asahi and Camacho-Collados, Jose",
    booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations",
    month = apr,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.eacl-demos.7",
    doi = "10.18653/v1/2021.eacl-demos.7",
    pages = "53--62",
    abstract = "Language model (LM) pretraining has led to consistent improvements in many NLP downstream tasks, including named entity recognition (NER). In this paper, we present T-NER (Transformer-based Named Entity Recognition), a Python library for NER LM finetuning. In addition to its practical utility, T-NER facilitates the study and investigation of the cross-domain and cross-lingual generalization ability of LMs finetuned on NER. Our library also provides a web app where users can get model predictions interactively for arbitrary text, which facilitates qualitative model evaluation for non-expert programmers. We show the potential of the library by compiling nine public NER datasets into a unified format and evaluating the cross-domain and cross-lingual performance across the datasets. The results from our initial experiments show that in-domain performance is generally competitive across datasets. However, cross-domain generalization is challenging even with a large pretrained LM, which has nevertheless capacity to learn domain-specific features if fine-tuned on a combined dataset. To facilitate future research, we also release all our LM checkpoints via the Hugging Face model hub.",
}
```
apple/mobilevit-small
apple
"2022-08-29T07:57:51Z"
356,031
54
transformers
[ "transformers", "pytorch", "tf", "coreml", "mobilevit", "image-classification", "vision", "dataset:imagenet-1k", "arxiv:2110.02178", "license:other", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2022-05-30T12:43:23Z"
--- license: other tags: - vision - image-classification datasets: - imagenet-1k widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace --- # MobileViT (small-sized model) MobileViT model pre-trained on ImageNet-1k at resolution 256x256. It was introduced in [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) by Sachin Mehta and Mohammad Rastegari, and first released in [this repository](https://github.com/apple/ml-cvnets). The license used is [Apple sample code license](https://github.com/apple/ml-cvnets/blob/main/LICENSE). Disclaimer: The team releasing MobileViT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description MobileViT is a light-weight, low latency convolutional neural network that combines MobileNetV2-style layers with a new block that replaces local processing in convolutions with global processing using transformers. As with ViT (Vision Transformer), the image data is converted into flattened patches before it is processed by the transformer layers. Afterwards, the patches are "unflattened" back into feature maps. This allows the MobileViT-block to be placed anywhere inside a CNN. MobileViT does not require any positional embeddings. ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=mobilevit) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import MobileViTFeatureExtractor, MobileViTForImageClassification from PIL import Image import requests url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) feature_extractor = MobileViTFeatureExtractor.from_pretrained("apple/mobilevit-small") model = MobileViTForImageClassification.from_pretrained("apple/mobilevit-small") inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # model predicts one of the 1000 ImageNet classes predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ``` Currently, both the feature extractor and model support PyTorch. ## Training data The MobileViT model was pretrained on [ImageNet-1k](https://huggingface.co/datasets/imagenet-1k), a dataset consisting of 1 million images and 1,000 classes. ## Training procedure ### Preprocessing Training requires only basic data augmentation, i.e. random resized cropping and horizontal flipping. To learn multi-scale representations without requiring fine-tuning, a multi-scale sampler was used during training, with image sizes randomly sampled from: (160, 160), (192, 192), (256, 256), (288, 288), (320, 320). At inference time, images are resized/rescaled to the same resolution (288x288), and center-cropped at 256x256. Pixels are normalized to the range [0, 1]. Images are expected to be in BGR pixel order, not RGB. 
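To make the preprocessing description above concrete, here is a minimal sketch of the described inference-time pipeline (resize to 288x288, center-crop at 256x256, scale pixels to [0, 1], flip RGB to BGR). The interpolation mode and file name are illustrative assumptions; in practice `MobileViTFeatureExtractor` handles all of this for you.

```python
import torch
from PIL import Image
from torchvision import transforms

# Inference-time preprocessing as described above (illustrative sketch).
preprocess = transforms.Compose([
    transforms.Resize((288, 288)),  # resize/rescale to 288x288
    transforms.CenterCrop(256),     # center-crop at 256x256
    transforms.ToTensor(),          # scales pixel values to [0, 1]
])

image = Image.open("example.jpg").convert("RGB")  # hypothetical input file
pixel_values = preprocess(image)
pixel_values = pixel_values[[2, 1, 0], :, :]  # reorder channels RGB -> BGR
batch = pixel_values.unsqueeze(0)             # shape: (1, 3, 256, 256)
```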
### Pretraining The MobileViT networks are trained from scratch for 300 epochs on ImageNet-1k on 8 NVIDIA GPUs with an effective batch size of 1024 and learning rate warmup for 3k steps, followed by cosine annealing. Also used were label smoothing cross-entropy loss and L2 weight decay. Training resolution varies from 160x160 to 320x320, using multi-scale sampling. ## Evaluation results | Model | ImageNet top-1 accuracy | ImageNet top-5 accuracy | # params | URL | |------------------|-------------------------|-------------------------|-----------|-------------------------------------------------| | MobileViT-XXS | 69.0 | 88.9 | 1.3 M | https://huggingface.co/apple/mobilevit-xx-small | | MobileViT-XS | 74.8 | 92.3 | 2.3 M | https://huggingface.co/apple/mobilevit-x-small | | **MobileViT-S** | **78.4** | **94.1** | **5.6 M** | https://huggingface.co/apple/mobilevit-small | ### BibTeX entry and citation info ```bibtex @inproceedings{vision-transformer, title = {MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer}, author = {Sachin Mehta and Mohammad Rastegari}, year = {2022}, URL = {https://arxiv.org/abs/2110.02178} } ```
sentence-transformers/LaBSE
sentence-transformers
"2024-03-27T09:39:35Z"
353,805
224
sentence-transformers
[ "sentence-transformers", "pytorch", "tf", "jax", "onnx", "safetensors", "bert", "feature-extraction", "sentence-similarity", "multilingual", "af", "sq", "am", "ar", "hy", "as", "az", "eu", "be", "bn", "bs", "bg", "my", "ca", "ceb", "zh", "co", "hr", "cs", "da", "nl", "en", "eo", "et", "fi", "fr", "fy", "gl", "ka", "de", "el", "gu", "ht", "ha", "haw", "he", "hi", "hmn", "hu", "is", "ig", "id", "ga", "it", "ja", "jv", "kn", "kk", "km", "rw", "ko", "ku", "ky", "lo", "la", "lv", "lt", "lb", "mk", "mg", "ms", "ml", "mt", "mi", "mr", "mn", "ne", "no", "ny", "or", "fa", "pl", "pt", "pa", "ro", "ru", "sm", "gd", "sr", "st", "sn", "si", "sk", "sl", "so", "es", "su", "sw", "sv", "tl", "tg", "ta", "tt", "te", "th", "bo", "tr", "tk", "ug", "uk", "ur", "uz", "vi", "cy", "wo", "xh", "yi", "yo", "zu", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
"2022-03-02T23:29:05Z"
--- language: - multilingual - af - sq - am - ar - hy - as - az - eu - be - bn - bs - bg - my - ca - ceb - zh - co - hr - cs - da - nl - en - eo - et - fi - fr - fy - gl - ka - de - el - gu - ht - ha - haw - he - hi - hmn - hu - is - ig - id - ga - it - ja - jv - kn - kk - km - rw - ko - ku - ky - lo - la - lv - lt - lb - mk - mg - ms - ml - mt - mi - mr - mn - ne - no - ny - or - fa - pl - pt - pa - ro - ru - sm - gd - sr - st - sn - si - sk - sl - so - es - su - sw - sv - tl - tg - ta - tt - te - th - bo - tr - tk - ug - uk - ur - uz - vi - cy - wo - xh - yi - yo - zu pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity library_name: sentence-transformers license: apache-2.0 --- # LaBSE This is a port of the [LaBSE](https://tfhub.dev/google/LaBSE/1) model to PyTorch. It can be used to map 109 languages to a shared vector space. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/LaBSE') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/LaBSE) ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Dense({'in_features': 768, 'out_features': 768, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'}) (3): Normalize() ) ``` ## Citing & Authors Have a look at [LaBSE](https://tfhub.dev/google/LaBSE/1) for the respective publication that describes LaBSE.
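As a quick cross-lingual sanity check (added here for illustration): because LaBSE maps all supported languages into one shared vector space, translation pairs should embed close together. A minimal sketch with illustrative sentences:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('sentence-transformers/LaBSE')

english = ["dog", "Puppies are nice.", "I enjoy taking long walks along the beach with my dog."]
italian = ["cane", "I cuccioli sono carini.", "Mi piace fare lunghe passeggiate in spiaggia con il mio cane."]

# LaBSE embeddings are L2-normalized by the final Normalize module,
# so cosine similarity is equivalent to a dot product here.
similarities = util.cos_sim(model.encode(english), model.encode(italian))
print(similarities)  # the diagonal (translation pairs) should score highest
```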
microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext
microsoft
"2023-11-06T18:03:43Z"
352,659
191
transformers
[ "transformers", "pytorch", "jax", "bert", "fill-mask", "exbert", "en", "arxiv:2007.15779", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2022-03-02T23:29:05Z"
--- language: en tags: - exbert license: mit widget: - text: "[MASK] is a tumor suppressor gene." --- ## MSR BiomedBERT (abstracts + full text) <div style="border: 2px solid orange; border-radius:10px; padding:0px 10px; width: fit-content;"> * This model was previously named **"PubMedBERT (abstracts + full text)"**. * You can either adopt the new model name "microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext" or update your `transformers` library to version 4.22+ if you need to refer to the old name. </div> Pretraining large neural language models, such as BERT, has led to impressive gains on many natural language processing (NLP) tasks. However, most pretraining efforts focus on general domain corpora, such as newswire and Web. A prevailing assumption is that even domain-specific pretraining can benefit by starting from general-domain language models. [Recent work](https://arxiv.org/abs/2007.15779) shows that for domains with abundant unlabeled text, such as biomedicine, pretraining language models from scratch results in substantial gains over continual pretraining of general-domain language models. BiomedBERT is pretrained from scratch using _abstracts_ from [PubMed](https://pubmed.ncbi.nlm.nih.gov/) and _full-text_ articles from [PubMedCentral](https://www.ncbi.nlm.nih.gov/pmc/). This model achieves state-of-the-art performance on many biomedical NLP tasks, and currently holds the top score on the [Biomedical Language Understanding and Reasoning Benchmark](https://aka.ms/BLURB). ## Citation If you find BiomedBERT useful in your research, please cite the following paper: ```latex @misc{pubmedbert, author = {Yu Gu and Robert Tinn and Hao Cheng and Michael Lucas and Naoto Usuyama and Xiaodong Liu and Tristan Naumann and Jianfeng Gao and Hoifung Poon}, title = {Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing}, year = {2020}, eprint = {arXiv:2007.15779}, } ``` <a href="https://huggingface.co/exbert/?model=microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext&modelKind=bidirectional&sentence=Gefitinib%20is%20an%20EGFR%20tyrosine%20kinase%20inhibitor,%20which%20is%20often%20used%20for%20breast%20cancer%20and%20NSCLC%20treatment.&layer=3&heads=..0,1,2,3,4,5,6,7,8,9,10,11&threshold=0.7&tokenInd=17&tokenSide=right&maskInds=..&hideClsSep=true"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
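The widget example above can be reproduced locally with the standard `transformers` fill-mask pipeline; a minimal sketch:

```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext",
)

# Same prompt as the widget above.
for prediction in fill_mask("[MASK] is a tumor suppressor gene."):
    print(f"{prediction['token_str']}: {prediction['score']:.3f}")
```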
nfliu/deberta-v3-large_boolq
nfliu
"2023-09-08T05:40:57Z"
350,154
2
transformers
[ "transformers", "pytorch", "safetensors", "deberta-v2", "text-classification", "generated_from_trainer", "dataset:boolq", "base_model:microsoft/deberta-v3-large", "base_model:finetune:microsoft/deberta-v3-large", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2023-09-07T05:55:24Z"
--- license: mit base_model: microsoft/deberta-v3-large tags: - generated_from_trainer datasets: - boolq metrics: - accuracy model-index: - name: deberta-v3-large_boolq results: - task: name: Text Classification type: text-classification dataset: name: boolq type: boolq config: default split: validation args: default metrics: - name: Accuracy type: accuracy value: 0.8834862385321101 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-v3-large_boolq This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the boolq dataset. It achieves the following results on the evaluation set: - Loss: 0.4601 - Accuracy: 0.8835 ## Model description More information needed ## Example ``` import torch from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("nfliu/deberta-v3-large_boolq") tokenizer = AutoTokenizer.from_pretrained("nfliu/deberta-v3-large_boolq") # Each example is a (question, context) pair. examples = [ ("Lake Tahoe is in California", "Lake Tahoe is a popular tourist spot in California."), ("Water is wet", "Contrary to popular belief, water is not wet.") ] encoded_input = tokenizer(examples, padding=True, truncation=True, return_tensors="pt") with torch.no_grad(): model_output = model(**encoded_input) probabilities = torch.softmax(model_output.logits, dim=-1).cpu().tolist() probability_no = [round(prob[0], 2) for prob in probabilities] probability_yes = [round(prob[1], 2) for prob in probabilities] for example, p_no, p_yes in zip(examples, probability_no, probability_yes): print(f"Question: {example[0]}") print(f"Context: {example[1]}") print(f"p(No | question, context): {p_no}") print(f"p(Yes | question, context): {p_yes}") print() ``` ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.85 | 250 | 0.5306 | 0.8823 | | 0.1151 | 1.69 | 500 | 0.4601 | 0.8835 | | 0.1151 | 2.54 | 750 | 0.5897 | 0.8792 | | 0.0656 | 3.39 | 1000 | 0.6477 | 0.8804 | | 0.0656 | 4.24 | 1250 | 0.6847 | 0.8838 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.0.1+cu117 - Datasets 2.14.4 - Tokenizers 0.13.3
emrecan/bert-base-turkish-cased-mean-nli-stsb-tr
emrecan
"2022-01-24T23:55:40Z"
348,947
30
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "tr", "dataset:nli_tr", "dataset:emrecan/stsb-mt-turkish", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
"2022-03-02T23:29:05Z"
--- language: - tr pipeline_tag: sentence-similarity license: apache-2.0 tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers datasets: - nli_tr - emrecan/stsb-mt-turkish widget: source_sentence: "Bu çok mutlu bir kişi" sentences: - "Bu mutlu bir köpek" - "Bu sevincinden havalara uçan bir insan" - "Çok kar yağıyor" --- # emrecan/bert-base-turkish-cased-mean-nli-stsb-tr This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. The model was trained on Turkish machine translated versions of [NLI](https://huggingface.co/datasets/nli_tr) and [STS-b](https://huggingface.co/datasets/emrecan/stsb-mt-turkish) datasets, using example [training scripts]( https://github.com/UKPLab/sentence-transformers/tree/master/examples/training) from sentence-transformers GitHub repository. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["Bu örnek bir cümle", "Her cümle vektöre çevriliyor"] model = SentenceTransformer('emrecan/bert-base-turkish-cased-mean-nli-stsb-tr') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ["Bu örnek bir cümle", "Her cümle vektöre çevriliyor"] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('emrecan/bert-base-turkish-cased-mean-nli-stsb-tr') model = AutoModel.from_pretrained('emrecan/bert-base-turkish-cased-mean-nli-stsb-tr') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results Evaluation results on test and development sets are given below: | Split | Epoch | cosine_pearson | cosine_spearman | euclidean_pearson | euclidean_spearman | manhattan_pearson | manhattan_spearman | dot_pearson | dot_spearman | |------------|-------|----------------|-----------------|-------------------|--------------------|-------------------|--------------------|-------------|--------------| | test | - | 0.834 | 0.830 | 0.820 | 0.819 | 0.819 | 0.818 | 0.799 | 0.789 | | validation | 1 | 0.850 | 0.848 | 0.831 | 0.835 | 0.83 | 0.83 | 0.80 | 0.806 | | validation | 2 | 0.857 | 0.857 | 0.844 | 0.848 | 0.844 | 0.848 | 0.813 | 0.810 | | validation | 3 | 0.860 | 0.859 | 0.846 | 0.851 | 0.846 | 0.850 | 0.825 | 0.822 | | validation | 4 | 0.859 | 0.860 | 0.846 | 0.851 | 0.846 | 0.851 | 0.825 | 0.823 | ## Training Training scripts [`training_nli_v2.py`](https://github.com/UKPLab/sentence-transformers/blob/master/examples/training/nli/training_nli_v2.py) and [`training_stsbenchmark_continue_training.py`](https://github.com/UKPLab/sentence-transformers/blob/master/examples/training/sts/training_stsbenchmark_continue_training.py) were used to train the model. The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 360 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 4, "evaluation_steps": 200, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 144, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
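As an illustration of the sentence-similarity task this model targets, here is a minimal sketch using the sentences from the widget configuration above:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('emrecan/bert-base-turkish-cased-mean-nli-stsb-tr')

source = "Bu çok mutlu bir kişi"
candidates = [
    "Bu mutlu bir köpek",
    "Bu sevincinden havalara uçan bir insan",
    "Çok kar yağıyor",
]

# Cosine similarity between the source sentence and each candidate.
scores = util.cos_sim(model.encode(source), model.encode(candidates))[0]
for sentence, score in zip(candidates, scores):
    print(f"{score:.3f}  {sentence}")
```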
flair/ner-french
flair
"2023-04-07T09:54:46Z"
348,767
11
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "fr", "dataset:conll2003", "region:us" ]
token-classification
"2022-03-02T23:29:05Z"
--- tags: - flair - token-classification - sequence-tagger-model language: fr datasets: - conll2003 widget: - text: "George Washington est allé à Washington" --- ## French NER in Flair (default model) This is the standard 4-class NER model for French that ships with [Flair](https://github.com/flairNLP/flair/). F1-Score: **90,61** (WikiNER) Predicts 4 tags: | **tag** | **meaning** | |---------------------------------|-----------| | PER | person name | | LOC | location name | | ORG | organization name | | MISC | other name | Based on [Flair embeddings](https://www.aclweb.org/anthology/C18-1139/) and LSTM-CRF. --- ### Demo: How to use in Flair Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`) ```python from flair.data import Sentence from flair.models import SequenceTagger # load tagger tagger = SequenceTagger.load("flair/ner-french") # make example sentence sentence = Sentence("George Washington est allé à Washington") # predict NER tags tagger.predict(sentence) # print sentence print(sentence) # print predicted NER spans print('The following NER tags are found:') # iterate over entities and print for entity in sentence.get_spans('ner'): print(entity) ``` This yields the following output: ``` Span [1,2]: "George Washington" [− Labels: PER (0.7394)] Span [6]: "Washington" [− Labels: LOC (0.9161)] ``` So, the entities "*George Washington*" (labeled as a **person**) and "*Washington*" (labeled as a **location**) are found in the sentence "*George Washington est allé à Washington*". --- ### Training: Script to train this model The following Flair script was used to train this model: ```python from flair.data import Corpus from flair.datasets import WIKINER_FRENCH from flair.embeddings import WordEmbeddings, StackedEmbeddings, FlairEmbeddings # 1. get the corpus corpus: Corpus = WIKINER_FRENCH() # 2. what tag do we want to predict? tag_type = 'ner' # 3. make the tag dictionary from the corpus tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type) # 4. initialize each embedding we use embedding_types = [ # GloVe embeddings WordEmbeddings('fr'), # contextual string embeddings, forward FlairEmbeddings('fr-forward'), # contextual string embeddings, backward FlairEmbeddings('fr-backward'), ] # embedding stack consists of Flair and GloVe embeddings embeddings = StackedEmbeddings(embeddings=embedding_types) # 5. initialize sequence tagger from flair.models import SequenceTagger tagger = SequenceTagger(hidden_size=256, embeddings=embeddings, tag_dictionary=tag_dictionary, tag_type=tag_type) # 6. initialize trainer from flair.trainers import ModelTrainer trainer = ModelTrainer(tagger, corpus) # 7. run training trainer.train('resources/taggers/ner-french', train_with_dev=True, max_epochs=150) ``` --- ### Cite Please cite the following paper when using this model. ``` @inproceedings{akbik2018coling, title={Contextual String Embeddings for Sequence Labeling}, author={Akbik, Alan and Blythe, Duncan and Vollgraf, Roland}, booktitle = {{COLING} 2018, 27th International Conference on Computational Linguistics}, pages = {1638--1649}, year = {2018} } ``` --- ### Issues? The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/).
zake7749/gemma-2-2b-it-chinese-kyara-dpo
zake7749
"2024-10-28T12:21:29Z"
348,318
6
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "conversational", "zh", "en", "dataset:zake7749/kyara-chinese-math-sft-s0-30K", "dataset:zake7749/kyara-chinese-preference-rl-dpo-s0-30K", "dataset:zake7749/chinese-sft-stem-zh-hant", "dataset:zake7749/chinese-sft-stem-zh-hans", "arxiv:2304.12244", "arxiv:2308.07074", "base_model:google/gemma-2-2b-it", "base_model:finetune:google/gemma-2-2b-it", "license:gemma", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-08-18T18:42:16Z"
--- language: - zh - en library_name: transformers base_model: - google/gemma-2-2b-it datasets: - zake7749/kyara-chinese-math-sft-s0-30K - zake7749/kyara-chinese-preference-rl-dpo-s0-30K - zake7749/chinese-sft-stem-zh-hant - zake7749/chinese-sft-stem-zh-hans model-index: - name: gemma-2-2b-it-chinese-kyara-dpo results: - task: type: text-generation name: Text Generation dataset: name: IFEval (0-Shot) type: HuggingFaceH4/ifeval args: num_few_shot: 0 metrics: - type: inst_level_strict_acc and prompt_level_strict_acc value: 53.82 name: strict accuracy source: url: >- https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=zake7749/gemma-2-2b-it-chinese-kyara-dpo name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: BBH (3-Shot) type: BBH args: num_few_shot: 3 metrics: - type: acc_norm value: 19.06 name: normalized accuracy source: url: >- https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=zake7749/gemma-2-2b-it-chinese-kyara-dpo name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MATH Lvl 5 (4-Shot) type: hendrycks/competition_math args: num_few_shot: 4 metrics: - type: exact_match value: 6.12 name: exact match source: url: >- https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=zake7749/gemma-2-2b-it-chinese-kyara-dpo name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GPQA (0-shot) type: Idavidrein/gpqa args: num_few_shot: 0 metrics: - type: acc_norm value: 2.24 name: acc_norm source: url: >- https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=zake7749/gemma-2-2b-it-chinese-kyara-dpo name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MuSR (0-shot) type: TAUR-Lab/MuSR args: num_few_shot: 0 metrics: - type: acc_norm value: 16.76 name: acc_norm source: url: >- https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=zake7749/gemma-2-2b-it-chinese-kyara-dpo name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU-PRO (5-shot) type: TIGER-Lab/MMLU-Pro config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 17.48 name: accuracy source: url: >- https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=zake7749/gemma-2-2b-it-chinese-kyara-dpo name: Open LLM Leaderboard license: gemma --- # Kyara: Knowledge Yielding Adaptive Retrieval Augmentation for LLM Fine-tuning [![DOI](https://zenodo.org/badge/844304447.svg)](https://zenodo.org/badge/latestdoi/844304447) <p align="left"> 🤗 <a href="https://huggingface.co/zake7749/gemma-2-2b-it-chinese-kyara-dpo">Hugging Face</a>&nbsp; | 🚀<a href="https://github.com/zake7749/kyara">Github</a>&nbsp; | &nbsp;📑 <a href="#">Paper</a>&nbsp; | &nbsp;📖 <a href="https://github.com/zake7749/kyara/blob/main/document/README_EN.md">English</a>&nbsp; | &nbsp;📖 <a href="https://github.com/zake7749/kyara">Chinese</a>&nbsp; | &nbsp;💻 <a href="https://www.kaggle.com/code/zake7749/kyara-a-compact-yet-powerful-chinese-llm">Kaggle Notebook</a> </p> <div style="text-align: center;"> <img src="https://i.imgur.com/QiWlcYJ.jpeg" alt="kyara"/> </div> Kyara (Knowledge Yielding Adaptive Retrieval Augmentation) is an experimental project aimed at improving language models through knowledge retrieval processes. 
The project seeks to enhance the model’s ability to adapt knowledge and improve language comprehension, particularly in underrepresented languages like Traditional Chinese. Given the relatively scarce availability of Traditional Chinese data compared to the vast corpus of English data used for model training, Kyara addresses this gap by expanding the limited corpus for this language.

To validate Kyara's effectiveness, we conducted full-parameter fine-tuning on `Gemma-2-2b-it`, resulting in the first iteration of the Kyara model. Initial evaluation results, as detailed in the [Benchmark](#benchmark) section, demonstrate that Kyara outperforms the original `Gemma-2-2b-it` across various benchmarks, with notable improvements in Chinese language evaluations.

## Benchmark

### General Benchmark

The following evaluations are based on a zero-shot setting.

| Metric                   | Kyara-2b-it | Gemma-2-2b-it |
|--------------------------|----------|-------------|
| **[TMMLUPlus](https://huggingface.co/datasets/ikala/tmmluplus)** | **41.98** | 36.73 |
| &emsp;- STEM             | **43.73** | 37.84 |
| &emsp;- Humanities       | **38.72** | 33.40 |
| &emsp;- Other            | **40.61** | 36.00 |
| &emsp;- Social-Science   | **44.88** | 39.69 |
| **[MMLU-Redux](https://github.com/yuchenlin/ZeroEval)** | **55.44** | 51.94 |
| **[GSM8K](https://github.com/yuchenlin/ZeroEval)** | **54.21** | 51.63 |
| **[MATH-L5](https://github.com/yuchenlin/ZeroEval)** | **8.88** | 4.3 |
| **[CRUX](https://github.com/yuchenlin/ZeroEval)** | **22.75** | 21.5 |
| **[ZebraLogic](https://github.com/yuchenlin/ZeroEval)** | **5.2** | 4.2 |
| **Chinese-Reason-Bench** | **4.21** | 3.44 |

The aggregation method for the groups in TMMLUPlus is macro average, following the practice in the official implementation.

#### [Open-LLM Leaderboard](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)

As of now, Kyara-2b-it is the leading competitor among all 2b-scale models on the OpenLLM Leaderboard.

<div style="text-align: center">
<img src="https://i.imgur.com/3NKhAja.png" alt="kyara-2b-it-open-llm-leaderboard">
</div>

### Alignment Benchmark

| Metric | Kyara | Gemma-2-2b-it | ChatGPT-3.5-1106 |
|--------------------------|----------|---------------|------------------|
| **[AlpacaEval-LC](https://github.com/tatsu-lab/alpaca_eval)** | **35.35** | 32.37 | 19.30 |
| **[AlpacaEval](https://github.com/tatsu-lab/alpaca_eval)** | **43.34** | 32.94 | 9.20 |
| **[MT-Bench-TW](https://huggingface.co/datasets/MediaTek-Research/TCEval-v2)** | **7.43** | 6.35 | 7.10 |
| **[MT-Bench](https://huggingface.co/spaces/lmsys/mt-bench)** | 8.28 | 8.17 | **8.32** |
| **[Chatbot-Arena-Hard](https://github.com/lm-sys/arena-hard-auto)** | **22.60** | 19.4 | 18.87 |

#### [AlignBench](https://github.com/THUDM/AlignBench)

| Fold | Kyara-2b-it-CHT | Kyara-2b-it-CHS | Gemma-2-2b-it | ChatGPT-3.5-0613 |
|-------|-----------------|-----------------|---------------|------|
| Fundamental Language Ability | 6.72 | 6.54 | 6.42 | **6.92** |
| Advanced Chinese Understanding | 5.78 | 5.24 | 5.03 | **5.91** |
| Open-ended Questions | **8.16** | 7.79 | 7.52 | 6.47 |
| Writing Ability | **7.90** | 7.24 | 7.76 | 7.28 |
| Logical Reasoning | **5.26** | 4.27 | 4.20 | 4.79 |
| Mathematics | **5.99** | 5.44 | 5.05 | 5.38 |
| Task-oriented Role Play | **8.07** | 8.00 | 7.42 | 7.00 |
| Professional Knowledge | **6.97** | 6.86 | 5.79 | 6.81 |
| Reasoning AVG. | **5.62** | 4.85 | 4.63 | 5.00 |
| Chinese Language AVG. | **7.26** | 6.94 | 6.66 | 6.73 |
| Overall | **6.44** | 5.90 | 5.64 | 5.91 |

where the postfixes CHT and CHS represent Traditional Chinese and Simplified Chinese, respectively. To evaluate the performance on Traditional Chinese in AlignBench, we used [OpenCC](https://github.com/BYVoid/OpenCC) with the `s2tw` configuration to convert all questions from Simplified Chinese to Traditional Chinese.

## Usage

Kyara adopts the same architecture as Gemma2, utilizing identical inference and training methods. We have created a [Jupyter Notebook](https://www.kaggle.com/code/zake7749/kyara-a-compact-yet-powerful-chinese-llm) on Kaggle to demonstrate Kyara’s basic functionality. For service-level deployment, we recommend using Sglang or vllm to achieve greater throughput and robustness.

## Method

The following sections provide a brief summary of Kyara's implementation strategy.

### Dataset Summary

We have collected a total of 3.6M conversations, approximately 4.51 billion tokens. The following provides an overview of the language distribution and conversation rounds.

* Language:

<img src="https://i.imgur.com/fhD5kIy.png" alt="language-distribution" width="500"/>

* Conversation Rounds:

<img src="https://i.imgur.com/CWQ2shj.png" alt="conv-round-distribution" width="500"/>

### Dataset Construction

The data construction for Kyara is divided into two parts: English and Chinese. For the English part, we incorporated multiple high-quality open-source datasets, such as [teknium/OpenHermes-2.5](https://huggingface.co/datasets/teknium/OpenHermes-2.5) and [arcee-ai/The-Tome](https://huggingface.co/datasets/arcee-ai/The-Tome), and performed semantic deduplication to drop near-duplicate examples. As for the Chinese part, the construction follows the process outlined below:

#### Base Dataset: Knowledge Injection with Retrieval Augmentation

We developed a knowledge search system using open Chinese knowledge corpora, integrated with [QDrant](https://qdrant.tech/). To construct Supervised Fine-Tuning (SFT) pairs, we followed this process:

1. Sample documents from the knowledge base and generate knowledge-intensive questions that users might ask based on these texts.
2. (Optional) Increase instruction complexity using [Evol-Instruct](https://arxiv.org/pdf/2304.12244).
3. Apply query expansion on the generated instructions to retrieve additional Top K documents and individually assess their relevance:
    * For relevant documents, use an LLM to summarize key information related to the question.
    * For irrelevant documents, ignore them.
4. Let the LLM generate a detailed and comprehensive response according to the original document and K supplementary references.

Besides, we also asked an LLM to generate a user prompt for high-quality documents, and paired the (generated prompt, original document) as an SFT example.

##### Chinese Math Dataset

* Dataset: [zake7749/kyara-chinese-math-sft-s0-30K](https://huggingface.co/datasets/zake7749/kyara-chinese-math-sft-s0-30K)

While the aforementioned strategy can generate a wide range of knowledge-based texts, it primarily falls within the scope of information-seeking tasks and is not very effective in constructing mathematical and reasoning-related content. To address this, we generated 50,000 math problems based on [PersonaHub](https://huggingface.co/datasets/proj-persona/PersonaHub).
We then used `Gemini-1.5-Flash` to filter out data with obvious errors in calculation and reasoning, thereby creating [kyara-chinese-math-sft-s0-30K](https://huggingface.co/datasets/zake7749/kyara-chinese-math-sft-s0-30K).

#### High Quality Dataset: Model Refinement

After completing supervised learning on the base dataset, we fine-tune the LLM again on a high-quality subset, primarily to address the following three issues:

1. Some responses in the Base Dataset were generated by smaller models, which sometimes performed poorly in following instructions.
2. We used various LLMs in the previous step to introduce knowledge diversity and language adaptability. However, we discovered subtle differences in response templates and reasoning approaches between different LLMs, leading to occasional instability in the trained Chat Model. Therefore, we introduced a small, high-quality dataset, using a single strong LLM to generate QA Pairs.
3. The Base Dataset includes some Q&A Pairs composed of generated queries and original documents. While these data are rich in knowledge, they are relatively weak in terms of instruction following.

To balance data diversity and quality, we adopted a strategy similar to [InsTag](https://arxiv.org/abs/2308.07074) to classify the data. We then used [ArmoRM](https://huggingface.co/RLHFlow/ArmoRM-Llama3-8B-v0.1) and an LLM Judge to evaluate data quality, finally extracting the best training data from each category to create the Stage 1 Dataset of about 500K examples, which was used to fine-tune the Kyara-SFT Model again.

### Preference Learning

We introduced Preference Learning in Kyara, which allows the model's responses to better align with human preferences while enhancing programming skills and mathematical reasoning abilities. Kyara’s preference learning strategy utilizes Direct Preference Optimization (DPO), integrating two custom-built Chinese datasets alongside two English datasets.

* [argilla/ultrafeedback-binarized-preferences](https://huggingface.co/datasets/argilla/ultrafeedback-binarized-preferences)
* [xinlai/Math-Step-DPO-10K](https://huggingface.co/datasets/xinlai/Math-Step-DPO-10K)

Here, we summarize the construction strategy of the Chinese datasets.

#### Chinese DPO

##### [SPIN/SPPO](https://github.com/uclaml/SPIN)

We followed the original design, using Kyara-SFT to generate a set of contrastive data for the High Quality Dataset.

##### RLAIF

Dataset: [zake7749/kyara-chinese-preference-dpo-s0-30K](https://huggingface.co/datasets/zake7749/kyara-chinese-preference-dpo-s0-30K)

We extracted Chinese Prompts from `Magpie-Align/Magpie-Qwen2-Pro-200K-Chinese`, `hfl/stem_zh_instruction`, and `FreedomIntelligence/Evol-Instruct-Chinese-GPT4`, and distributed the same prompt to several different LLMs. The competitors include:

* GPT-4o
* GPT-4-0618
* ChatGPT-3.5-0513
* Claude-Sonnet-3.5
* Yi-Large
* Mixtral 8x22B
* Gemini-Flash
* Qwen2-72B-Instruct
* DeepSeek V2

After response generation, we asked the LLMs to judge which one is better, using the following prompt:

```
**[Task]**
Please act as an impartial judge and evaluate the quality of the responses provided by two AI assistants to the user question displayed below. Your evaluation should consider correctness and helpfulness.
1. First, independently solve the user question step-by-step.
2. Then, compare both assistants’ answers with your answer. Identify and correct any mistakes.
3. Do not allow the length of the responses to influence your evaluation.
4. Be as objective as possible.
After providing your explanation, output your final verdict by strictly following this format: "[[A]]" if assistant A is better, "[[B]]" if assistant B is better, and "[[C]]" for a tie or if both A and B are bad. If the answers from A and B are very similar in terms of correctness, helpfulness, and relevance, meaning there is no "obvious" winner, judge it as a tie and output [[C]]. **[User Question]** {prompt} --- **[Assistant A’s Answer]** {answer} --- **[Assistant B’s Answer]** {prediction} --- ``` Finally, all four datasets were combined for DPO training. ## Feature ### Retrieval Augmented Generation (Experimental) Benefiting from Kyara's training method, we incorporated RAG-related content during the SFT phase. You can refer to the following examples to construct task templates: #### Input ``` # Reference Document <reference> <document> Document ID: id_27025b13 * Document Title: Flash_memory * Document Text: Another limitation of flash memory is its limited number of erase cycles (most commercial SLC flash memory guarantees around 100,000 erase cycles for the "0" zone, but due to manufacturing precision, other blocks are not guaranteed, and some might even have factory defects and be unusable). This limitation is partly offset by firmware or file systems that calculate write counts and perform dynamic remapping to distribute writes across different blocks; this technique is called wear leveling. Another method is known as Bad Block Management (BBM), where blocks are dynamically tested during write operations, and failed blocks are discarded. For most mobile devices, these wear management techniques can extend the life of internal flash memory (sometimes even beyond the device's lifespan). Moreover, partial data loss in these devices may be acceptable. However, for high-reliability data storage applications that require heavy data write cycles, flash memory is not recommended. But this limitation does not apply to read-only applications, such as routers and thin clients, which often only write once or a few times throughout their lifespan. ### Read Disturbance </document> <document> Document ID: id_858b1787 * Document Title: Flash_memory * Document Text: * TLC NAND flash memory typically has an endurance of around 1,000 or more cycles (Samsung 840); using multi-layer structures and adopting LDPC correction have extended the endurance. * QLC NAND flash memory can have an endurance ranging from 500 to 1,000 cycles. * SLC floating-gate NOR flash memory typically has a write endurance of 100,000 to 1,000,000 cycles (Numonyx M58BW 100k; Spansion S29CD016J 1,000k). * MLC floating-gate NOR flash memory usually has a write endurance of 100,000 cycles (Numonyx J3 flash). These values are approximate and depend on the technology and positioning of different manufacturers' products. Finer process technologies can improve read/write performance and capacity, but they may also pose greater challenges in terms of write endurance. Specific algorithms and design examples, such as wear leveling and memory over-provisioning, can be used to adjust storage system endurance to meet specific needs. Wear leveling is essential for ensuring the lifespan of flash memory products, and it is supported in products like USB flash drives and SSDs. ## Flash Memory File Systems </document> <document> Document ID: id_df34eb65 * Document Title: Memory_over-provisioning * Document Text: ## Basic SSD Operations Due to the nature of flash memory operations, data cannot be overwritten directly like in hard drives. 
When data is first written to an SSD, the cells are in an erased state, so the data can be written directly, one page at a time (usually 4 to 8 KB in size). The SSD controller, which manages the flash memory and interfaces with the main control system, uses a logical-to-physical mapping system called Logical Block Addressing (LBA), part of the flash translation layer (FTL). When new data needs to replace old data, the SSD controller writes the new data to a new location and updates the logical mapping to point to the new physical location. The original data becomes invalid and must be erased before it can be rewritten. Flash memory has a limited number of program/erase (P/E) cycles. Typically, this is expressed as the maximum number of P/E cycles that flash memory can endure over its lifetime. Single-level cell (SLC) flash memory is generally designed for high performance and long life, typically supporting 50,000 to 100,000 cycles. As of 2011, multi-level cell (MLC) flash memory, designed for low-cost applications, has far fewer cycles, usually only 3,000 to 5,000 cycles. Since 2013, triple-level cell (TLC) flash memory has been introduced, with P/E cycles dropping to around 1,000. The lower the write amplification, the better, as it corresponds to fewer P/E cycles, which extends the lifespan of the SSD. </document> </reference> --- # Task Description Please refer to the content in the <reference> above and answer the user's question. When answering, pay attention to the following points: 1. Appropriately cite the reference document ID to support your argument. For example: If we have Article 1 with ID: dq945509 and Article 2 with ID: 6a178c5, you can cite as follows: "This is a factual statement"【dq945509】 and "This is another factual statement"【6a178c5】. 2. If you find that the reference document lacks relevant information to the user's question, output: "The reference documents lack information relevant to the question." Do not fabricate unrelated answers. # User's Question How does the write limit of memory cards affect their lifespan? What techniques can extend the lifespan of memory cards in scenarios with frequent writes? ``` #### Output ``` The write limit of memory cards, particularly flash memory, significantly affects their lifespan due to the limited number of program/erase (P/E) cycles they can endure. For instance, single-level cell (SLC) flash memory, which is designed for high performance and long life, typically supports 50,000 to 100,000 cycles【id_df34eb65】. In contrast, multi-level cell (MLC) flash memory, which is more cost-effective, has fewer cycles, usually around 3,000 to 5,000 cycles【id_df34eb65】. As of 2013, triple-level cell (TLC) flash memory has even lower endurance, with around 1,000 cycles【id_df34eb65】. To mitigate the impact of these limitations, several techniques can extend the lifespan of memory cards in scenarios with frequent writes: 1. **Wear Leveling**: This technique involves distributing writes across different blocks to prevent any single block from being overworked. Wear leveling is crucial for ensuring the lifespan of flash memory products, and it is supported in products like USB flash drives and SSDs【id_858b1787】. 2. **Bad Block Management (BBM)**: In this method, blocks are dynamically tested during write operations, and failed blocks are discarded. This ensures that only healthy blocks are used for data storage, thereby extending the overall lifespan of the memory card【id_27025b13】. 3. 
**Memory Over-Provisioning**: This involves allocating extra storage space beyond the actual capacity of the flash memory to account for wear and tear. This can help to maintain performance and extend the lifespan of the memory card by allowing for more writes before reaching the physical limits of the memory cells【id_df34eb65】.

These techniques are particularly important for mobile devices and applications that require frequent data writes, such as routers and thin clients, where the partial data loss in these devices may be acceptable【id_27025b13】.
```

Please refer to [Kyara-RAG](https://github.com/zake7749/kyara-rag) for more details.

## Limitation

Like most models, Kyara still suffers from hallucinations and, possibly due to the nature of its training data, is particularly fond of quoting references when answering questions. For example, if you ask the model to explain a line from "Legend of Mortal":

```
該如何理解:「俠者,不受名利所牽,但求俯仰無愧。正邪存乎在我,聲名留問諸君。非我桀傲不馴,乃因唐門薪火,自古如今!」
```

the model responds that this quote comes from "The Return of the Condor Heroes" (《神雕俠侶》):

```
這句話出自《神雕俠侶》中的《神雕手記》,是楊過對自己處世態度的一種表達。讓我們逐句拆解理解:

1. **「俠者,不受名利所牽,但求俯仰無愧。」**
- **俠者**:指俠義之士,講究義氣和正義的英雄。
- **不受名利所牽**:不受外在名利和利益的影響,保持自我的正直。
- **但求俯仰無愧**:只求自己行動的坦蕩和內心的清淨,不求外界的認可或評價。

2. **「正邪存乎在我,聲名留問諸君。」**
- **正邪存乎在我**:善惡的歸屬在於自己,決策和行動的結果在於自己。
- **聲名留問諸君**:聲名是別人會問津的,但自己對正義的追求和內心的正直,才是最值得自豪和珍視的。

3. **「非我桀傲不馴,乃因唐門薪火,自古如今!」**
- **非我桀傲不馴**:並不是我桀驁不馴(驕傲自負),而是因唐門的榮耀和傳承,我必須遵循這些責任。
- **乃因唐門薪火**:是因為唐門(一種武學門派)的薪火相傳,這是一份深重的責任。
- **自古如今**:自古至今,俠義精神和門派的責任感,一直是人們所崇敬的,這是歷史和傳統的延續。

總結起來,這段話表達了楊過對於自己行為的獨特理解和自豪感。他明白自己的行為和價值觀取決於個人的內心和對正義的追求,而非外界的評價和名利。他也承認了自己的責任,作為唐門下一代,必須繼承和發揚門風,這一點是無可替代的。
```

It is recommended to exercise caution when using language models.

## Citation

```bibtex
@software{Yang_Kyara_2024,
author = {Yang, Kai-Chou},
doi = {10.5281/zenodo.14001626},
month = oct,
title = {{Kyara}},
url = {https://github.com/zake7749/Kyara},
version = {1.0.0},
year = {2024}
}
```
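As a practical footnote (our own addition, not part of the original card), the RAG task template shown in the Feature section can be assembled programmatically. The sketch below is illustrative only; the helper name `build_rag_prompt` and the document dict keys are assumptions, not part of the Kyara release:

```python
# Illustrative sketch: assembling the RAG task template from the Feature section.
# Helper name and document fields are assumptions, not part of the Kyara release.
def build_rag_prompt(documents, question):
    doc_blocks = []
    for doc in documents:
        doc_blocks.append(
            "<document>\n"
            f"Document ID: {doc['id']}\n"
            f"* Document Title: {doc['title']}\n"
            f"* Document Text: {doc['text']}\n"
            "</document>"
        )
    reference = "# Reference Document\n<reference>\n" + "\n".join(doc_blocks) + "\n</reference>"
    task = (
        "# Task Description\n"
        "Please refer to the content in the <reference> above and answer the user's question.\n"
    )
    return f"{reference}\n\n---\n\n{task}\n# User's Question\n{question}"

# Example usage with a single retrieved document:
docs = [{"id": "id_27025b13", "title": "Flash_memory", "text": "..."}]
print(build_rag_prompt(docs, "How does the write limit of memory cards affect their lifespan?"))
```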
distilbert/distilbert-base-cased-distilled-squad
distilbert
"2024-05-06T13:46:31Z"
347,183
217
transformers
[ "transformers", "pytorch", "tf", "rust", "safetensors", "openvino", "distilbert", "question-answering", "en", "dataset:squad", "arxiv:1910.01108", "arxiv:1910.09700", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
question-answering
"2022-03-02T23:29:04Z"
---
language: en
license: apache-2.0
datasets:
- squad
metrics:
- squad
model-index:
- name: distilbert-base-cased-distilled-squad
  results:
  - task:
      type: question-answering
      name: Question Answering
    dataset:
      name: squad
      type: squad
      config: plain_text
      split: validation
    metrics:
    - type: exact_match
      value: 79.5998
      name: Exact Match
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZTViZDA2Y2E2NjUyMjNjYjkzNTUzODc5OTk2OTNkYjQxMDRmMDhlYjdmYWJjYWQ2N2RlNzY1YmI3OWY1NmRhOSIsInZlcnNpb24iOjF9.ZJHhboAMwsi3pqU-B-XKRCYP_tzpCRb8pEjGr2Oc-TteZeoWHI8CXcpDxugfC3f7d_oBcKWLzh3CClQxBW1iAQ
    - type: f1
      value: 86.9965
      name: F1
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZWZlMzY2MmE1NDNhOGNjNWRmODg0YjQ2Zjk5MjUzZDQ2MDYxOTBlMTNhNzQ4NTA2NjRmNDU3MGIzMTYwMmUyOSIsInZlcnNpb24iOjF9.z0ZDir87aT7UEmUeDm8Uw0oUdAqzlBz343gwnsQP3YLfGsaHe-jGlhco0Z7ISUd9NokyCiJCRc4NNxJQ83IuCw
---

# DistilBERT base cased distilled SQuAD

## Table of Contents
- [Model Details](#model-details)
- [How To Get Started With the Model](#how-to-get-started-with-the-model)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation)
- [Environmental Impact](#environmental-impact)
- [Technical Specifications](#technical-specifications)
- [Citation Information](#citation-information)
- [Model Card Authors](#model-card-authors)

## Model Details

**Model Description:** The DistilBERT model was proposed in the blog post [Smaller, faster, cheaper, lighter: Introducing DistilBERT, a distilled version of BERT](https://medium.com/huggingface/distilbert-8cf3380435b5), and the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108). DistilBERT is a small, fast, cheap and light Transformer model trained by distilling BERT base. It has 40% fewer parameters than *bert-base-uncased* and runs 60% faster, while preserving over 95% of BERT's performance as measured on the GLUE language understanding benchmark.

This model is a fine-tuned checkpoint of [DistilBERT-base-cased](https://huggingface.co/distilbert-base-cased), fine-tuned using (a second step of) knowledge distillation on [SQuAD v1.1](https://huggingface.co/datasets/squad).

- **Developed by:** Hugging Face
- **Model Type:** Transformer-based language model
- **Language(s):** English
- **License:** Apache 2.0
- **Related Models:** [DistilBERT-base-cased](https://huggingface.co/distilbert-base-cased)
- **Resources for more information:**
  - See [this repository](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) for more about Distil\* (a class of compressed models including this model)
  - See [Sanh et al. (2019)](https://arxiv.org/abs/1910.01108) for more information about knowledge distillation and the training procedure

## How to Get Started with the Model

Use the code below to get started with the model.

```python
>>> from transformers import pipeline
>>> question_answerer = pipeline("question-answering", model='distilbert-base-cased-distilled-squad')

>>> context = r"""
... Extractive Question Answering is the task of extracting an answer from a text given a question. An example of a
... question answering dataset is the SQuAD dataset, which is entirely based on that task. If you would like to fine-tune
... a model on a SQuAD task, you may leverage the examples/pytorch/question-answering/run_squad.py script.
... """

>>> result = question_answerer(question="What is a good example of a question answering dataset?", context=context)
>>> print(
...     f"Answer: '{result['answer']}', score: {round(result['score'], 4)}, start: {result['start']}, end: {result['end']}"
... )

Answer: 'SQuAD dataset', score: 0.5152, start: 147, end: 160
```

Here is how to use this model in PyTorch:

```python
from transformers import DistilBertTokenizer, DistilBertForQuestionAnswering
import torch

tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-cased-distilled-squad')
model = DistilBertForQuestionAnswering.from_pretrained('distilbert-base-cased-distilled-squad')

question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"

inputs = tokenizer(question, text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# pick the most likely start and end positions and decode the answer span
answer_start_index = torch.argmax(outputs.start_logits)
answer_end_index = torch.argmax(outputs.end_logits)
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
tokenizer.decode(predict_answer_tokens)
```

And in TensorFlow:

```python
from transformers import DistilBertTokenizer, TFDistilBertForQuestionAnswering
import tensorflow as tf

tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-cased-distilled-squad")
model = TFDistilBertForQuestionAnswering.from_pretrained("distilbert-base-cased-distilled-squad")

question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"

inputs = tokenizer(question, text, return_tensors="tf")
outputs = model(**inputs)

answer_start_index = int(tf.math.argmax(outputs.start_logits, axis=-1)[0])
answer_end_index = int(tf.math.argmax(outputs.end_logits, axis=-1)[0])
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
tokenizer.decode(predict_answer_tokens)
```

## Uses

This model can be used for question answering.

#### Misuse and Out-of-scope Use

The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.

## Risks, Limitations and Biases

**CONTENT WARNING: Readers should be aware that language generated by this model can be disturbing or offensive to some and can propagate historical and current stereotypes.**

Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:

```python
>>> from transformers import pipeline
>>> question_answerer = pipeline("question-answering", model='distilbert-base-cased-distilled-squad')

>>> context = r"""
... Alice is sitting on the bench. Bob is sitting next to her.
... """

>>> result = question_answerer(question="Who is the CEO?", context=context)
>>> print(
...     f"Answer: '{result['answer']}', score: {round(result['score'], 4)}, start: {result['start']}, end: {result['end']}"
... )

Answer: 'Bob', score: 0.7527, start: 32, end: 35
```

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.

## Training

#### Training Data

The [distilbert-base-cased model](https://huggingface.co/distilbert-base-cased) was trained using the same data as the [distilbert-base-uncased model](https://huggingface.co/distilbert-base-uncased).
The [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) model card describes its training data as:

> DistilBERT pretrained on the same data as BERT, which is [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers).

To learn more about the SQuAD v1.1 dataset, see the [SQuAD v1.1 data card](https://huggingface.co/datasets/squad).

#### Training Procedure

##### Preprocessing

See the [distilbert-base-cased model card](https://huggingface.co/distilbert-base-cased) for further details.

##### Pretraining

See the [distilbert-base-cased model card](https://huggingface.co/distilbert-base-cased) for further details.

## Evaluation

As discussed in the [model repository](https://github.com/huggingface/transformers/blob/main/examples/research_projects/distillation/README.md)

> This model reaches a F1 score of 87.1 on the [SQuAD v1.1] dev set (for comparison, BERT bert-base-cased version reaches a F1 score of 88.7).

## Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). We present the hardware type and hours used based on the [associated paper](https://arxiv.org/pdf/1910.01108.pdf). Note that these details cover only the pretraining of DistilBERT, not the SQuAD fine-tuning.

- **Hardware Type:** 8 16GB V100 GPUs
- **Hours used:** 90 hours
- **Cloud Provider:** Unknown
- **Compute Region:** Unknown
- **Carbon Emitted:** Unknown

## Technical Specifications

See the [associated paper](https://arxiv.org/abs/1910.01108) for details on the modeling architecture, objective, compute infrastructure, and training details.

## Citation Information

```bibtex
@inproceedings{sanh2019distilbert,
  title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
  author={Sanh, Victor and Debut, Lysandre and Chaumond, Julien and Wolf, Thomas},
  booktitle={NeurIPS EMC^2 Workshop},
  year={2019}
}
```

APA:
- Sanh, V., Debut, L., Chaumond, J., & Wolf, T. (2019). DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.

## Model Card Authors

This model card was written by the Hugging Face team.
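As a supplementary note (our own addition, not part of the original card), SQuAD-style Exact Match and F1 scores like those quoted above can be computed with the `evaluate` library; the toy context and gold answer below are illustrative:

```python
# pip install evaluate
# Minimal sketch: scoring a pipeline prediction with the SQuAD metric.
import evaluate
from transformers import pipeline

qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")
squad_metric = evaluate.load("squad")

context = "The Amazon rainforest covers most of the Amazon basin of South America."
pred = qa(question="What does the Amazon rainforest cover?", context=context)

# The SQuAD metric expects id-keyed predictions and gold answers.
predictions = [{"id": "0", "prediction_text": pred["answer"]}]
references = [{"id": "0", "answers": {"text": ["most of the Amazon basin of South America"], "answer_start": [29]}}]
print(squad_metric.compute(predictions=predictions, references=references))  # {'exact_match': ..., 'f1': ...}
```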
fxmarty/really-tiny-falcon-testing
fxmarty
"2023-09-16T12:45:28Z"
346,357
0
transformers
[ "transformers", "pytorch", "falcon", "text-generation", "custom_code", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2023-09-16T08:46:32Z"
--- license: mit --- tiny = <10 MB
blanchefort/rubert-base-cased-sentiment-rusentiment
blanchefort
"2023-04-06T04:06:16Z"
344,097
11
transformers
[ "transformers", "pytorch", "tf", "jax", "safetensors", "bert", "text-classification", "sentiment", "ru", "dataset:RuSentiment", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2022-03-02T23:29:05Z"
---
language:
- ru
tags:
- sentiment
- text-classification
datasets:
- RuSentiment
---

# RuBERT for Sentiment Analysis

This is a [DeepPavlov/rubert-base-cased-conversational](https://huggingface.co/DeepPavlov/rubert-base-cased-conversational) model trained on [RuSentiment](http://text-machine.cs.uml.edu/projects/rusentiment/).

## Labels

- 0: NEUTRAL
- 1: POSITIVE
- 2: NEGATIVE

## How to use

```python
import torch
from transformers import AutoModelForSequenceClassification
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained('blanchefort/rubert-base-cased-sentiment-rusentiment')
model = AutoModelForSequenceClassification.from_pretrained('blanchefort/rubert-base-cased-sentiment-rusentiment', return_dict=True)

@torch.no_grad()
def predict(text):
    inputs = tokenizer(text, max_length=512, padding=True, truncation=True, return_tensors='pt')
    outputs = model(**inputs)
    predicted = torch.nn.functional.softmax(outputs.logits, dim=1)
    predicted = torch.argmax(predicted, dim=1).numpy()
    return predicted
```

## Dataset used for model training

**[RuSentiment](http://text-machine.cs.uml.edu/projects/rusentiment/)**

> A. Rogers, A. Romanov, A. Rumshisky, S. Volkova, M. Gronas, A. Gribov. RuSentiment: An Enriched Sentiment Analysis Dataset for Social Media in Russian. Proceedings of COLING 2018.
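For completeness, here is a short usage sketch of the `predict` helper defined above; the example sentences and the `labels` list are our own additions, with ids mapped as in the Labels section:

```python
# 0 = NEUTRAL, 1 = POSITIVE, 2 = NEGATIVE (see the Labels section above)
labels = ["NEUTRAL", "POSITIVE", "NEGATIVE"]

for text in ["Мне это очень нравится!", "Это просто ужасно."]:
    class_id = predict(text)[0]
    print(f"{text} -> {labels[class_id]}")
```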
Leo97/KoELECTRA-small-v3-modu-ner
Leo97
"2023-04-07T05:57:01Z"
342,684
16
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "electra", "token-classification", "generated_from_trainer", "ko", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2023-03-29T05:41:43Z"
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: KoELECTRA-small-v3-modu-ner
  results: []
language:
- ko
pipeline_tag: token-classification
widget:
- text: "서울역으로 안내해줘."
  example_title: "Example 1"
- text: "에어컨 온도 3도 올려줘."
  example_title: "Example 2"
- text: "아이유 노래 검색해줘."
  example_title: "Example 3"
---

# KoELECTRA-small-v3-modu-ner

This model is a fine-tuned version of [monologg/koelectra-small-v3-discriminator](https://huggingface.co/monologg/koelectra-small-v3-discriminator) on the Modu Corpus named entity dataset (see "Training and evaluation data" below).

It achieves the following results on the evaluation set:
- Loss: 0.1431
- Precision: 0.8232
- Recall: 0.8449
- F1: 0.8339
- Accuracy: 0.9628

## Model description

Tagging scheme: BIO
- B- (begin): the token starts an entity
- I- (inside): the token is inside an entity
- O (outside): the token is not part of any entity

The 15 tags follow the major-category standard of the Telecommunications Technology Association (TTA):

| Category | Tag | Definition |
|:------------:|:---:|:-----------|
| ARTIFACTS | AF | Man-made artifacts, including cultural assets, buildings, musical instruments, roads, weapons, means of transport, titles of works, and manufactured products |
| ANIMAL | AM | Animals other than humans |
| CIVILIZATION | CV | Civilization/culture |
| DATE | DT | Periods and seasons, times/eras |
| EVENT | EV | Names of specific events, incidents, and ceremonies |
| STUDY_FIELD | FD | Academic disciplines, schools of thought, and artistic schools |
| LOCATION | LC | Regions/places as well as topographic/geographic names |
| MATERIAL | MT | Elements and metals, rocks/gems, chemical substances |
| ORGANIZATION | OG | Names of institutions and organizations |
| PERSON | PS | Personal names and aliases (including names of person-like figures) |
| PLANT | PT | Flowers/trees, land plants, seaweeds, mushrooms, mosses |
| QUANTITY | QT | Quantities/amounts, order/sequence, expressions made of numerals |
| TIME | TI | Clock times and time ranges |
| TERM | TM | Entities other than the fine-grained entities defined under the other entity types |
| THEORY | TR | Specific theories, laws, principles, etc. |

## Intended uses & limitations

### How to use

You can use this model with the Transformers *pipeline* for NER.

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline

tokenizer = AutoTokenizer.from_pretrained("Leo97/KoELECTRA-small-v3-modu-ner")
model = AutoModelForTokenClassification.from_pretrained("Leo97/KoELECTRA-small-v3-modu-ner")
ner = pipeline("ner", model=model, tokenizer=tokenizer)
example = "서울역으로 안내해줘."
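# Note (added for clarity; not in the original card): the plain "ner" pipeline
# returns one prediction per sub-token, tagged with the B-/I- prefixes described
# in the Model description section. To merge sub-tokens into whole entities,
# the pipeline can instead be created with an aggregation strategy, e.g.:
#   ner = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")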
ner_results = ner(example)
print(ner_results)
```

## Training and evaluation data

Training dataset for the named entity recognition (NER) model:
- Ministry of Culture, Sports and Tourism > National Institute of Korean Language > Modu Corpus > Named Entity Analysis Corpus 2021
- https://corpus.korean.go.kr/request/reausetMain.do

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 15151
- num_epochs: 20
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 3788 | 0.3978 | 0.5986 | 0.5471 | 0.5717 | 0.9087 |
| No log | 2.0 | 7576 | 0.2319 | 0.6986 | 0.6953 | 0.6969 | 0.9345 |
| No log | 3.0 | 11364 | 0.1838 | 0.7363 | 0.7612 | 0.7486 | 0.9444 |
| No log | 4.0 | 15152 | 0.1610 | 0.7762 | 0.7745 | 0.7754 | 0.9509 |
| No log | 5.0 | 18940 | 0.1475 | 0.7862 | 0.8011 | 0.7936 | 0.9545 |
| No log | 6.0 | 22728 | 0.1417 | 0.7857 | 0.8181 | 0.8016 | 0.9563 |
| No log | 7.0 | 26516 | 0.1366 | 0.8022 | 0.8196 | 0.8108 | 0.9584 |
| No log | 8.0 | 30304 | 0.1346 | 0.8093 | 0.8236 | 0.8164 | 0.9596 |
| No log | 9.0 | 34092 | 0.1328 | 0.8085 | 0.8299 | 0.8190 | 0.9602 |
| No log | 10.0 | 37880 | 0.1332 | 0.8110 | 0.8368 | 0.8237 | 0.9608 |
| No log | 11.0 | 41668 | 0.1323 | 0.8157 | 0.8347 | 0.8251 | 0.9612 |
| No log | 12.0 | 45456 | 0.1353 | 0.8118 | 0.8402 | 0.8258 | 0.9611 |
| No log | 13.0 | 49244 | 0.1370 | 0.8152 | 0.8416 | 0.8282 | 0.9616 |
| No log | 14.0 | 53032 | 0.1368 | 0.8164 | 0.8415 | 0.8287 | 0.9616 |
| No log | 15.0 | 56820 | 0.1378 | 0.8187 | 0.8438 | 0.8310 | 0.9621 |
| No log | 16.0 | 60608 | 0.1389 | 0.8217 | 0.8438 | 0.8326 | 0.9626 |
| No log | 17.0 | 64396 | 0.1380 | 0.8266 | 0.8426 | 0.8345 | 0.9631 |
| No log | 18.0 | 68184 | 0.1428 | 0.8216 | 0.8445 | 0.8329 | 0.9625 |
| No log | 19.0 | 71972 | 0.1431 | 0.8232 | 0.8455 | 0.8342 | 0.9628 |
| 0.1712 | 20.0 | 75760 | 0.1431 | 0.8232 | 0.8449 | 0.8339 | 0.9628 |

### Framework versions

- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
stepfun-ai/GOT-OCR2_0
stepfun-ai
"2024-09-18T02:02:32Z"
342,549
1,180
transformers
[ "transformers", "safetensors", "GOT", "feature-extraction", "got", "vision-language", "ocr2.0", "custom_code", "image-text-to-text", "multilingual", "arxiv:2409.01704", "arxiv:2405.14295", "arxiv:2312.06109", "license:apache-2.0", "region:us" ]
image-text-to-text
"2024-09-12T16:02:28Z"
--- pipeline_tag: image-text-to-text library_name: transformers language: - multilingual tags: - got - vision-language - ocr2.0 - custom_code license: apache-2.0 --- <h1>General OCR Theory: Towards OCR-2.0 via a Unified End-to-end Model </h1> [🔋Online Demo](https://huggingface.co/spaces/ucaslcl/GOT_online) | [🌟GitHub](https://github.com/Ucas-HaoranWei/GOT-OCR2.0/) | [📜Paper](https://arxiv.org/abs/2409.01704)</a> [Haoran Wei*](https://scholar.google.com/citations?user=J4naK0MAAAAJ&hl=en), Chenglong Liu*, Jinyue Chen, Jia Wang, Lingyu Kong, Yanming Xu, [Zheng Ge](https://joker316701882.github.io/), Liang Zhao, [Jianjian Sun](https://scholar.google.com/citations?user=MVZrGkYAAAAJ&hl=en), [Yuang Peng](https://scholar.google.com.hk/citations?user=J0ko04IAAAAJ&hl=zh-CN&oi=ao), Chunrui Han, [Xiangyu Zhang](https://scholar.google.com/citations?user=yuB-cfoAAAAJ&hl=en) ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/6653eee7a2d7a882a805ab95/QCEFY-M_YG3Bp5fn1GQ8X.jpeg) ## Usage Inference using Huggingface transformers on NVIDIA GPUs. Requirements tested on python 3.10: ``` torch==2.0.1 torchvision==0.15.2 transformers==4.37.2 tiktoken==0.6.0 verovio==4.3.1 accelerate==0.28.0 ``` ```python from transformers import AutoModel, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('ucaslcl/GOT-OCR2_0', trust_remote_code=True) model = AutoModel.from_pretrained('ucaslcl/GOT-OCR2_0', trust_remote_code=True, low_cpu_mem_usage=True, device_map='cuda', use_safetensors=True, pad_token_id=tokenizer.eos_token_id) model = model.eval().cuda() # input your test image image_file = 'xxx.jpg' # plain texts OCR res = model.chat(tokenizer, image_file, ocr_type='ocr') # format texts OCR: # res = model.chat(tokenizer, image_file, ocr_type='format') # fine-grained OCR: # res = model.chat(tokenizer, image_file, ocr_type='ocr', ocr_box='') # res = model.chat(tokenizer, image_file, ocr_type='format', ocr_box='') # res = model.chat(tokenizer, image_file, ocr_type='ocr', ocr_color='') # res = model.chat(tokenizer, image_file, ocr_type='format', ocr_color='') # multi-crop OCR: # res = model.chat_crop(tokenizer, image_file, ocr_type='ocr') # res = model.chat_crop(tokenizer, image_file, ocr_type='format') # render the formatted OCR results: # res = model.chat(tokenizer, image_file, ocr_type='format', render=True, save_render_file = './demo.html') print(res) ``` More details about 'ocr_type', 'ocr_box', 'ocr_color', and 'render' can be found at our GitHub. Our training codes are available at our [GitHub](https://github.com/Ucas-HaoranWei/GOT-OCR2.0/). ## More Multimodal Projects 👏 Welcome to explore more multimodal projects of our team: [Vary](https://github.com/Ucas-HaoranWei/Vary) | [Fox](https://github.com/ucaslcl/Fox) | [OneChart](https://github.com/LingyvKong/OneChart) ## Citation If you find our work helpful, please consider citing our papers 📝 and liking this project ❤️! 
```bib @article{wei2024general, title={General OCR Theory: Towards OCR-2.0 via a Unified End-to-end Model}, author={Wei, Haoran and Liu, Chenglong and Chen, Jinyue and Wang, Jia and Kong, Lingyu and Xu, Yanming and Ge, Zheng and Zhao, Liang and Sun, Jianjian and Peng, Yuang and others}, journal={arXiv preprint arXiv:2409.01704}, year={2024} } @article{liu2024focus, title={Focus Anywhere for Fine-grained Multi-page Document Understanding}, author={Liu, Chenglong and Wei, Haoran and Chen, Jinyue and Kong, Lingyu and Ge, Zheng and Zhu, Zining and Zhao, Liang and Sun, Jianjian and Han, Chunrui and Zhang, Xiangyu}, journal={arXiv preprint arXiv:2405.14295}, year={2024} } @article{wei2023vary, title={Vary: Scaling up the Vision Vocabulary for Large Vision-Language Models}, author={Wei, Haoran and Kong, Lingyu and Chen, Jinyue and Zhao, Liang and Ge, Zheng and Yang, Jinrong and Sun, Jianjian and Han, Chunrui and Zhang, Xiangyu}, journal={arXiv preprint arXiv:2312.06109}, year={2023} } ```
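As a usage footnote (our own addition, not part of the original card), the documented `model.chat` call can be looped to OCR a whole folder of images; the folder path below is a hypothetical example, and `model` and `tokenizer` are the objects created in the Usage section above:

```python
import os

# Batch OCR over a folder, reusing the `model` and `tokenizer` built above.
# "./images" is a hypothetical input folder.
image_dir = "./images"
results = {}
for name in sorted(os.listdir(image_dir)):
    if name.lower().endswith((".jpg", ".jpeg", ".png")):
        path = os.path.join(image_dir, name)
        results[name] = model.chat(tokenizer, path, ocr_type="ocr")

for name, text in results.items():
    print(f"=== {name} ===")
    print(text)
```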
EleutherAI/gpt-neo-125m
EleutherAI
"2024-01-31T20:29:39Z"
342,003
182
transformers
[ "transformers", "pytorch", "jax", "rust", "safetensors", "gpt_neo", "text-generation", "text generation", "causal-lm", "en", "dataset:EleutherAI/pile", "arxiv:2101.00027", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2022-03-02T23:29:04Z"
---
language:
- en
tags:
- text generation
- pytorch
- causal-lm
license: mit
datasets:
- EleutherAI/pile
---

# GPT-Neo 125M

## Model Description

GPT-Neo 125M is a transformer model designed using EleutherAI's replication of the GPT-3 architecture. GPT-Neo refers to the class of models, while 125M represents the number of parameters of this particular pre-trained model.

## Training data

GPT-Neo 125M was trained on the Pile, a large-scale curated dataset created by EleutherAI for the purpose of training this model.

## Training procedure

This model was trained on the Pile for 300 billion tokens over 572,300 steps. It was trained as an autoregressive language model, using cross-entropy loss.

## Intended Use and Limitations

Through this pretraining objective, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. However, the model is best at what it was pretrained for, which is generating text from a prompt.

### How to use

You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:

```py
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model='EleutherAI/gpt-neo-125M')
>>> generator("EleutherAI has", do_sample=True, min_length=20)

[{'generated_text': 'EleutherAI has made a commitment to create new software packages for each of its major clients and has'}]
```

### Limitations and Biases

GPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work.

GPT-Neo was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending on your use case, GPT-Neo may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile.

As with all language models, it is hard to predict in advance how GPT-Neo will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.

## Eval results

TBD

### Down-Stream Applications

TBD

### BibTeX entry and citation info

To cite this model, use

```bibtex
@software{gpt-neo,
  author       = {Black, Sid and Leo, Gao and Wang, Phil and Leahy, Connor and Biderman, Stella},
  title        = {{GPT-Neo: Large Scale Autoregressive Language Modeling with Mesh-Tensorflow}},
  month        = mar,
  year         = 2021,
  note         = {{If you use this software, please cite it using these metadata.}},
  publisher    = {Zenodo},
  version      = {1.0},
  doi          = {10.5281/zenodo.5297715},
  url          = {https://doi.org/10.5281/zenodo.5297715}
}

@article{gao2020pile,
  title={The Pile: An 800GB Dataset of Diverse Text for Language Modeling},
  author={Gao, Leo and Biderman, Stella and Black, Sid and Golding, Laurence and Hoppe, Travis and Foster, Charles and Phang, Jason and He, Horace and Thite, Anish and Nabeshima, Noa and others},
  journal={arXiv preprint arXiv:2101.00027},
  year={2020}
}
```

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_EleutherAI__gpt-neo-125m)

| Metric | Value |
|-----------------------|-------|
| Avg. | 25.79 |
| ARC (25-shot) | 22.95 |
| HellaSwag (10-shot) | 30.26 |
| MMLU (5-shot) | 25.97 |
| TruthfulQA (0-shot) | 45.58 |
| Winogrande (5-shot) | 51.78 |
| GSM8K (5-shot) | 0.3 |
| DROP (3-shot) | 3.69 |
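As a supplementary example (our own addition, not part of the original card), the model can also be driven without the pipeline wrapper via `AutoModelForCausalLM`; the sampling parameters below are illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125m")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-125m")

# Encode a prompt and sample a continuation; pad_token_id silences the
# "no pad token" warning for this tokenizer.
inputs = tokenizer("EleutherAI has", return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,
    max_new_tokens=30,
    top_p=0.9,
    temperature=0.8,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```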
deepset/bert-large-uncased-whole-word-masking-squad2
deepset
"2024-09-24T15:54:10Z"
341,061
29
transformers
[ "transformers", "pytorch", "tf", "jax", "safetensors", "bert", "question-answering", "en", "dataset:squad_v2", "license:cc-by-4.0", "model-index", "endpoints_compatible", "region:us" ]
question-answering
"2022-03-02T23:29:05Z"
---
language: en
license: cc-by-4.0
datasets:
- squad_v2
model-index:
- name: deepset/bert-large-uncased-whole-word-masking-squad2
  results:
  - task:
      type: question-answering
      name: Question Answering
    dataset:
      name: squad_v2
      type: squad_v2
      config: squad_v2
      split: validation
    metrics:
    - type: exact_match
      value: 80.8846
      name: Exact Match
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiY2E5ZGNkY2ExZWViZGEwNWE3OGRmMWM2ZmE4ZDU4ZDQ1OGM3ZWE0NTVmZjFmYmZjZmJmNjJmYTc3NTM3OTk3OSIsInZlcnNpb24iOjF9.aSblF4ywh1fnHHrN6UGL392R5KLaH3FCKQlpiXo_EdQ4XXEAENUCjYm9HWDiFsgfSENL35GkbSyz_GAhnefsAQ
    - type: f1
      value: 83.8765
      name: F1
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNGFlNmEzMTk2NjRkNTI3ZTk3ZTU1NWNlYzIyN2E0ZDFlNDA2ZjYwZWJlNThkMmRmMmE0YzcwYjIyZDM5NmRiMCIsInZlcnNpb24iOjF9.-rc2_Bsp_B26-o12MFYuAU0Ad2Hg9PDx7Preuk27WlhYJDeKeEr32CW8LLANQABR3Mhw2x8uTYkEUrSDMxxLBw
  - task:
      type: question-answering
      name: Question Answering
    dataset:
      name: squad
      type: squad
      config: plain_text
      split: validation
    metrics:
    - type: exact_match
      value: 85.904
      name: Exact Match
    - type: f1
      value: 92.586
      name: F1
  - task:
      type: question-answering
      name: Question Answering
    dataset:
      name: adversarial_qa
      type: adversarial_qa
      config: adversarialQA
      split: validation
    metrics:
    - type: exact_match
      value: 28.233
      name: Exact Match
    - type: f1
      value: 41.170
      name: F1
  - task:
      type: question-answering
      name: Question Answering
    dataset:
      name: squad_adversarial
      type: squad_adversarial
      config: AddOneSent
      split: validation
    metrics:
    - type: exact_match
      value: 78.064
      name: Exact Match
    - type: f1
      value: 83.591
      name: F1
  - task:
      type: question-answering
      name: Question Answering
    dataset:
      name: squadshifts amazon
      type: squadshifts
      config: amazon
      split: test
    metrics:
    - type: exact_match
      value: 65.615
      name: Exact Match
    - type: f1
      value: 80.733
      name: F1
  - task:
      type: question-answering
      name: Question Answering
    dataset:
      name: squadshifts new_wiki
      type: squadshifts
      config: new_wiki
      split: test
    metrics:
    - type: exact_match
      value: 81.570
      name: Exact Match
    - type: f1
      value: 91.199
      name: F1
  - task:
      type: question-answering
      name: Question Answering
    dataset:
      name: squadshifts nyt
      type: squadshifts
      config: nyt
      split: test
    metrics:
    - type: exact_match
      value: 83.279
      name: Exact Match
    - type: f1
      value: 91.090
      name: F1
  - task:
      type: question-answering
      name: Question Answering
    dataset:
      name: squadshifts reddit
      type: squadshifts
      config: reddit
      split: test
    metrics:
    - type: exact_match
      value: 69.305
      name: Exact Match
    - type: f1
      value: 82.405
      name: F1
---

# bert-large-uncased-whole-word-masking-squad2 for Extractive QA

This is a bert-large model, fine-tuned using the SQuAD2.0 dataset for the task of question answering.

## Overview

**Language model:** bert-large
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** SQuAD 2.0
**Eval data:** SQuAD 2.0
**Code:** See [an example extractive QA pipeline built with Haystack](https://haystack.deepset.ai/tutorials/34_extractive_qa_pipeline)

## Usage

### In Haystack

Haystack is an AI orchestration framework to build customizable, production-ready LLM applications. You can use this model in Haystack to do extractive question answering on documents.
To load and run the model with [Haystack](https://github.com/deepset-ai/haystack/): ```python # After running pip install haystack-ai "transformers[torch,sentencepiece]" from haystack import Document from haystack.components.readers import ExtractiveReader docs = [ Document(content="Python is a popular programming language"), Document(content="python ist eine beliebte Programmiersprache"), ] reader = ExtractiveReader(model="deepset/bert-large-uncased-whole-word-masking-squad2") reader.warm_up() question = "What is a popular programming language?" result = reader.run(query=question, documents=docs) # {'answers': [ExtractedAnswer(query='What is a popular programming language?', score=0.5740374326705933, data='python', document=Document(id=..., content: '...'), context=None, document_offset=ExtractedAnswer.Span(start=0, end=6),...)]} ``` For a complete example with an extractive question answering pipeline that scales over many documents, check out the [corresponding Haystack tutorial](https://haystack.deepset.ai/tutorials/34_extractive_qa_pipeline). ### In Transformers ```python from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline model_name = "deepset/bert-large-uncased-whole-word-masking-squad2" # a) Get predictions nlp = pipeline('question-answering', model=model_name, tokenizer=model_name) QA_input = { 'question': 'Why is model conversion important?', 'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.' } res = nlp(QA_input) # b) Load model & tokenizer model = AutoModelForQuestionAnswering.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) ``` ## About us <div class="grid lg:grid-cols-2 gap-x-4 gap-y-3"> <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center"> <img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/deepset-logo-colored.png" class="w-40"/> </div> <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center"> <img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/haystack-logo-colored.png" class="w-40"/> </div> </div> [deepset](http://deepset.ai/) is the company behind the production-ready open-source AI framework [Haystack](https://haystack.deepset.ai/). Some of our other work: - [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")](https://huggingface.co/deepset/tinyroberta-squad2) - [German BERT](https://deepset.ai/german-bert), [GermanQuAD and GermanDPR](https://deepset.ai/germanquad), [German embedding model](https://huggingface.co/mixedbread-ai/deepset-mxbai-embed-de-large-v1) - [deepset Cloud](https://www.deepset.ai/deepset-cloud-product), [deepset Studio](https://www.deepset.ai/deepset-studio) ## Get in touch and join the Haystack community <p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://docs.haystack.deepset.ai">Documentation</a></strong>. 
We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community">Discord community open to everyone!</a></strong></p> [Twitter](https://twitter.com/Haystack_AI) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://haystack.deepset.ai/) | [YouTube](https://www.youtube.com/@deepset_ai) By the way: [we're hiring!](http://www.deepset.ai/jobs)
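One practical note worth adding (ours, not from the original card): since this model was trained on SQuAD 2.0, it can also abstain on unanswerable questions. In the Transformers pipeline this is controlled by the `handle_impossible_answer` flag:

```python
from transformers import pipeline

nlp = pipeline("question-answering", model="deepset/bert-large-uncased-whole-word-masking-squad2")

# The context does not answer the question, so the model may return an
# empty answer (start == end == 0), signalling "no answer".
res = nlp(
    question="What is the capital of France?",
    context="Haystack is an AI orchestration framework for building LLM applications.",
    handle_impossible_answer=True,
)
print(res)
```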
microsoft/unixcoder-base
microsoft
"2024-07-31T05:20:05Z"
339,278
46
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "en", "arxiv:2203.03850", "license:apache-2.0", "endpoints_compatible", "region:us" ]
feature-extraction
"2022-03-23T05:47:38Z"
---
language:
- en
license: apache-2.0
---

# Model Card for UniXcoder-base

# Model Details

## Model Description

UniXcoder is a unified cross-modal pre-trained model that leverages multimodal data (i.e., code comments and ASTs) to pretrain code representations.

- **Developed by:** Microsoft Team
- **Shared by:** Hugging Face
- **Model type:** Feature Engineering
- **Language(s) (NLP):** en
- **License:** Apache-2.0
- **Related Models:**
  - **Parent Model:** RoBERTa
- **Resources for more information:**
  - [Associated Paper](https://arxiv.org/abs/2203.03850)

# Uses

## 1. Dependency

- pip install torch
- pip install transformers

## 2. Quick Tour

We implement a class to use UniXcoder and you can follow the code to build UniXcoder. You can download the class by

```shell
wget https://raw.githubusercontent.com/microsoft/CodeBERT/master/UniXcoder/unixcoder.py
```

```python
import torch
from unixcoder import UniXcoder

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = UniXcoder("microsoft/unixcoder-base")
model.to(device)
```

In the following, we give zero-shot examples for several tasks under different modes, including **code search (encoder-only)**, **code completion (decoder-only)**, **function name prediction (encoder-decoder)**, **API recommendation (encoder-decoder)**, and **code summarization (encoder-decoder)**.

## 3. Encoder-only Mode

For encoder-only mode, we give an example of **code search**.

### 1) Code and NL Embeddings

Here, we give an example of obtaining code fragment embeddings from UniXcoder.

```python
# Encode maximum function
func = "def f(a,b): if a>b: return a else return b"
tokens_ids = model.tokenize([func],max_length=512,mode="<encoder-only>")
source_ids = torch.tensor(tokens_ids).to(device)
tokens_embeddings,max_func_embedding = model(source_ids)

# Encode minimum function
func = "def f(a,b): if a<b: return a else return b"
tokens_ids = model.tokenize([func],max_length=512,mode="<encoder-only>")
source_ids = torch.tensor(tokens_ids).to(device)
tokens_embeddings,min_func_embedding = model(source_ids)

# Encode NL
nl = "return maximum value"
tokens_ids = model.tokenize([nl],max_length=512,mode="<encoder-only>")
source_ids = torch.tensor(tokens_ids).to(device)
tokens_embeddings,nl_embedding = model(source_ids)

print(max_func_embedding.shape)
print(max_func_embedding)
```

```python
torch.Size([1, 768])
tensor([[ 8.6533e-01, -1.9796e+00, -8.6849e-01, 4.2652e-01, -5.3696e-01,
         -1.5521e-01, 5.3770e-01, 3.4199e-01, 3.6305e-01, -3.9391e-01,
         -1.1816e+00, 2.6010e+00, -7.7133e-01, 1.8441e+00, 2.3645e+00,
         ...,
         -2.9188e+00, 1.2555e+00, -1.9953e+00, -1.9795e+00, 1.7279e+00,
          6.4590e-01, -5.2769e-02, 2.4965e-01, 2.3962e-02, 5.9996e-02,
          2.5659e+00, 3.6533e+00, 2.0301e+00]], device='cuda:0',
       grad_fn=<DivBackward0>)
```

### 2) Similarity between code and NL

Now, we calculate the cosine similarity between the NL query and the two functions. Although the two functions differ only by an operator (```<``` vs. ```>```), UniXcoder can distinguish them.
```python
# Normalize embeddings
norm_max_func_embedding = torch.nn.functional.normalize(max_func_embedding, p=2, dim=1)
norm_min_func_embedding = torch.nn.functional.normalize(min_func_embedding, p=2, dim=1)
norm_nl_embedding = torch.nn.functional.normalize(nl_embedding, p=2, dim=1)

max_func_nl_similarity = torch.einsum("ac,bc->ab",norm_max_func_embedding,norm_nl_embedding)
min_func_nl_similarity = torch.einsum("ac,bc->ab",norm_min_func_embedding,norm_nl_embedding)

print(max_func_nl_similarity)
print(min_func_nl_similarity)
```

```python
tensor([[0.3002]], device='cuda:0', grad_fn=<ViewBackward>)
tensor([[0.1881]], device='cuda:0', grad_fn=<ViewBackward>)
```

## 4. Decoder-only Mode

For decoder-only mode, we give an example of **code completion**.

```python
context = """
def f(data,file_path):
    # write json data into file_path in python language
"""
tokens_ids = model.tokenize([context],max_length=512,mode="<decoder-only>")
source_ids = torch.tensor(tokens_ids).to(device)
prediction_ids = model.generate(source_ids, decoder_only=True, beam_size=3, max_length=128)
predictions = model.decode(prediction_ids)
print(context+predictions[0][0])
```

```python
def f(data,file_path):
    # write json data into file_path in python language
    data = json.dumps(data)
    with open(file_path, 'w') as f:
        f.write(data)
```

## 5. Encoder-Decoder Mode

For encoder-decoder mode, we give three examples: **function name prediction**, **API recommendation**, and **code summarization**.

### 1) Function Name Prediction

```python
context = """
def <mask0>(data,file_path):
    data = json.dumps(data)
    with open(file_path, 'w') as f:
        f.write(data)
"""
tokens_ids = model.tokenize([context],max_length=512,mode="<encoder-decoder>")
source_ids = torch.tensor(tokens_ids).to(device)
prediction_ids = model.generate(source_ids, decoder_only=False, beam_size=3, max_length=128)
predictions = model.decode(prediction_ids)
print([x.replace("<mask0>","").strip() for x in predictions[0]])
```

```python
['write_json', 'write_file', 'to_json']
```

### 2) API Recommendation

```python
context = """
def write_json(data,file_path):
    data = <mask0>(data)
    with open(file_path, 'w') as f:
        f.write(data)
"""
tokens_ids = model.tokenize([context],max_length=512,mode="<encoder-decoder>")
source_ids = torch.tensor(tokens_ids).to(device)
prediction_ids = model.generate(source_ids, decoder_only=False, beam_size=3, max_length=128)
predictions = model.decode(prediction_ids)
print([x.replace("<mask0>","").strip() for x in predictions[0]])
```

```python
['json.dumps', 'json.loads', 'str']
```

### 3) Code Summarization

```python
context = """
# <mask0>
def write_json(data,file_path):
    data = json.dumps(data)
    with open(file_path, 'w') as f:
        f.write(data)
"""
tokens_ids = model.tokenize([context],max_length=512,mode="<encoder-decoder>")
source_ids = torch.tensor(tokens_ids).to(device)
prediction_ids = model.generate(source_ids, decoder_only=False, beam_size=3, max_length=128)
predictions = model.decode(prediction_ids)
print([x.replace("<mask0>","").strip() for x in predictions[0]])
```

```python
['Write JSON to file', 'Write json to file', 'Write a json file']
```

# Reference

If you use this code or UniXcoder, please consider citing us.

<pre><code>@article{guo2022unixcoder,
  title={UniXcoder: Unified Cross-Modal Pre-training for Code Representation},
  author={Guo, Daya and Lu, Shuai and Duan, Nan and Wang, Yanlin and Zhou, Ming and Yin, Jian},
  journal={arXiv preprint arXiv:2203.03850},
  year={2022}
}</code></pre>
jinaai/jina-embeddings-v2-small-en
jinaai
"2024-05-15T14:11:20Z"
337,561
128
sentence-transformers
[ "sentence-transformers", "pytorch", "coreml", "onnx", "safetensors", "bert", "feature-extraction", "sentence-similarity", "mteb", "custom_code", "en", "dataset:jinaai/negation-dataset", "arxiv:2108.12409", "arxiv:2310.19923", "license:apache-2.0", "model-index", "autotrain_compatible", "text-embeddings-inference", "region:us" ]
feature-extraction
"2023-09-27T20:17:27Z"
--- tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb datasets: - jinaai/negation-dataset language: en inference: false license: apache-2.0 model-index: - name: jina-embedding-s-en-v2 results: - task: type: Classification dataset: type: mteb/amazon_counterfactual name: MTEB AmazonCounterfactualClassification (en) config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 71.35820895522387 - type: ap value: 33.99931933598115 - type: f1 value: 65.3853685535555 - task: type: Classification dataset: type: mteb/amazon_polarity name: MTEB AmazonPolarityClassification config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 82.90140000000001 - type: ap value: 78.01434597815617 - type: f1 value: 82.83357802722676 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (en) config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 40.88999999999999 - type: f1 value: 39.209432767163456 - task: type: Retrieval dataset: type: arguana name: MTEB ArguAna config: default split: test revision: None metrics: - type: map_at_1 value: 23.257 - type: map_at_10 value: 37.946000000000005 - type: map_at_100 value: 39.17 - type: map_at_1000 value: 39.181 - type: map_at_3 value: 32.99 - type: map_at_5 value: 35.467999999999996 - type: mrr_at_1 value: 23.541999999999998 - type: mrr_at_10 value: 38.057 - type: mrr_at_100 value: 39.289 - type: mrr_at_1000 value: 39.299 - type: mrr_at_3 value: 33.096 - type: mrr_at_5 value: 35.628 - type: ndcg_at_1 value: 23.257 - type: ndcg_at_10 value: 46.729 - type: ndcg_at_100 value: 51.900999999999996 - type: ndcg_at_1000 value: 52.16 - type: ndcg_at_3 value: 36.323 - type: ndcg_at_5 value: 40.766999999999996 - type: precision_at_1 value: 23.257 - type: precision_at_10 value: 7.510999999999999 - type: precision_at_100 value: 0.976 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 15.339 - type: precision_at_5 value: 11.350999999999999 - type: recall_at_1 value: 23.257 - type: recall_at_10 value: 75.107 - type: recall_at_100 value: 97.58200000000001 - type: recall_at_1000 value: 99.57300000000001 - type: recall_at_3 value: 46.017 - type: recall_at_5 value: 56.757000000000005 - task: type: Clustering dataset: type: mteb/arxiv-clustering-p2p name: MTEB ArxivClusteringP2P config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 44.02420878391967 - task: type: Clustering dataset: type: mteb/arxiv-clustering-s2s name: MTEB ArxivClusteringS2S config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 35.16136856000258 - task: type: Reranking dataset: type: mteb/askubuntudupquestions-reranking name: MTEB AskUbuntuDupQuestions config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 59.61809790513646 - type: mrr value: 73.07215406938397 - task: type: STS dataset: type: mteb/biosses-sts name: MTEB BIOSSES config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 82.0167350090749 - type: cos_sim_spearman value: 80.51569002630401 - type: euclidean_pearson value: 81.46820525099726 - type: euclidean_spearman value: 80.51569002630401 - type: manhattan_pearson value: 81.35596555056757 - type: manhattan_spearman value: 
80.12592210903303 - task: type: Classification dataset: type: mteb/banking77 name: MTEB Banking77Classification config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 78.25 - type: f1 value: 77.34950913540605 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-p2p name: MTEB BiorxivClusteringP2P config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 35.57238596005698 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-s2s name: MTEB BiorxivClusteringS2S config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 29.066444306196683 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackAndroidRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 31.891000000000002 - type: map_at_10 value: 42.772 - type: map_at_100 value: 44.108999999999995 - type: map_at_1000 value: 44.236 - type: map_at_3 value: 39.289 - type: map_at_5 value: 41.113 - type: mrr_at_1 value: 39.342 - type: mrr_at_10 value: 48.852000000000004 - type: mrr_at_100 value: 49.534 - type: mrr_at_1000 value: 49.582 - type: mrr_at_3 value: 46.089999999999996 - type: mrr_at_5 value: 47.685 - type: ndcg_at_1 value: 39.342 - type: ndcg_at_10 value: 48.988 - type: ndcg_at_100 value: 53.854 - type: ndcg_at_1000 value: 55.955 - type: ndcg_at_3 value: 43.877 - type: ndcg_at_5 value: 46.027 - type: precision_at_1 value: 39.342 - type: precision_at_10 value: 9.285 - type: precision_at_100 value: 1.488 - type: precision_at_1000 value: 0.194 - type: precision_at_3 value: 20.696 - type: precision_at_5 value: 14.878 - type: recall_at_1 value: 31.891000000000002 - type: recall_at_10 value: 60.608 - type: recall_at_100 value: 81.025 - type: recall_at_1000 value: 94.883 - type: recall_at_3 value: 45.694 - type: recall_at_5 value: 51.684 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackEnglishRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 28.778 - type: map_at_10 value: 37.632 - type: map_at_100 value: 38.800000000000004 - type: map_at_1000 value: 38.934999999999995 - type: map_at_3 value: 35.293 - type: map_at_5 value: 36.547000000000004 - type: mrr_at_1 value: 35.35 - type: mrr_at_10 value: 42.936 - type: mrr_at_100 value: 43.69 - type: mrr_at_1000 value: 43.739 - type: mrr_at_3 value: 41.062 - type: mrr_at_5 value: 42.097 - type: ndcg_at_1 value: 35.35 - type: ndcg_at_10 value: 42.528 - type: ndcg_at_100 value: 46.983000000000004 - type: ndcg_at_1000 value: 49.187999999999995 - type: ndcg_at_3 value: 39.271 - type: ndcg_at_5 value: 40.654 - type: precision_at_1 value: 35.35 - type: precision_at_10 value: 7.828 - type: precision_at_100 value: 1.3010000000000002 - type: precision_at_1000 value: 0.17700000000000002 - type: precision_at_3 value: 18.96 - type: precision_at_5 value: 13.120999999999999 - type: recall_at_1 value: 28.778 - type: recall_at_10 value: 50.775000000000006 - type: recall_at_100 value: 69.66799999999999 - type: recall_at_1000 value: 83.638 - type: recall_at_3 value: 40.757 - type: recall_at_5 value: 44.86 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGamingRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 37.584 - type: map_at_10 value: 49.69 - type: map_at_100 value: 50.639 - type: map_at_1000 value: 50.702999999999996 - type: map_at_3 value: 46.61 - 
type: map_at_5 value: 48.486000000000004 - type: mrr_at_1 value: 43.009 - type: mrr_at_10 value: 52.949999999999996 - type: mrr_at_100 value: 53.618 - type: mrr_at_1000 value: 53.65299999999999 - type: mrr_at_3 value: 50.605999999999995 - type: mrr_at_5 value: 52.095 - type: ndcg_at_1 value: 43.009 - type: ndcg_at_10 value: 55.278000000000006 - type: ndcg_at_100 value: 59.134 - type: ndcg_at_1000 value: 60.528999999999996 - type: ndcg_at_3 value: 50.184 - type: ndcg_at_5 value: 52.919000000000004 - type: precision_at_1 value: 43.009 - type: precision_at_10 value: 8.821 - type: precision_at_100 value: 1.161 - type: precision_at_1000 value: 0.133 - type: precision_at_3 value: 22.424 - type: precision_at_5 value: 15.436 - type: recall_at_1 value: 37.584 - type: recall_at_10 value: 68.514 - type: recall_at_100 value: 85.099 - type: recall_at_1000 value: 95.123 - type: recall_at_3 value: 55.007 - type: recall_at_5 value: 61.714999999999996 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGisRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 24.7 - type: map_at_10 value: 32.804 - type: map_at_100 value: 33.738 - type: map_at_1000 value: 33.825 - type: map_at_3 value: 30.639 - type: map_at_5 value: 31.781 - type: mrr_at_1 value: 26.328000000000003 - type: mrr_at_10 value: 34.679 - type: mrr_at_100 value: 35.510000000000005 - type: mrr_at_1000 value: 35.577999999999996 - type: mrr_at_3 value: 32.58 - type: mrr_at_5 value: 33.687 - type: ndcg_at_1 value: 26.328000000000003 - type: ndcg_at_10 value: 37.313 - type: ndcg_at_100 value: 42.004000000000005 - type: ndcg_at_1000 value: 44.232 - type: ndcg_at_3 value: 33.076 - type: ndcg_at_5 value: 34.966 - type: precision_at_1 value: 26.328000000000003 - type: precision_at_10 value: 5.627 - type: precision_at_100 value: 0.8410000000000001 - type: precision_at_1000 value: 0.106 - type: precision_at_3 value: 14.011000000000001 - type: precision_at_5 value: 9.582 - type: recall_at_1 value: 24.7 - type: recall_at_10 value: 49.324 - type: recall_at_100 value: 71.018 - type: recall_at_1000 value: 87.905 - type: recall_at_3 value: 37.7 - type: recall_at_5 value: 42.281 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackMathematicaRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 14.350999999999999 - type: map_at_10 value: 21.745 - type: map_at_100 value: 22.731 - type: map_at_1000 value: 22.852 - type: map_at_3 value: 19.245 - type: map_at_5 value: 20.788 - type: mrr_at_1 value: 18.159 - type: mrr_at_10 value: 25.833000000000002 - type: mrr_at_100 value: 26.728 - type: mrr_at_1000 value: 26.802 - type: mrr_at_3 value: 23.383000000000003 - type: mrr_at_5 value: 24.887999999999998 - type: ndcg_at_1 value: 18.159 - type: ndcg_at_10 value: 26.518000000000004 - type: ndcg_at_100 value: 31.473000000000003 - type: ndcg_at_1000 value: 34.576 - type: ndcg_at_3 value: 21.907 - type: ndcg_at_5 value: 24.39 - type: precision_at_1 value: 18.159 - type: precision_at_10 value: 4.938 - type: precision_at_100 value: 0.853 - type: precision_at_1000 value: 0.125 - type: precision_at_3 value: 10.655000000000001 - type: precision_at_5 value: 7.985 - type: recall_at_1 value: 14.350999999999999 - type: recall_at_10 value: 37.284 - type: recall_at_100 value: 59.11300000000001 - type: recall_at_1000 value: 81.634 - type: recall_at_3 value: 24.753 - type: recall_at_5 value: 30.979 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB 
CQADupstackPhysicsRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 26.978 - type: map_at_10 value: 36.276 - type: map_at_100 value: 37.547000000000004 - type: map_at_1000 value: 37.678 - type: map_at_3 value: 33.674 - type: map_at_5 value: 35.119 - type: mrr_at_1 value: 32.916000000000004 - type: mrr_at_10 value: 41.798 - type: mrr_at_100 value: 42.72 - type: mrr_at_1000 value: 42.778 - type: mrr_at_3 value: 39.493 - type: mrr_at_5 value: 40.927 - type: ndcg_at_1 value: 32.916000000000004 - type: ndcg_at_10 value: 41.81 - type: ndcg_at_100 value: 47.284 - type: ndcg_at_1000 value: 49.702 - type: ndcg_at_3 value: 37.486999999999995 - type: ndcg_at_5 value: 39.597 - type: precision_at_1 value: 32.916000000000004 - type: precision_at_10 value: 7.411 - type: precision_at_100 value: 1.189 - type: precision_at_1000 value: 0.158 - type: precision_at_3 value: 17.581 - type: precision_at_5 value: 12.397 - type: recall_at_1 value: 26.978 - type: recall_at_10 value: 52.869 - type: recall_at_100 value: 75.78399999999999 - type: recall_at_1000 value: 91.545 - type: recall_at_3 value: 40.717 - type: recall_at_5 value: 46.168 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackProgrammersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 24.641 - type: map_at_10 value: 32.916000000000004 - type: map_at_100 value: 34.165 - type: map_at_1000 value: 34.286 - type: map_at_3 value: 30.335 - type: map_at_5 value: 31.569000000000003 - type: mrr_at_1 value: 30.593999999999998 - type: mrr_at_10 value: 38.448 - type: mrr_at_100 value: 39.299 - type: mrr_at_1000 value: 39.362 - type: mrr_at_3 value: 36.244 - type: mrr_at_5 value: 37.232 - type: ndcg_at_1 value: 30.593999999999998 - type: ndcg_at_10 value: 38.2 - type: ndcg_at_100 value: 43.742 - type: ndcg_at_1000 value: 46.217000000000006 - type: ndcg_at_3 value: 33.925 - type: ndcg_at_5 value: 35.394 - type: precision_at_1 value: 30.593999999999998 - type: precision_at_10 value: 6.895 - type: precision_at_100 value: 1.1320000000000001 - type: precision_at_1000 value: 0.153 - type: precision_at_3 value: 16.096 - type: precision_at_5 value: 11.05 - type: recall_at_1 value: 24.641 - type: recall_at_10 value: 48.588 - type: recall_at_100 value: 72.841 - type: recall_at_1000 value: 89.535 - type: recall_at_3 value: 36.087 - type: recall_at_5 value: 40.346 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 24.79425 - type: map_at_10 value: 33.12033333333333 - type: map_at_100 value: 34.221333333333334 - type: map_at_1000 value: 34.3435 - type: map_at_3 value: 30.636583333333338 - type: map_at_5 value: 31.974083333333326 - type: mrr_at_1 value: 29.242416666666664 - type: mrr_at_10 value: 37.11675 - type: mrr_at_100 value: 37.93783333333334 - type: mrr_at_1000 value: 38.003083333333336 - type: mrr_at_3 value: 34.904666666666664 - type: mrr_at_5 value: 36.12916666666667 - type: ndcg_at_1 value: 29.242416666666664 - type: ndcg_at_10 value: 38.03416666666667 - type: ndcg_at_100 value: 42.86674999999999 - type: ndcg_at_1000 value: 45.34550000000001 - type: ndcg_at_3 value: 33.76466666666666 - type: ndcg_at_5 value: 35.668666666666674 - type: precision_at_1 value: 29.242416666666664 - type: precision_at_10 value: 6.589833333333334 - type: precision_at_100 value: 1.0693333333333332 - type: precision_at_1000 value: 0.14641666666666667 - type: precision_at_3 value: 
15.430749999999998 - type: precision_at_5 value: 10.833833333333333 - type: recall_at_1 value: 24.79425 - type: recall_at_10 value: 48.582916666666655 - type: recall_at_100 value: 69.88499999999999 - type: recall_at_1000 value: 87.211 - type: recall_at_3 value: 36.625499999999995 - type: recall_at_5 value: 41.553999999999995 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackStatsRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 22.767 - type: map_at_10 value: 28.450999999999997 - type: map_at_100 value: 29.332 - type: map_at_1000 value: 29.426000000000002 - type: map_at_3 value: 26.379 - type: map_at_5 value: 27.584999999999997 - type: mrr_at_1 value: 25.46 - type: mrr_at_10 value: 30.974 - type: mrr_at_100 value: 31.784000000000002 - type: mrr_at_1000 value: 31.857999999999997 - type: mrr_at_3 value: 28.962 - type: mrr_at_5 value: 30.066 - type: ndcg_at_1 value: 25.46 - type: ndcg_at_10 value: 32.041 - type: ndcg_at_100 value: 36.522 - type: ndcg_at_1000 value: 39.101 - type: ndcg_at_3 value: 28.152 - type: ndcg_at_5 value: 30.03 - type: precision_at_1 value: 25.46 - type: precision_at_10 value: 4.893 - type: precision_at_100 value: 0.77 - type: precision_at_1000 value: 0.107 - type: precision_at_3 value: 11.605 - type: precision_at_5 value: 8.19 - type: recall_at_1 value: 22.767 - type: recall_at_10 value: 40.71 - type: recall_at_100 value: 61.334999999999994 - type: recall_at_1000 value: 80.567 - type: recall_at_3 value: 30.198000000000004 - type: recall_at_5 value: 34.803 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackTexRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 16.722 - type: map_at_10 value: 22.794 - type: map_at_100 value: 23.7 - type: map_at_1000 value: 23.822 - type: map_at_3 value: 20.781 - type: map_at_5 value: 22.024 - type: mrr_at_1 value: 20.061999999999998 - type: mrr_at_10 value: 26.346999999999998 - type: mrr_at_100 value: 27.153 - type: mrr_at_1000 value: 27.233 - type: mrr_at_3 value: 24.375 - type: mrr_at_5 value: 25.593 - type: ndcg_at_1 value: 20.061999999999998 - type: ndcg_at_10 value: 26.785999999999998 - type: ndcg_at_100 value: 31.319999999999997 - type: ndcg_at_1000 value: 34.346 - type: ndcg_at_3 value: 23.219 - type: ndcg_at_5 value: 25.107000000000003 - type: precision_at_1 value: 20.061999999999998 - type: precision_at_10 value: 4.78 - type: precision_at_100 value: 0.83 - type: precision_at_1000 value: 0.125 - type: precision_at_3 value: 10.874 - type: precision_at_5 value: 7.956 - type: recall_at_1 value: 16.722 - type: recall_at_10 value: 35.204 - type: recall_at_100 value: 55.797 - type: recall_at_1000 value: 77.689 - type: recall_at_3 value: 25.245 - type: recall_at_5 value: 30.115 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackUnixRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 24.842 - type: map_at_10 value: 32.917 - type: map_at_100 value: 33.961000000000006 - type: map_at_1000 value: 34.069 - type: map_at_3 value: 30.595 - type: map_at_5 value: 31.837 - type: mrr_at_1 value: 29.011 - type: mrr_at_10 value: 36.977 - type: mrr_at_100 value: 37.814 - type: mrr_at_1000 value: 37.885999999999996 - type: mrr_at_3 value: 34.966 - type: mrr_at_5 value: 36.043 - type: ndcg_at_1 value: 29.011 - type: ndcg_at_10 value: 37.735 - type: ndcg_at_100 value: 42.683 - type: ndcg_at_1000 value: 45.198 - type: ndcg_at_3 value: 33.650000000000006 - type: ndcg_at_5 
value: 35.386 - type: precision_at_1 value: 29.011 - type: precision_at_10 value: 6.259 - type: precision_at_100 value: 0.984 - type: precision_at_1000 value: 0.13 - type: precision_at_3 value: 15.329999999999998 - type: precision_at_5 value: 10.541 - type: recall_at_1 value: 24.842 - type: recall_at_10 value: 48.304 - type: recall_at_100 value: 70.04899999999999 - type: recall_at_1000 value: 87.82600000000001 - type: recall_at_3 value: 36.922 - type: recall_at_5 value: 41.449999999999996 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWebmastersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 24.252000000000002 - type: map_at_10 value: 32.293 - type: map_at_100 value: 33.816 - type: map_at_1000 value: 34.053 - type: map_at_3 value: 29.781999999999996 - type: map_at_5 value: 31.008000000000003 - type: mrr_at_1 value: 29.051 - type: mrr_at_10 value: 36.722 - type: mrr_at_100 value: 37.663000000000004 - type: mrr_at_1000 value: 37.734 - type: mrr_at_3 value: 34.354 - type: mrr_at_5 value: 35.609 - type: ndcg_at_1 value: 29.051 - type: ndcg_at_10 value: 37.775999999999996 - type: ndcg_at_100 value: 43.221 - type: ndcg_at_1000 value: 46.116 - type: ndcg_at_3 value: 33.403 - type: ndcg_at_5 value: 35.118 - type: precision_at_1 value: 29.051 - type: precision_at_10 value: 7.332 - type: precision_at_100 value: 1.49 - type: precision_at_1000 value: 0.23600000000000002 - type: precision_at_3 value: 15.415000000000001 - type: precision_at_5 value: 11.107 - type: recall_at_1 value: 24.252000000000002 - type: recall_at_10 value: 47.861 - type: recall_at_100 value: 72.21600000000001 - type: recall_at_1000 value: 90.886 - type: recall_at_3 value: 35.533 - type: recall_at_5 value: 39.959 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWordpressRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 20.025000000000002 - type: map_at_10 value: 27.154 - type: map_at_100 value: 28.118 - type: map_at_1000 value: 28.237000000000002 - type: map_at_3 value: 25.017 - type: map_at_5 value: 25.832 - type: mrr_at_1 value: 21.627 - type: mrr_at_10 value: 28.884999999999998 - type: mrr_at_100 value: 29.741 - type: mrr_at_1000 value: 29.831999999999997 - type: mrr_at_3 value: 26.741 - type: mrr_at_5 value: 27.628000000000004 - type: ndcg_at_1 value: 21.627 - type: ndcg_at_10 value: 31.436999999999998 - type: ndcg_at_100 value: 36.181000000000004 - type: ndcg_at_1000 value: 38.986 - type: ndcg_at_3 value: 27.025 - type: ndcg_at_5 value: 28.436 - type: precision_at_1 value: 21.627 - type: precision_at_10 value: 5.009 - type: precision_at_100 value: 0.7929999999999999 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 11.522 - type: precision_at_5 value: 7.763000000000001 - type: recall_at_1 value: 20.025000000000002 - type: recall_at_10 value: 42.954 - type: recall_at_100 value: 64.67500000000001 - type: recall_at_1000 value: 85.301 - type: recall_at_3 value: 30.892999999999997 - type: recall_at_5 value: 34.288000000000004 - task: type: Retrieval dataset: type: climate-fever name: MTEB ClimateFEVER config: default split: test revision: None metrics: - type: map_at_1 value: 10.079 - type: map_at_10 value: 16.930999999999997 - type: map_at_100 value: 18.398999999999997 - type: map_at_1000 value: 18.561 - type: map_at_3 value: 14.294 - type: map_at_5 value: 15.579 - type: mrr_at_1 value: 22.606 - type: mrr_at_10 value: 32.513 - type: mrr_at_100 value: 33.463 - type: 
mrr_at_1000 value: 33.513999999999996 - type: mrr_at_3 value: 29.479 - type: mrr_at_5 value: 31.3 - type: ndcg_at_1 value: 22.606 - type: ndcg_at_10 value: 24.053 - type: ndcg_at_100 value: 30.258000000000003 - type: ndcg_at_1000 value: 33.516 - type: ndcg_at_3 value: 19.721 - type: ndcg_at_5 value: 21.144 - type: precision_at_1 value: 22.606 - type: precision_at_10 value: 7.55 - type: precision_at_100 value: 1.399 - type: precision_at_1000 value: 0.2 - type: precision_at_3 value: 14.701 - type: precision_at_5 value: 11.192 - type: recall_at_1 value: 10.079 - type: recall_at_10 value: 28.970000000000002 - type: recall_at_100 value: 50.805 - type: recall_at_1000 value: 69.378 - type: recall_at_3 value: 18.199 - type: recall_at_5 value: 22.442 - task: type: Retrieval dataset: type: dbpedia-entity name: MTEB DBPedia config: default split: test revision: None metrics: - type: map_at_1 value: 7.794 - type: map_at_10 value: 15.165999999999999 - type: map_at_100 value: 20.508000000000003 - type: map_at_1000 value: 21.809 - type: map_at_3 value: 11.568000000000001 - type: map_at_5 value: 13.059000000000001 - type: mrr_at_1 value: 56.49999999999999 - type: mrr_at_10 value: 65.90899999999999 - type: mrr_at_100 value: 66.352 - type: mrr_at_1000 value: 66.369 - type: mrr_at_3 value: 64.0 - type: mrr_at_5 value: 65.10000000000001 - type: ndcg_at_1 value: 44.25 - type: ndcg_at_10 value: 32.649 - type: ndcg_at_100 value: 36.668 - type: ndcg_at_1000 value: 43.918 - type: ndcg_at_3 value: 37.096000000000004 - type: ndcg_at_5 value: 34.048 - type: precision_at_1 value: 56.49999999999999 - type: precision_at_10 value: 25.45 - type: precision_at_100 value: 8.055 - type: precision_at_1000 value: 1.7489999999999999 - type: precision_at_3 value: 41.0 - type: precision_at_5 value: 32.85 - type: recall_at_1 value: 7.794 - type: recall_at_10 value: 20.101 - type: recall_at_100 value: 42.448 - type: recall_at_1000 value: 65.88000000000001 - type: recall_at_3 value: 12.753 - type: recall_at_5 value: 15.307 - task: type: Classification dataset: type: mteb/emotion name: MTEB EmotionClassification config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 44.01 - type: f1 value: 38.659680951114964 - task: type: Retrieval dataset: type: fever name: MTEB FEVER config: default split: test revision: None metrics: - type: map_at_1 value: 49.713 - type: map_at_10 value: 61.79 - type: map_at_100 value: 62.28 - type: map_at_1000 value: 62.297000000000004 - type: map_at_3 value: 59.361 - type: map_at_5 value: 60.92100000000001 - type: mrr_at_1 value: 53.405 - type: mrr_at_10 value: 65.79899999999999 - type: mrr_at_100 value: 66.219 - type: mrr_at_1000 value: 66.227 - type: mrr_at_3 value: 63.431000000000004 - type: mrr_at_5 value: 64.98 - type: ndcg_at_1 value: 53.405 - type: ndcg_at_10 value: 68.01899999999999 - type: ndcg_at_100 value: 70.197 - type: ndcg_at_1000 value: 70.571 - type: ndcg_at_3 value: 63.352 - type: ndcg_at_5 value: 66.018 - type: precision_at_1 value: 53.405 - type: precision_at_10 value: 9.119 - type: precision_at_100 value: 1.03 - type: precision_at_1000 value: 0.107 - type: precision_at_3 value: 25.602999999999998 - type: precision_at_5 value: 16.835 - type: recall_at_1 value: 49.713 - type: recall_at_10 value: 83.306 - type: recall_at_100 value: 92.92 - type: recall_at_1000 value: 95.577 - type: recall_at_3 value: 70.798 - type: recall_at_5 value: 77.254 - task: type: Retrieval dataset: type: fiqa name: MTEB FiQA2018 config: default split: test 
revision: None metrics: - type: map_at_1 value: 15.310000000000002 - type: map_at_10 value: 26.204 - type: map_at_100 value: 27.932000000000002 - type: map_at_1000 value: 28.121000000000002 - type: map_at_3 value: 22.481 - type: map_at_5 value: 24.678 - type: mrr_at_1 value: 29.784 - type: mrr_at_10 value: 39.582 - type: mrr_at_100 value: 40.52 - type: mrr_at_1000 value: 40.568 - type: mrr_at_3 value: 37.114000000000004 - type: mrr_at_5 value: 38.596000000000004 - type: ndcg_at_1 value: 29.784 - type: ndcg_at_10 value: 33.432 - type: ndcg_at_100 value: 40.281 - type: ndcg_at_1000 value: 43.653999999999996 - type: ndcg_at_3 value: 29.612 - type: ndcg_at_5 value: 31.223 - type: precision_at_1 value: 29.784 - type: precision_at_10 value: 9.645 - type: precision_at_100 value: 1.645 - type: precision_at_1000 value: 0.22499999999999998 - type: precision_at_3 value: 20.165 - type: precision_at_5 value: 15.401000000000002 - type: recall_at_1 value: 15.310000000000002 - type: recall_at_10 value: 40.499 - type: recall_at_100 value: 66.643 - type: recall_at_1000 value: 87.059 - type: recall_at_3 value: 27.492 - type: recall_at_5 value: 33.748 - task: type: Retrieval dataset: type: hotpotqa name: MTEB HotpotQA config: default split: test revision: None metrics: - type: map_at_1 value: 33.599000000000004 - type: map_at_10 value: 47.347 - type: map_at_100 value: 48.191 - type: map_at_1000 value: 48.263 - type: map_at_3 value: 44.698 - type: map_at_5 value: 46.278999999999996 - type: mrr_at_1 value: 67.19800000000001 - type: mrr_at_10 value: 74.054 - type: mrr_at_100 value: 74.376 - type: mrr_at_1000 value: 74.392 - type: mrr_at_3 value: 72.849 - type: mrr_at_5 value: 73.643 - type: ndcg_at_1 value: 67.19800000000001 - type: ndcg_at_10 value: 56.482 - type: ndcg_at_100 value: 59.694 - type: ndcg_at_1000 value: 61.204 - type: ndcg_at_3 value: 52.43299999999999 - type: ndcg_at_5 value: 54.608000000000004 - type: precision_at_1 value: 67.19800000000001 - type: precision_at_10 value: 11.613999999999999 - type: precision_at_100 value: 1.415 - type: precision_at_1000 value: 0.16199999999999998 - type: precision_at_3 value: 32.726 - type: precision_at_5 value: 21.349999999999998 - type: recall_at_1 value: 33.599000000000004 - type: recall_at_10 value: 58.069 - type: recall_at_100 value: 70.736 - type: recall_at_1000 value: 80.804 - type: recall_at_3 value: 49.088 - type: recall_at_5 value: 53.376000000000005 - task: type: Classification dataset: type: mteb/imdb name: MTEB ImdbClassification config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 73.64359999999999 - type: ap value: 67.54685976014599 - type: f1 value: 73.55148707559482 - task: type: Retrieval dataset: type: msmarco name: MTEB MSMARCO config: default split: dev revision: None metrics: - type: map_at_1 value: 19.502 - type: map_at_10 value: 30.816 - type: map_at_100 value: 32.007999999999996 - type: map_at_1000 value: 32.067 - type: map_at_3 value: 27.215 - type: map_at_5 value: 29.304000000000002 - type: mrr_at_1 value: 20.072000000000003 - type: mrr_at_10 value: 31.406 - type: mrr_at_100 value: 32.549 - type: mrr_at_1000 value: 32.602 - type: mrr_at_3 value: 27.839000000000002 - type: mrr_at_5 value: 29.926000000000002 - type: ndcg_at_1 value: 20.086000000000002 - type: ndcg_at_10 value: 37.282 - type: ndcg_at_100 value: 43.206 - type: ndcg_at_1000 value: 44.690000000000005 - type: ndcg_at_3 value: 29.932 - type: ndcg_at_5 value: 33.668 - type: precision_at_1 value: 20.086000000000002 - 
type: precision_at_10 value: 5.961 - type: precision_at_100 value: 0.898 - type: precision_at_1000 value: 0.10200000000000001 - type: precision_at_3 value: 12.856000000000002 - type: precision_at_5 value: 9.596 - type: recall_at_1 value: 19.502 - type: recall_at_10 value: 57.182 - type: recall_at_100 value: 84.952 - type: recall_at_1000 value: 96.34700000000001 - type: recall_at_3 value: 37.193 - type: recall_at_5 value: 46.157 - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (en) config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 93.96488828089375 - type: f1 value: 93.32119260543482 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (en) config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 72.4965800273598 - type: f1 value: 49.34896217536082 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (en) config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 67.60928043039678 - type: f1 value: 64.34244712074538 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (en) config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 69.75453934095493 - type: f1 value: 68.39224867489249 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-p2p name: MTEB MedrxivClusteringP2P config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 31.862573504920082 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-s2s name: MTEB MedrxivClusteringS2S config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 27.511123551196803 - task: type: Reranking dataset: type: mteb/mind_small name: MTEB MindSmallReranking config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 30.99145104942086 - type: mrr value: 32.03606480418627 - task: type: Retrieval dataset: type: nfcorpus name: MTEB NFCorpus config: default split: test revision: None metrics: - type: map_at_1 value: 5.015 - type: map_at_10 value: 11.054 - type: map_at_100 value: 13.773 - type: map_at_1000 value: 15.082999999999998 - type: map_at_3 value: 8.253 - type: map_at_5 value: 9.508999999999999 - type: mrr_at_1 value: 42.105 - type: mrr_at_10 value: 50.44499999999999 - type: mrr_at_100 value: 51.080000000000005 - type: mrr_at_1000 value: 51.129999999999995 - type: mrr_at_3 value: 48.555 - type: mrr_at_5 value: 49.84 - type: ndcg_at_1 value: 40.402 - type: ndcg_at_10 value: 30.403000000000002 - type: ndcg_at_100 value: 28.216 - type: ndcg_at_1000 value: 37.021 - type: ndcg_at_3 value: 35.53 - type: ndcg_at_5 value: 33.202999999999996 - type: precision_at_1 value: 42.105 - type: precision_at_10 value: 22.353 - type: precision_at_100 value: 7.266 - type: precision_at_1000 value: 2.011 - type: precision_at_3 value: 32.921 - type: precision_at_5 value: 28.297 - type: recall_at_1 value: 5.015 - type: recall_at_10 value: 14.393 - type: recall_at_100 value: 28.893 - type: recall_at_1000 value: 60.18 - type: recall_at_3 value: 9.184000000000001 - type: recall_at_5 value: 11.39 - task: type: Retrieval dataset: type: nq name: MTEB NQ config: default split: test revision: None 
metrics: - type: map_at_1 value: 29.524 - type: map_at_10 value: 44.182 - type: map_at_100 value: 45.228 - type: map_at_1000 value: 45.265 - type: map_at_3 value: 39.978 - type: map_at_5 value: 42.482 - type: mrr_at_1 value: 33.256 - type: mrr_at_10 value: 46.661 - type: mrr_at_100 value: 47.47 - type: mrr_at_1000 value: 47.496 - type: mrr_at_3 value: 43.187999999999995 - type: mrr_at_5 value: 45.330999999999996 - type: ndcg_at_1 value: 33.227000000000004 - type: ndcg_at_10 value: 51.589 - type: ndcg_at_100 value: 56.043 - type: ndcg_at_1000 value: 56.937000000000005 - type: ndcg_at_3 value: 43.751 - type: ndcg_at_5 value: 47.937000000000005 - type: precision_at_1 value: 33.227000000000004 - type: precision_at_10 value: 8.556999999999999 - type: precision_at_100 value: 1.103 - type: precision_at_1000 value: 0.11900000000000001 - type: precision_at_3 value: 19.921 - type: precision_at_5 value: 14.396999999999998 - type: recall_at_1 value: 29.524 - type: recall_at_10 value: 71.615 - type: recall_at_100 value: 91.056 - type: recall_at_1000 value: 97.72800000000001 - type: recall_at_3 value: 51.451 - type: recall_at_5 value: 61.119 - task: type: Retrieval dataset: type: quora name: MTEB QuoraRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 69.596 - type: map_at_10 value: 83.281 - type: map_at_100 value: 83.952 - type: map_at_1000 value: 83.97200000000001 - type: map_at_3 value: 80.315 - type: map_at_5 value: 82.223 - type: mrr_at_1 value: 80.17 - type: mrr_at_10 value: 86.522 - type: mrr_at_100 value: 86.644 - type: mrr_at_1000 value: 86.64500000000001 - type: mrr_at_3 value: 85.438 - type: mrr_at_5 value: 86.21799999999999 - type: ndcg_at_1 value: 80.19 - type: ndcg_at_10 value: 87.19 - type: ndcg_at_100 value: 88.567 - type: ndcg_at_1000 value: 88.70400000000001 - type: ndcg_at_3 value: 84.17999999999999 - type: ndcg_at_5 value: 85.931 - type: precision_at_1 value: 80.19 - type: precision_at_10 value: 13.209000000000001 - type: precision_at_100 value: 1.518 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 36.717 - type: precision_at_5 value: 24.248 - type: recall_at_1 value: 69.596 - type: recall_at_10 value: 94.533 - type: recall_at_100 value: 99.322 - type: recall_at_1000 value: 99.965 - type: recall_at_3 value: 85.911 - type: recall_at_5 value: 90.809 - task: type: Clustering dataset: type: mteb/reddit-clustering name: MTEB RedditClustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 49.27650627571912 - task: type: Clustering dataset: type: mteb/reddit-clustering-p2p name: MTEB RedditClusteringP2P config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 57.08550946534183 - task: type: Retrieval dataset: type: scidocs name: MTEB SCIDOCS config: default split: test revision: None metrics: - type: map_at_1 value: 4.568 - type: map_at_10 value: 10.862 - type: map_at_100 value: 12.757 - type: map_at_1000 value: 13.031 - type: map_at_3 value: 7.960000000000001 - type: map_at_5 value: 9.337 - type: mrr_at_1 value: 22.5 - type: mrr_at_10 value: 32.6 - type: mrr_at_100 value: 33.603 - type: mrr_at_1000 value: 33.672000000000004 - type: mrr_at_3 value: 29.299999999999997 - type: mrr_at_5 value: 31.25 - type: ndcg_at_1 value: 22.5 - type: ndcg_at_10 value: 18.605 - type: ndcg_at_100 value: 26.029999999999998 - type: ndcg_at_1000 value: 31.256 - type: ndcg_at_3 value: 17.873 - type: ndcg_at_5 value: 15.511 - type: 
precision_at_1 value: 22.5 - type: precision_at_10 value: 9.58 - type: precision_at_100 value: 2.033 - type: precision_at_1000 value: 0.33 - type: precision_at_3 value: 16.633 - type: precision_at_5 value: 13.54 - type: recall_at_1 value: 4.568 - type: recall_at_10 value: 19.402 - type: recall_at_100 value: 41.277 - type: recall_at_1000 value: 66.963 - type: recall_at_3 value: 10.112 - type: recall_at_5 value: 13.712 - task: type: STS dataset: type: mteb/sickr-sts name: MTEB SICK-R config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 83.31992291680787 - type: cos_sim_spearman value: 76.7212346922664 - type: euclidean_pearson value: 80.42189271706478 - type: euclidean_spearman value: 76.7212342532493 - type: manhattan_pearson value: 80.33171093031578 - type: manhattan_spearman value: 76.63192883074694 - task: type: STS dataset: type: mteb/sts12-sts name: MTEB STS12 config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 83.16654278886763 - type: cos_sim_spearman value: 73.66390263429565 - type: euclidean_pearson value: 79.7485360086639 - type: euclidean_spearman value: 73.66389870373436 - type: manhattan_pearson value: 79.73652237443706 - type: manhattan_spearman value: 73.65296117151647 - task: type: STS dataset: type: mteb/sts13-sts name: MTEB STS13 config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 82.40389689929246 - type: cos_sim_spearman value: 83.29727595993955 - type: euclidean_pearson value: 82.23970587854079 - type: euclidean_spearman value: 83.29727595993955 - type: manhattan_pearson value: 82.18823600831897 - type: manhattan_spearman value: 83.20746192209594 - task: type: STS dataset: type: mteb/sts14-sts name: MTEB STS14 config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 81.73505246913413 - type: cos_sim_spearman value: 79.1686548248754 - type: euclidean_pearson value: 80.48889135993412 - type: euclidean_spearman value: 79.16864112930354 - type: manhattan_pearson value: 80.40720651057302 - type: manhattan_spearman value: 79.0640155089286 - task: type: STS dataset: type: mteb/sts15-sts name: MTEB STS15 config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 86.3953512879065 - type: cos_sim_spearman value: 87.29947322714338 - type: euclidean_pearson value: 86.59759438529645 - type: euclidean_spearman value: 87.29947511092824 - type: manhattan_pearson value: 86.52097806169155 - type: manhattan_spearman value: 87.22987242146534 - task: type: STS dataset: type: mteb/sts16-sts name: MTEB STS16 config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 82.48565753792056 - type: cos_sim_spearman value: 83.6049720319893 - type: euclidean_pearson value: 82.56452023172913 - type: euclidean_spearman value: 83.60490168191697 - type: manhattan_pearson value: 82.58079941137872 - type: manhattan_spearman value: 83.60975807374051 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (en-en) config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 88.18239976618212 - type: cos_sim_spearman value: 88.23061724730616 - type: euclidean_pearson value: 87.78482472776658 - type: euclidean_spearman value: 88.23061724730616 - type: 
manhattan_pearson value: 87.75059641730239 - type: manhattan_spearman value: 88.22527413524622 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (en) config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 63.42816418706765 - type: cos_sim_spearman value: 63.4569864520124 - type: euclidean_pearson value: 64.35405409953853 - type: euclidean_spearman value: 63.4569864520124 - type: manhattan_pearson value: 63.96649236073056 - type: manhattan_spearman value: 63.01448583722708 - task: type: STS dataset: type: mteb/stsbenchmark-sts name: MTEB STSBenchmark config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 83.41659638047614 - type: cos_sim_spearman value: 84.03893866106175 - type: euclidean_pearson value: 84.2251203953798 - type: euclidean_spearman value: 84.03893866106175 - type: manhattan_pearson value: 84.22733643205514 - type: manhattan_spearman value: 84.06504411263612 - task: type: Reranking dataset: type: mteb/scidocs-reranking name: MTEB SciDocsRR config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 79.75608022582414 - type: mrr value: 94.0947732369301 - task: type: Retrieval dataset: type: scifact name: MTEB SciFact config: default split: test revision: None metrics: - type: map_at_1 value: 50.161 - type: map_at_10 value: 59.458999999999996 - type: map_at_100 value: 60.156 - type: map_at_1000 value: 60.194 - type: map_at_3 value: 56.45400000000001 - type: map_at_5 value: 58.165 - type: mrr_at_1 value: 53.333 - type: mrr_at_10 value: 61.050000000000004 - type: mrr_at_100 value: 61.586 - type: mrr_at_1000 value: 61.624 - type: mrr_at_3 value: 58.889 - type: mrr_at_5 value: 60.122 - type: ndcg_at_1 value: 53.333 - type: ndcg_at_10 value: 63.888999999999996 - type: ndcg_at_100 value: 66.963 - type: ndcg_at_1000 value: 68.062 - type: ndcg_at_3 value: 59.01 - type: ndcg_at_5 value: 61.373999999999995 - type: precision_at_1 value: 53.333 - type: precision_at_10 value: 8.633000000000001 - type: precision_at_100 value: 1.027 - type: precision_at_1000 value: 0.11199999999999999 - type: precision_at_3 value: 23.111 - type: precision_at_5 value: 15.467 - type: recall_at_1 value: 50.161 - type: recall_at_10 value: 75.922 - type: recall_at_100 value: 90.0 - type: recall_at_1000 value: 98.667 - type: recall_at_3 value: 62.90599999999999 - type: recall_at_5 value: 68.828 - task: type: PairClassification dataset: type: mteb/sprintduplicatequestions-pairclassification name: MTEB SprintDuplicateQuestions config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.81188118811882 - type: cos_sim_ap value: 95.11619225962413 - type: cos_sim_f1 value: 90.35840484603736 - type: cos_sim_precision value: 91.23343527013252 - type: cos_sim_recall value: 89.5 - type: dot_accuracy value: 99.81188118811882 - type: dot_ap value: 95.11619225962413 - type: dot_f1 value: 90.35840484603736 - type: dot_precision value: 91.23343527013252 - type: dot_recall value: 89.5 - type: euclidean_accuracy value: 99.81188118811882 - type: euclidean_ap value: 95.11619225962413 - type: euclidean_f1 value: 90.35840484603736 - type: euclidean_precision value: 91.23343527013252 - type: euclidean_recall value: 89.5 - type: manhattan_accuracy value: 99.80891089108911 - type: manhattan_ap value: 95.07294266220966 - type: manhattan_f1 value: 90.21794221996959 - type: 
manhattan_precision value: 91.46968139773895 - type: manhattan_recall value: 89.0 - type: max_accuracy value: 99.81188118811882 - type: max_ap value: 95.11619225962413 - type: max_f1 value: 90.35840484603736 - task: type: Clustering dataset: type: mteb/stackexchange-clustering name: MTEB StackExchangeClustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 55.3481874105239 - task: type: Clustering dataset: type: mteb/stackexchange-clustering-p2p name: MTEB StackExchangeClusteringP2P config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 34.421291695525 - task: type: Reranking dataset: type: mteb/stackoverflowdupquestions-reranking name: MTEB StackOverflowDupQuestions config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 49.98746633276634 - type: mrr value: 50.63143249724133 - task: type: Summarization dataset: type: mteb/summeval name: MTEB SummEval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 31.009961979844036 - type: cos_sim_spearman value: 30.558416108881044 - type: dot_pearson value: 31.009964941134253 - type: dot_spearman value: 30.545760761761393 - task: type: Retrieval dataset: type: trec-covid name: MTEB TRECCOVID config: default split: test revision: None metrics: - type: map_at_1 value: 0.207 - type: map_at_10 value: 1.6 - type: map_at_100 value: 8.594 - type: map_at_1000 value: 20.213 - type: map_at_3 value: 0.585 - type: map_at_5 value: 0.9039999999999999 - type: mrr_at_1 value: 78.0 - type: mrr_at_10 value: 87.4 - type: mrr_at_100 value: 87.4 - type: mrr_at_1000 value: 87.4 - type: mrr_at_3 value: 86.667 - type: mrr_at_5 value: 87.06700000000001 - type: ndcg_at_1 value: 73.0 - type: ndcg_at_10 value: 65.18 - type: ndcg_at_100 value: 49.631 - type: ndcg_at_1000 value: 43.498999999999995 - type: ndcg_at_3 value: 71.83800000000001 - type: ndcg_at_5 value: 69.271 - type: precision_at_1 value: 78.0 - type: precision_at_10 value: 69.19999999999999 - type: precision_at_100 value: 50.980000000000004 - type: precision_at_1000 value: 19.426 - type: precision_at_3 value: 77.333 - type: precision_at_5 value: 74.0 - type: recall_at_1 value: 0.207 - type: recall_at_10 value: 1.822 - type: recall_at_100 value: 11.849 - type: recall_at_1000 value: 40.492 - type: recall_at_3 value: 0.622 - type: recall_at_5 value: 0.9809999999999999 - task: type: Retrieval dataset: type: webis-touche2020 name: MTEB Touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 2.001 - type: map_at_10 value: 10.376000000000001 - type: map_at_100 value: 16.936999999999998 - type: map_at_1000 value: 18.615000000000002 - type: map_at_3 value: 5.335999999999999 - type: map_at_5 value: 7.374 - type: mrr_at_1 value: 20.408 - type: mrr_at_10 value: 38.29 - type: mrr_at_100 value: 39.33 - type: mrr_at_1000 value: 39.347 - type: mrr_at_3 value: 32.993 - type: mrr_at_5 value: 36.973 - type: ndcg_at_1 value: 17.347 - type: ndcg_at_10 value: 23.515 - type: ndcg_at_100 value: 37.457 - type: ndcg_at_1000 value: 49.439 - type: ndcg_at_3 value: 22.762999999999998 - type: ndcg_at_5 value: 22.622 - type: precision_at_1 value: 20.408 - type: precision_at_10 value: 22.448999999999998 - type: precision_at_100 value: 8.184 - type: precision_at_1000 value: 1.608 - type: precision_at_3 value: 25.85 - type: precision_at_5 value: 25.306 - type: recall_at_1 value: 
2.001 - type: recall_at_10 value: 17.422 - type: recall_at_100 value: 51.532999999999994 - type: recall_at_1000 value: 87.466 - type: recall_at_3 value: 6.861000000000001 - type: recall_at_5 value: 10.502 - task: type: Classification dataset: type: mteb/toxic_conversations_50k name: MTEB ToxicConversationsClassification config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 71.54419999999999 - type: ap value: 14.372170450843907 - type: f1 value: 54.94420257390529 - task: type: Classification dataset: type: mteb/tweet_sentiment_extraction name: MTEB TweetSentimentExtractionClassification config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 59.402942840973395 - type: f1 value: 59.4166538875571 - task: type: Clustering dataset: type: mteb/twentynewsgroups-clustering name: MTEB TwentyNewsgroupsClustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 41.569064336457906 - task: type: PairClassification dataset: type: mteb/twittersemeval2015-pairclassification name: MTEB TwitterSemEval2015 config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 85.31322644096085 - type: cos_sim_ap value: 72.14518894837381 - type: cos_sim_f1 value: 66.67489813557229 - type: cos_sim_precision value: 62.65954977953121 - type: cos_sim_recall value: 71.2401055408971 - type: dot_accuracy value: 85.31322644096085 - type: dot_ap value: 72.14521480685293 - type: dot_f1 value: 66.67489813557229 - type: dot_precision value: 62.65954977953121 - type: dot_recall value: 71.2401055408971 - type: euclidean_accuracy value: 85.31322644096085 - type: euclidean_ap value: 72.14520820485349 - type: euclidean_f1 value: 66.67489813557229 - type: euclidean_precision value: 62.65954977953121 - type: euclidean_recall value: 71.2401055408971 - type: manhattan_accuracy value: 85.21785778148656 - type: manhattan_ap value: 72.01177147657364 - type: manhattan_f1 value: 66.62594673833374 - type: manhattan_precision value: 62.0336669699727 - type: manhattan_recall value: 71.95250659630607 - type: max_accuracy value: 85.31322644096085 - type: max_ap value: 72.14521480685293 - type: max_f1 value: 66.67489813557229 - task: type: PairClassification dataset: type: mteb/twitterurlcorpus-pairclassification name: MTEB TwitterURLCorpus config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 89.12756626693057 - type: cos_sim_ap value: 86.05430786440826 - type: cos_sim_f1 value: 78.27759692216631 - type: cos_sim_precision value: 75.33466248931929 - type: cos_sim_recall value: 81.45980905451185 - type: dot_accuracy value: 89.12950673341872 - type: dot_ap value: 86.05431161145492 - type: dot_f1 value: 78.27759692216631 - type: dot_precision value: 75.33466248931929 - type: dot_recall value: 81.45980905451185 - type: euclidean_accuracy value: 89.12756626693057 - type: euclidean_ap value: 86.05431303247397 - type: euclidean_f1 value: 78.27759692216631 - type: euclidean_precision value: 75.33466248931929 - type: euclidean_recall value: 81.45980905451185 - type: manhattan_accuracy value: 89.04994760740482 - type: manhattan_ap value: 86.00860610892074 - type: manhattan_f1 value: 78.1846776005392 - type: manhattan_precision value: 76.10438839480975 - type: manhattan_recall value: 80.3818909762858 - type: max_accuracy value: 89.12950673341872 - type: max_ap 
value: 86.05431303247397 - type: max_f1 value: 78.27759692216631 ---
<br><br>

<p align="center">
<img src="https://aeiljuispo.cloudimg.io/v7/https://cdn-uploads.huggingface.co/production/uploads/603763514de52ff951d89793/AFoybzd5lpBQXEBrQHuTt.png?w=200&h=200&f=face" alt="Finetuner logo: Finetuner helps you to create experiments in order to improve embeddings on search tasks. It accompanies you to deliver the last mile of performance-tuning for neural search applications." width="150px">
</p>

<p align="center">
<b>The text embedding set trained by <a href="https://jina.ai/"><b>Jina AI</b></a>.</b>
</p>

## Quick Start

The easiest way to start using `jina-embeddings-v2-small-en` is to use Jina AI's [Embedding API](https://jina.ai/embeddings/).

## Intended Usage & Model Info

`jina-embeddings-v2-small-en` is an English, monolingual **embedding model** supporting an **8192-token sequence length**.
It is based on a BERT architecture (JinaBERT) that supports the symmetric bidirectional variant of [ALiBi](https://arxiv.org/abs/2108.12409) to allow longer sequence lengths.
The backbone `jina-bert-v2-small-en` is pretrained on the C4 dataset.
The model is further trained on Jina AI's collection of more than 400 million sentence pairs and hard negatives.
These pairs were obtained from various domains and were carefully selected through a thorough cleaning process.

The embedding model was trained with a sequence length of 512, but extrapolates to an 8k sequence length (or even longer) thanks to ALiBi.
This makes the model useful for a range of use cases involving long documents, including long document retrieval, semantic textual similarity, text reranking, recommendation, RAG, and LLM-based generative search.

The model has 33 million parameters, which enables lightning-fast and memory-efficient inference, while still delivering impressive performance.
Additionally, we provide the following embedding models:

- [`jina-embeddings-v2-small-en`](https://huggingface.co/jinaai/jina-embeddings-v2-small-en): 33 million parameters **(you are here)**.
- [`jina-embeddings-v2-base-en`](https://huggingface.co/jinaai/jina-embeddings-v2-base-en): 137 million parameters.
- [`jina-embeddings-v2-base-zh`](https://huggingface.co/jinaai/jina-embeddings-v2-base-zh): 161 million parameters, Chinese-English bilingual embeddings.
- [`jina-embeddings-v2-base-de`](https://huggingface.co/jinaai/jina-embeddings-v2-base-de): 161 million parameters, German-English bilingual embeddings.
- [`jina-embeddings-v2-base-es`](): Spanish-English bilingual embeddings (soon).

## Data & Parameters

See the Jina Embeddings V2 [technical report](https://arxiv.org/abs/2310.19923).

## Usage

**<details><summary>Please apply mean pooling when integrating the model.</summary>**
<p>

### Why mean pooling?

Mean pooling takes all token embeddings from the model output and averages them at the sentence/paragraph level.
It has proven to be a highly effective way to produce high-quality sentence embeddings.
We offer an `encode` function that handles this for you.
However, if you would like to do it without using the default `encode` function:

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

def mean_pooling(model_output, attention_mask):
    # Average the token embeddings, weighted by the attention mask so that
    # padding tokens do not contribute to the sentence embedding.
    token_embeddings = model_output[0]
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

sentences = ['How is the weather today?', 'What is the current weather like today?']

tokenizer = AutoTokenizer.from_pretrained('jinaai/jina-embeddings-v2-small-en')
model = AutoModel.from_pretrained('jinaai/jina-embeddings-v2-small-en', trust_remote_code=True)

encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

with torch.no_grad():
    model_output = model(**encoded_input)

embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
# L2-normalize so that dot products equal cosine similarities.
embeddings = F.normalize(embeddings, p=2, dim=1)
```

</p>
</details>

You can use Jina Embedding models directly from the transformers package.

```python
!pip install transformers
from transformers import AutoModel
from numpy.linalg import norm

cos_sim = lambda a, b: (a @ b.T) / (norm(a) * norm(b))

model = AutoModel.from_pretrained('jinaai/jina-embeddings-v2-small-en', trust_remote_code=True)  # trust_remote_code is needed to use the encode method
embeddings = model.encode(['How is the weather today?', 'What is the current weather like today?'])
print(cos_sim(embeddings[0], embeddings[1]))
```

If you only want to handle shorter sequences, such as 2k tokens, pass the `max_length` parameter to the `encode` function:

```python
embeddings = model.encode(
    ['Very long ... document'],
    max_length=2048
)
```

The latest sentence-transformers release also supports Jina embeddings:

```python
!pip install -U sentence-transformers
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

model = SentenceTransformer(
    "jinaai/jina-embeddings-v2-small-en",  # switch to en/zh for English or Chinese
    trust_remote_code=True
)

# control your input sequence length up to 8192
model.max_seq_length = 1024

embeddings = model.encode([
    'How is the weather today?',
    'What is the current weather like today?'
])
print(cos_sim(embeddings[0], embeddings[1]))
```

## Alternatives to Using Transformers Package

1. _Managed SaaS_: Get started with a free key on Jina AI's [Embedding API](https://jina.ai/embeddings/); a minimal HTTP sketch is shown after the Plans section below.
2. _Private and high-performance deployment_: Get started by picking from our suite of models and deploy them on [AWS Sagemaker](https://aws.amazon.com/marketplace/seller-profile?id=seller-stch2ludm6vgy).

## RAG Performance

According to the latest blog post from [LLamaIndex](https://blog.llamaindex.ai/boosting-rag-picking-the-best-embedding-reranker-models-42d079022e83),

> In summary, to achieve the peak performance in both hit rate and MRR, the combination of OpenAI or JinaAI-Base embeddings with the CohereRerank/bge-reranker-large reranker stands out.

<img src="https://miro.medium.com/v2/resize:fit:4800/format:webp/1*ZP2RVejCZovF3FDCg-Bx3A.png" width="780px">

## Plans

1. Bilingual embedding models supporting more European & Asian languages, including Spanish, French, Italian and Japanese.
2. Multimodal embedding models to enable multimodal RAG applications.
3. High-performance rerankers.
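As a quick illustration of the managed Embedding API option mentioned above, here is a minimal sketch of calling it over plain HTTP. The endpoint URL, JSON payload shape, response layout, and the `JINA_API_KEY` environment variable are illustrative assumptions, not details taken from this card; check the API documentation before relying on them.

```python
import os
import requests

# Assumed endpoint and payload shape (illustrative, not from this card).
resp = requests.post(
    "https://api.jina.ai/v1/embeddings",
    headers={"Authorization": f"Bearer {os.environ['JINA_API_KEY']}"},
    json={
        "model": "jina-embeddings-v2-small-en",
        "input": ["How is the weather today?"],
    },
)
resp.raise_for_status()

# Assumed response layout: a "data" list of objects with an "embedding" field.
embedding = resp.json()["data"][0]["embedding"]
print(len(embedding))
```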
## Troubleshooting

**Loading of Model Code failed**

If you forgot to pass the `trust_remote_code=True` flag when calling `AutoModel.from_pretrained` or initializing the model via the `SentenceTransformer` class, you will receive an error that the model weights could not be initialized.
This is caused by transformers falling back to creating a default BERT model, instead of a jina-embedding model:

```bash
Some weights of the model checkpoint at jinaai/jina-embeddings-v2-base-en were not used when initializing BertModel: ['encoder.layer.2.mlp.layernorm.weight', 'encoder.layer.3.mlp.layernorm.weight', 'encoder.layer.10.mlp.wo.bias', 'encoder.layer.5.mlp.wo.bias', 'encoder.layer.2.mlp.layernorm.bias', 'encoder.layer.1.mlp.gated_layers.weight', 'encoder.layer.5.mlp.gated_layers.weight', 'encoder.layer.8.mlp.layernorm.bias', ...
```

## Contact

Join our [Discord community](https://discord.jina.ai) and chat with other community members about ideas.

## Citation

If you find Jina Embeddings useful in your research, please cite the following paper:

```
@misc{günther2023jina,
      title={Jina Embeddings 2: 8192-Token General-Purpose Text Embeddings for Long Documents},
      author={Michael Günther and Jackmin Ong and Isabelle Mohr and Alaeddine Abdessalem and Tanguy Abel and Mohammad Kalim Akram and Susana Guzman and Georgios Mastrapas and Saba Sturua and Bo Wang and Maximilian Werk and Nan Wang and Han Xiao},
      year={2023},
      eprint={2310.19923},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
Qwen/Qwen1.5-1.8B
Qwen
"2024-04-05T10:39:41Z"
336,999
43
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "pretrained", "conversational", "en", "arxiv:2309.16609", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-01-22T16:53:32Z"
---
license: other
license_name: tongyi-qianwen-research
license_link: >-
  https://huggingface.co/Qwen/Qwen1.5-1.8B/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- pretrained
---

# Qwen1.5-1.8B

## Introduction

Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. In comparison with the previously released Qwen, the improvements include:

* 8 model sizes, including 0.5B, 1.8B, 4B, 7B, 14B, 32B and 72B dense models, and an MoE model of 14B with 2.7B activated;
* Significant performance improvement in Chat models;
* Multilingual support of both base and chat models;
* Stable support of 32K context length for models of all sizes;
* No need for `trust_remote_code`.

For more details, please refer to our [blog post](https://qwenlm.github.io/blog/qwen1.5/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5).

## Model Details

Qwen1.5 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, a mixture of sliding window attention and full attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and code. For this beta version, we have temporarily not included GQA (except for 32B) or the mixture of SWA and full attention.

## Requirements

The code of Qwen1.5 has been merged into the latest Hugging Face transformers, and we advise you to install `transformers>=4.37.0`, or you might encounter the following error:
```
KeyError: 'qwen2'.
```

## Usage

We do not advise you to use base language models for text generation. Instead, you can apply post-training, e.g., SFT, RLHF, continued pretraining, etc., on this model.

## Citation

If you find our work helpful, feel free to give us a cite.

```
@article{qwen,
  title={Qwen Technical Report},
  author={Jinze Bai and Shuai Bai and Yunfei Chu and Zeyu Cui and Kai Dang and Xiaodong Deng and Yang Fan and Wenbin Ge and Yu Han and Fei Huang and Binyuan Hui and Luo Ji and Mei Li and Junyang Lin and Runji Lin and Dayiheng Liu and Gao Liu and Chengqiang Lu and Keming Lu and Jianxin Ma and Rui Men and Xingzhang Ren and Xuancheng Ren and Chuanqi Tan and Sinan Tan and Jianhong Tu and Peng Wang and Shijie Wang and Wei Wang and Shengguang Wu and Benfeng Xu and Jin Xu and An Yang and Hao Yang and Jian Yang and Shusheng Yang and Yang Yao and Bowen Yu and Hongyi Yuan and Zheng Yuan and Jianwei Zhang and Xingxuan Zhang and Yichang Zhang and Zhenru Zhang and Chang Zhou and Jingren Zhou and Xiaohuan Zhou and Tianhang Zhu},
  journal={arXiv preprint arXiv:2309.16609},
  year={2023}
}
```
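Referring back to the Usage note above: if you want to take this base checkpoint as a starting point for your own post-training, a minimal loading sketch with Hugging Face transformers follows. The `torch_dtype` and `device_map` choices are illustrative assumptions, not settings prescribed by this card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# With transformers>=4.37.0, no trust_remote_code flag is needed for Qwen1.5.
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-1.8B")
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen1.5-1.8B",
    torch_dtype="auto",   # illustrative; pick a dtype suited to your hardware
    device_map="auto",    # illustrative; requires the accelerate package
)

# From here, apply your own post-training (e.g., SFT, RLHF, continued
# pretraining) rather than using the base model directly for generation.
```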
Qwen/Qwen1.5-0.5B-Chat
Qwen
"2024-04-30T07:19:52Z"
333,748
74
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "chat", "conversational", "en", "arxiv:2309.16609", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-01-31T07:08:48Z"
---
license: other
license_name: tongyi-qianwen-research
license_link: >-
  https://huggingface.co/Qwen/Qwen1.5-0.5B-Chat/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- chat
---

# Qwen1.5-0.5B-Chat

## Introduction

Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. In comparison with the previously released Qwen, the improvements include:

* 8 model sizes, including 0.5B, 1.8B, 4B, 7B, 14B, 32B and 72B dense models, and an MoE model of 14B with 2.7B activated;
* Significant performance improvement in human preference for chat models;
* Multilingual support of both base and chat models;
* Stable support of 32K context length for models of all sizes;
* No need for `trust_remote_code`.

For more details, please refer to our [blog post](https://qwenlm.github.io/blog/qwen1.5/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5).
<br>

## Model Details

Qwen1.5 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, a mixture of sliding window attention and full attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and code. For this beta version, we have temporarily not included GQA (except for 32B) or the mixture of SWA and full attention.

## Training details

We pretrained the models with a large amount of data, and we post-trained the models with both supervised finetuning and direct preference optimization.

## Requirements

The code of Qwen1.5 has been merged into the latest Hugging Face transformers, and we advise you to install `transformers>=4.37.0`, or you might encounter the following error:
```
KeyError: 'qwen2'
```

## Quickstart

Here is a code snippet with `apply_chat_template` that shows how to load the tokenizer and model and how to generate content.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen1.5-0.5B-Chat",
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-0.5B-Chat")

prompt = "Give me a short introduction to large language model."
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```

For quantized models, we advise you to use the GPTQ, AWQ, and GGUF correspondents, namely `Qwen1.5-0.5B-Chat-GPTQ-Int4`, `Qwen1.5-0.5B-Chat-GPTQ-Int8`, `Qwen1.5-0.5B-Chat-AWQ`, and `Qwen1.5-0.5B-Chat-GGUF`.

## Tips

* If you encounter code switching or other bad cases, we advise you to use our provided hyper-parameters in `generation_config.json`.

## Citation

If you find our work helpful, feel free to give us a cite.
```
@article{qwen,
  title={Qwen Technical Report},
  author={Jinze Bai and Shuai Bai and Yunfei Chu and Zeyu Cui and Kai Dang and Xiaodong Deng and Yang Fan and Wenbin Ge and Yu Han and Fei Huang and Binyuan Hui and Luo Ji and Mei Li and Junyang Lin and Runji Lin and Dayiheng Liu and Gao Liu and Chengqiang Lu and Keming Lu and Jianxin Ma and Rui Men and Xingzhang Ren and Xuancheng Ren and Chuanqi Tan and Sinan Tan and Jianhong Tu and Peng Wang and Shijie Wang and Wei Wang and Shengguang Wu and Benfeng Xu and Jin Xu and An Yang and Hao Yang and Jian Yang and Shusheng Yang and Yang Yao and Bowen Yu and Hongyi Yuan and Zheng Yuan and Jianwei Zhang and Xingxuan Zhang and Yichang Zhang and Zhenru Zhang and Chang Zhou and Jingren Zhou and Xiaohuan Zhou and Tianhang Zhu},
  journal={arXiv preprint arXiv:2309.16609},
  year={2023}
}
```
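As a small extension of the Quickstart above, the generate call can also stream tokens to stdout as they are produced. This variant is a sketch that is not part of the original card; it reuses the `model`, `tokenizer`, and `model_inputs` objects from the Quickstart and relies on the `TextStreamer` utility shipped with transformers.

```python
from transformers import TextStreamer

# Print tokens as they are generated, skipping the echoed prompt and any
# special tokens in the decoded output.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=512,
    streamer=streamer,
)
```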
OrionStarAI/Orion-14B-Base
OrionStarAI
"2024-03-26T09:21:52Z"
332,018
76
transformers
[ "transformers", "pytorch", "orion", "text-generation", "code", "model", "llm", "custom_code", "en", "zh", "ja", "ko", "autotrain_compatible", "region:us" ]
text-generation
"2024-01-16T06:07:42Z"
---
language:
- en
- zh
- ja
- ko
metrics:
- accuracy
pipeline_tag: text-generation
tags:
- code
- model
- llm
---

<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<div align="center">
  <img src="./assets/imgs/orion_start.PNG" alt="logo" width="50%" />
</div>

<div align="center">
<h1>
  Orion-14B
</h1>
</div>

<div align="center">

<div align="center">
     <b>🌐English</b> | <a href="https://huggingface.co/OrionStarAI/Orion-14B-Base/blob/main/README_zh.md" target="_blank">🇨🇳中文</a> | <a href="https://huggingface.co/OrionStarAI/Orion-14B-Base/blob/main/README_ja.md" target="_blank">🇯🇵日本語</a> | <a href="https://huggingface.co/OrionStarAI/Orion-14B-Base/blob/main/README_ko.md" target="_blank">🇰🇷한국어</a>
</div>

<h4 align="center">
    <p>
        🤗 <a href="https://huggingface.co/OrionStarAI" target="_blank">HuggingFace Mainpage</a> | 🤖 <a href="https://modelscope.cn/organization/OrionStarAI" target="_blank">ModelScope Mainpage</a><br>🎬 <a href="https://huggingface.co/spaces/OrionStarAI/Orion-14B-App-Demo" target="_blank">HuggingFace Demo</a> | 🎫 <a href="https://modelscope.cn/studios/OrionStarAI/Orion-14B-App-Demo/summary" target="_blank">ModelScope Demo</a><br>😺 <a href="https://github.com/OrionStarAI/Orion" target="_blank">GitHub</a><br>📖 <a href="https://github.com/OrionStarAI/Orion/blob/master/doc/Orion14B_v3.pdf" target="_blank">Tech Report</a>
    <p>
</h4>

</div>

# Table of Contents

- [📖 Model Introduction](#model-introduction)
- [🔗 Model Download](#model-download)
- [🔖 Model Benchmark](#model-benchmark)
- [📊 Model Inference](#model-inference) [<img src="./assets/imgs/vllm_1.png" alt="vllm" style="margin: 0;display: initial;" height="20" />](#vllm) [<img src="./assets/imgs/llama_cpp_1.png" alt="llamacpp" style="margin: 0;display: initial;" height="20" />](#llama-cpp)
- [📜 Declarations & License](#declarations-license)
- [🥇 Company Introduction](#company-introduction)

<a name="model-introduction"></a><br>
# 1. Model Introduction

- Orion-14B series models are open-source multilingual large language models trained from scratch by OrionStarAI. The base model is trained on a 2.5T multilingual corpus, including Chinese, English, Japanese, Korean, etc., and it exhibits superior performance in these languages. For details, please refer to the [tech report](https://github.com/OrionStarAI/Orion/blob/master/doc/Orion14B_v3.pdf).

- The Orion-14B series models exhibit the following features:
  - Among models at the 20B-parameter scale, the Orion-14B-Base model shows outstanding performance in comprehensive evaluations.
  - Strong multilingual capabilities, significantly outperforming on Japanese and Korean test sets.
  - The fine-tuned models demonstrate strong adaptability, excelling in human-annotated blind tests.
  - The long-chat version supports extremely long texts, performing exceptionally well at a token length of 200k and supporting up to a maximum of 320k.
  - The quantized versions reduce model size by 70% and improve inference speed by 30%, with a performance loss of less than 1%.
<table style="border-collapse: collapse; width: 100%;">
  <tr>
    <td style="border: none; padding: 10px; box-sizing: border-box;">
      <img src="./assets/imgs/opencompass_en.png" alt="opencompass" style="width: 100%; height: auto;">
    </td>
    <td style="border: none; padding: 10px; box-sizing: border-box;">
      <img src="./assets/imgs/model_cap_en.png" alt="modelcap" style="width: 100%; height: auto;">
    </td>
  </tr>
</table>

- The Orion-14B series includes:
  - **Orion-14B-Base:** A multilingual large language foundation model with 14 billion parameters, pretrained on a diverse dataset of 2.5 trillion tokens.
  - **Orion-14B-Chat:** A chat model fine-tuned on a high-quality corpus, aiming to provide an excellent interactive experience for users in the large-model community.
  - **Orion-14B-LongChat:** The long-context version excels at handling extremely lengthy texts, performing exceptionally well at a token length of 200k and supporting up to 320k tokens.
  - **Orion-14B-Chat-RAG:** A chat model fine-tuned on a custom retrieval-augmented generation dataset, achieving superior performance in retrieval-augmented generation tasks.
  - **Orion-14B-Chat-Plugin:** A chat model specifically tailored for plugin and function-calling tasks, ideal for agent-related scenarios where the LLM acts as a plugin and function-call system.
  - **Orion-14B-Base-Int4:** A quantized base model utilizing 4-bit integer weights. It reduces the model size by 70% and increases inference speed by 30% while incurring a performance loss of less than 1%.
  - **Orion-14B-Chat-Int4:** A quantized chat model utilizing 4-bit integer weights.

<a name="model-download"></a><br>
# 2. Model Download

Model release and download links are provided in the table below:

| Model Name | HuggingFace Download Links | ModelScope Download Links |
|-------------------------|-----------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------|
| ⚾Orion-14B-Base | [Orion-14B-Base](https://huggingface.co/OrionStarAI/Orion-14B-Base) | [Orion-14B-Base](https://modelscope.cn/models/OrionStarAI/Orion-14B-Base/summary) |
| 😛Orion-14B-Chat | [Orion-14B-Chat](https://huggingface.co/OrionStarAI/Orion-14B-Chat) | [Orion-14B-Chat](https://modelscope.cn/models/OrionStarAI/Orion-14B-Chat/summary) |
| 📃Orion-14B-LongChat | [Orion-14B-LongChat](https://huggingface.co/OrionStarAI/Orion-14B-LongChat) | [Orion-14B-LongChat](https://modelscope.cn/models/OrionStarAI/Orion-14B-LongChat/summary) |
| 🔎Orion-14B-Chat-RAG | [Orion-14B-Chat-RAG](https://huggingface.co/OrionStarAI/Orion-14B-Chat-RAG) | [Orion-14B-Chat-RAG](https://modelscope.cn/models/OrionStarAI/Orion-14B-Chat-RAG/summary) |
| 🔌Orion-14B-Chat-Plugin | [Orion-14B-Chat-Plugin](https://huggingface.co/OrionStarAI/Orion-14B-Chat-Plugin) | [Orion-14B-Chat-Plugin](https://modelscope.cn/models/OrionStarAI/Orion-14B-Chat-Plugin/summary) |
| 💼Orion-14B-Base-Int4 | [Orion-14B-Base-Int4](https://huggingface.co/OrionStarAI/Orion-14B-Base-Int4) | [Orion-14B-Base-Int4](https://modelscope.cn/models/OrionStarAI/Orion-14B-Base-Int4/summary) |
| 📦Orion-14B-Chat-Int4 | [Orion-14B-Chat-Int4](https://huggingface.co/OrionStarAI/Orion-14B-Chat-Int4) | [Orion-14B-Chat-Int4](https://modelscope.cn/models/OrionStarAI/Orion-14B-Chat-Int4/summary) |

<a name="model-benchmark"></a><br>
# 3. Model Benchmarks

## 3.1. Base Model Orion-14B-Base Benchmarks

### 3.1.1. LLM evaluation results on examination and professional knowledge
| Model | C-Eval | CMMLU | MMLU | AGIEval | Gaokao | BBH |
|--------------------|----------|----------|----------|----------|----------|----------|
| LLaMA2-13B | 41.4 | 38.4 | 55.0 | 30.9 | 18.2 | 45.6 |
| Skywork-13B | 59.1 | 61.4 | 62.7 | 43.6 | 56.1 | 48.3 |
| Baichuan2-13B | 59.0 | 61.3 | 59.5 | 37.4 | 45.6 | 49.0 |
| QWEN-14B | 71.7 | 70.2 | 67.9 | 51.9 | **62.5** | 53.7 |
| InternLM-20B | 58.8 | 59.0 | 62.1 | 44.6 | 45.5 | 52.5 |
| **Orion-14B-Base** | **72.9** | **70.6** | **69.9** | **54.7** | 62.1 | **56.5** |

### 3.1.2. LLM evaluation results on language understanding and common knowledge

| Model | RACE-middle | RACE-high | HellaSwag | PIQA | Lambada | WSC |
|--------------------|----------|----------|----------|----------|----------|----------|
| LLaMA 2-13B | 63.0 | 58.9 | 77.5 | 79.8 | 76.5 | 66.3 |
| Skywork-13B | 87.6 | 84.1 | 73.7 | 78.3 | 71.8 | 66.3 |
| Baichuan 2-13B | 68.9 | 67.2 | 70.8 | 78.1 | 74.1 | 66.3 |
| QWEN-14B | 93.0 | 90.3 | **80.2** | 79.8 | 71.4 | 66.3 |
| InternLM-20B | 86.4 | 83.3 | 78.1 | **80.3** | 71.8 | 68.3 |
| **Orion-14B-Base** | **93.2** | **91.3** | 78.5 | 79.5 | **78.8** | **70.2** |

### 3.1.3. LLM evaluation results on OpenCompass test sets

| Model | Average | Examination | Language | Knowledge | Understanding | Reasoning |
|------------------|----------|----------|----------|----------|----------|----------|
| LLaMA 2-13B | 47.3 | 45.2 | 47.0 | 58.3 | 50.9 | 43.6 |
| Skywork-13B | 53.6 | 61.1 | 51.3 | 52.7 | 64.5 | 45.2 |
| Baichuan 2-13B | 49.4 | 51.8 | 47.5 | 48.9 | 58.1 | 44.2 |
| QWEN-14B | 62.4 | 71.3 | 52.67 | 56.1 | 68.8 | 60.1 |
| InternLM-20B | 59.4 | 62.5 | 55.0 | **60.1** | 67.3 | 54.9 |
| **Orion-14B-Base** | **64.3** | **71.4** | **55.0** | 60.0 | **71.9** | **61.6** |

### 3.1.4. Comparison of LLM performances on Japanese test sets

| Model | **Average** | JCQA | JNLI | MARC | JSQD | JQK | XLS | XWN | MGSM |
|--------------------|----------|----------|----------|----------|----------|----------|----------|----------|----------|
| PLaMo-13B | 52.3 | 56.7 | 42.8 | 95.8 | 70.6 | 71.0 | 8.70 | 70.5 | 2.40 |
| WebLab-10B | 50.7 | 66.6 | 53.7 | 82.1 | 62.9 | 56.2 | 10.0 | 72.0 | 2.40 |
| ELYZA-jp-7B | 48.8 | 71.7 | 25.3 | 86.6 | 70.8 | 64.1 | 2.50 | 62.1 | 7.20 |
| StableLM-jp-7B | 51.1 | 33.4 | 43.3 | **96.7** | 70.6 | 78.1 | 10.7 | 72.8 | 2.80 |
| LLaMA 2-13B | 46.3 | 75.0 | 47.6 | 38.8 | 76.1 | 67.7 | 18.1 | 63.2 | 10.4 |
| Baichuan 2-13B | 57.1 | 73.7 | 31.3 | 91.6 | 80.5 | 63.3 | 18.6 | 72.2 | 25.2 |
| QWEN-14B | 65.8 | 85.9 | 60.7 | 97.0 | 83.3 | 71.8 | 18.8 | 70.6 | 38.0 |
| Yi-34B | 67.1 | 83.8 | 61.2 | 95.2 | **86.1** | 78.5 | **27.2** | 69.2 | 35.2 |
| **Orion-14B-Base** | **69.1** | **88.2** | **75.8** | 94.1 | 75.7 | **85.1** | 17.3 | **78.8** | **38.0** |

### 3.1.5. Comparison of LLM performances on Korean test sets
n = 0 and n = 5 stand for the n-shot prompts used in the evaluation.

|Model | **Average**<br>n=0&nbsp;&nbsp;n=5 | HellaSwag<br>n=0&nbsp;&nbsp;n=5 | COPA<br>n=0&nbsp;&nbsp;n=5 | BoolQ<br>n=0&nbsp;&nbsp;n=5 | SentiNeg<br>n=0&nbsp;&nbsp;n=5|
|------------------|------------------------------|------------------------------|------------------------------|------------------------------|------------------------------|
| KoGPT | 53.0 &nbsp;&nbsp; 70.1 | 55.9 &nbsp;&nbsp; 58.3 | 73.5 &nbsp;&nbsp; 72.9 | 45.1 &nbsp;&nbsp; 59.8 | 37.5 &nbsp;&nbsp; 89.4 |
| Polyglot-ko-13B | 69.6 &nbsp;&nbsp; 73.7 |**59.5** &nbsp;&nbsp; **63.1**|**79.4** &nbsp;&nbsp; **81.1**| 48.2 &nbsp;&nbsp; 60.4 | 91.2 &nbsp;&nbsp; 90.2 |
| LLaMA 2-13B | 46.7 &nbsp;&nbsp; 63.7 | 41.3 &nbsp;&nbsp; 44.0 | 59.3 &nbsp;&nbsp; 63.8 | 34.9 &nbsp;&nbsp; 73.8 | 51.5 &nbsp;&nbsp; 73.4 |
| Baichuan 2-13B | 52.1 &nbsp;&nbsp; 58.7 | 39.2 &nbsp;&nbsp; 39.6 | 60.6 &nbsp;&nbsp; 60.6 | 58.4 &nbsp;&nbsp; 61.5 | 50.3 &nbsp;&nbsp; 72.9 |
| QWEN-14B | 53.8 &nbsp;&nbsp; 73.7 | 45.3 &nbsp;&nbsp; 46.8 | 64.9 &nbsp;&nbsp; 68.9 | 33.4 &nbsp;&nbsp; 83.5 | 71.5 &nbsp;&nbsp; 95.7 |
| Yi-34B | 54.2 &nbsp;&nbsp; 72.1 | 44.6 &nbsp;&nbsp; 44.7 | 58.0 &nbsp;&nbsp; 60.6 | 65.9 &nbsp;&nbsp; 90.2 | 48.3 &nbsp;&nbsp; 92.9 |
|**Orion-14B-Chat**|**74.5** &nbsp;&nbsp; **79.6**| 47.0 &nbsp;&nbsp; 49.6 | 77.7 &nbsp;&nbsp; 79.4 |**81.6** &nbsp;&nbsp; **90.7**|**92.4** &nbsp;&nbsp; **98.7**|

### 3.1.6. Multilingual evaluation

| Model | Train Lang | Japanese | Korean | Chinese | English |
|--------------------|------------|----------|----------|----------|----------|
| PLaMo-13B | En,Jp | 52.3 | * | * | * |
| Weblab-10B | En,Jp | 50.7 | * | * | * |
| ELYZA-jp-7B | En,Jp | 48.8 | * | * | * |
| StableLM-jp-7B | En,Jp | 51.1 | * | * | * |
| KoGPT-6B | En,Ko | * | 70.1 | * | * |
| Polyglot-ko-13B | En,Ko | * | 70.7 | * | * |
| Baichuan2-13B | Multi | 57.1 | 58.7 | 50.8 | 57.1 |
| Qwen-14B | Multi | 65.8 | 73.7 | 64.5 | 65.4 |
| Llama2-13B | Multi | 46.3 | 63.7 | 41.4 | 55.3 |
| Yi-34B | Multi | 67.1 | 72.2 | 58.7 | **68.8** |
| **Orion-14B-Chat** | Multi | **69.1** | **79.5** | **67.9** | 67.3 |

## 3.2. Chat Model Orion-14B-Chat Benchmarks

### 3.2.1. Chat model subjective evaluation on MT-Bench

| Model | First-Turn | Second-Turn | **Average** |
|----------------------|----------|----------|----------|
| Baichuan2-13B-Chat | 7.05 | 6.47 | 6.76 |
| Qwen-14B-Chat | 7.30 | 6.62 | 6.96 |
| Llama2-13B-Chat | 7.10 | 6.20 | 6.65 |
| InternLM-20B-Chat | 7.03 | 5.93 | 6.48 |
| **Orion-14B-Chat** | **7.68** | **7.07** | **7.37** |

\* vLLM is used for inference

### 3.2.2. Chat model subjective evaluation on AlignBench

| Model | Math. | Logi. | Basic. | Chi. | Comp. | Writ. | Role. | Prof. |**Avg.**|
|--------------------|--------|--------|--------|--------|--------|--------|--------|--------|--------|
| Baichuan2-13B-Chat | 3.76 | 4.07 | 6.22 | 6.05 | 7.11 | 6.97 | 6.75 | 6.43 | 5.25 |
| Qwen-14B-Chat |**4.91**|**4.71**|**6.90**| 6.36 | 6.74 | 6.64 | 6.59 | 6.56 |**5.72**|
| Llama2-13B-Chat | 3.05 | 3.79 | 5.43 | 4.40 | 6.76 | 6.63 | 6.99 | 5.65 | 4.70 |
| InternLM-20B-Chat | 3.39 | 3.92 | 5.96 | 5.50 |**7.18**| 6.19 | 6.49 | 6.22 | 4.96 |
| **Orion-14B-Chat** | 4.00 | 4.24 | 6.18 |**6.57**| 7.16 |**7.36**|**7.16**|**6.99**| 5.51 |

\* vLLM is used for inference

## 3.3. LongChat Model Orion-14B-LongChat Benchmarks

### 3.3.1. LongChat evaluation on LongBench
| Model | NarrativeQA | MultiFieldQA-en | MultiFieldQA-zh | DuReader | QMSum | VCSUM | TREC | TriviaQA | LSHT | RepoBench-P |
|--------------------------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|
| GPT-3.5-Turbo-16k | **23.60** | **52.30** | **61.20** | 28.70 | 23.40 | **16.00** | 68.00 | **91.40** | 29.20 | 53.60 |
| LongChat-v1.5-7B-32k | 16.90 | 41.40 | 29.10 | 19.50 | 22.70 | 9.90 | 63.50 | 82.30 | 23.20 | 55.30 |
| Vicuna-v1.5-7B-16k | 19.40 | 38.50 | 43.00 | 19.30 | 22.80 | 15.10 | 71.50 | 86.20 | 28.80 | 43.50 |
| Yi-6B-200K | 14.11 | 36.74 | 22.68 | 14.01 | 20.44 | 8.08 | 72.00 | 86.61 | 38.00 | **63.29** |
| Orion-14B-LongChat | 19.47 | 48.11 | 55.84 | **37.02** | **24.87** | 15.44 | **77.00** | 89.12 | **45.50** | 54.31 |

## 3.4. Chat RAG Model Benchmarks

### 3.4.1. LLM evaluation results on self-built RAG test sets

|Model|Effectiveness of Response (Keyword)|*Effectiveness of Response (subjective evaluation)|Quoting Ability|Fallback Ability|*AutoQA|*Data Extraction|
|---------------------|------|------|------|------|------|------|
| Baichuan2-13B-Chat | 85 | 76 | 1 | 0 | 69 | 51 |
| Qwen-14B-Chat | 79 | 77 | 75 | 47 | 68 | 72 |
| Qwen-72B-Chat(Int4) | 87 | 89 | 90 | 32 | 67 | 76 |
| GPT-4 | 91 | 94 | 96 | 95 | 75 | 86 |
| Orion-14B-Chat-RAG | 86 | 87 | 91 | 97 | 73 | 71 |

\* means manual assessment

## 3.5. Chat Plugin Model Orion-14B-Chat-Plugin Benchmarks

### 3.5.1. LLM evaluation results on self-built plugin test sets

|Model |Intent Recognition with Full Params |Intent Recognition with Missing Params |Non-Plugin Invocation Recognition |
|-----------------------|--------|-----------|--------|
| Baichuan2-13B-Chat | 25 | 0 | 0 |
| Qwen-14B-Chat | 55 | 0 | 50 |
| GPT-4 | **95** | 52.38 | 70 |
| Orion-14B-Chat-Plugin | 92.5 | **60.32** | **90** |

## 3.6. Quantized Model Orion-14B-Base-Int4 Benchmarks

### 3.6.1. Comparison before and after quantization

|Model |Size (GB)|Inference Speed (tokens/s)|C-Eval|CMMLU|MMLU|RACE|HellaSwag|
|-------------------------|-------|-----|------|------|------|------|------|
| OrionStar-14B-Base | 28.0 | 135 | 72.8 | 70.6 | 70.0 | 93.3 | 78.5 |
| OrionStar-14B-Base-Int4 | 8.3 | 178 | 71.8 | 69.8 | 69.2 | 93.1 | 78.0 |

<a name="model-inference"></a><br>
# 4. Model Inference

The model weights, source code, and configuration needed for inference are published on Hugging Face, and the download links are available in the table at the beginning of this document. We demonstrate various inference methods here; the programs will automatically download the necessary resources from Hugging Face.

## 4.1. Python Code

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation.utils import GenerationConfig

# Load the tokenizer and model; trust_remote_code is required for Orion's custom code.
tokenizer = AutoTokenizer.from_pretrained("OrionStarAI/Orion-14B", use_fast=False,
                                          trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("OrionStarAI/Orion-14B", device_map="auto",
                                             torch_dtype=torch.bfloat16, trust_remote_code=True)
model.generation_config = GenerationConfig.from_pretrained("OrionStarAI/Orion-14B")

# Single-turn chat via the model's built-in chat() helper.
messages = [{"role": "user", "content": "Hello, what is your name? "}]
response = model.chat(tokenizer, messages, streaming=False)
print(response)
```

In the above Python code, the model is loaded with `device_map='auto'` to utilize all available GPUs. To restrict the devices, you can use something like `export CUDA_VISIBLE_DEVICES=0,1` (using GPUs 0 and 1).

## 4.2. Command Line Tool
```shell
CUDA_VISIBLE_DEVICES=0 python cli_demo.py
```

This command-line tool is designed for chat scenarios and thus does not support calling the base model.

## 4.3. Direct Script Inference

```shell
# base model
CUDA_VISIBLE_DEVICES=0 python demo/text_generation_base.py --model OrionStarAI/Orion-14B --tokenizer OrionStarAI/Orion-14B --prompt hello

# chat model
CUDA_VISIBLE_DEVICES=0 python demo/text_generation.py --model OrionStarAI/Orion-14B-Chat --tokenizer OrionStarAI/Orion-14B-Chat --prompt hi
```

<a name="vllm"></a><br>
## 4.4. Inference by vllm

- Project URL<br>
  https://github.com/vllm-project/vllm
- Pull Request<br>
  https://github.com/vllm-project/vllm/pull/2539

<a name="llama-cpp"></a><br>
## 4.5. Inference by llama.cpp

- Project URL<br>
  https://github.com/ggerganov/llama.cpp
- Pull Request<br>
  https://github.com/ggerganov/llama.cpp/pull/5118

- How to convert to GGUF model

```shell
python convert-hf-to-gguf.py path/to/Orion-14B-Chat --outfile chat.gguf
```

- How to run generation

```shell
./main --frequency-penalty 0.5 --top-k 5 --top-p 0.9 -m chat.gguf -p "Building a website can be done in 10 simple steps:\nStep 1:" -n 400 -e
```

## 4.6. Example Output

### 4.6.1. Casual Chat

`````
User: Hello
Orion-14B: Hello! How can I help you today? If you have any questions or need assistance, feel free to ask.
User: What functions do you have?
Orion-14B: As an AI language model, I have a wide range of functions and capabilities, including but not limited to:
  Answering general knowledge questions: I can provide information on various topics such as history, science, geography, and more.
  Providing definitions and explanations: I can explain complex concepts and provide definitions for words and phrases.
  Assisting with tasks: I can help with simple tasks like setting reminders, converting units of measurement, and even ordering food from restaurants.
  Generating text: I can write short stories, essays, articles, and other types of content based on prompts given to me.
  Translating languages: I can translate text from one language to another.
  Chatting: I can engage in conversation with users and respond to their queries in a conversational manner.
  Summarizing long texts: I can summarize longer texts like articles, books, and research papers.
  Generating random numbers and names: I can generate random numbers within a specified range and also suggest names based on certain criteria.
  Playing games: I can play simple games like Hangman, Trivia, and more.
  Providing recommendations: I can recommend movies, music, books, and other forms of entertainment based on user preferences.
These are just some of the many functions that I am capable of performing. If you have a specific task in mind, feel free to ask me about it.
User: Tell me a joke.
Orion-14B: Sure, here's a classic one-liner: Why don't scientists trust atoms? Because they make up everything.
`````

### 4.6.2. Japanese & Korean Chat

`````
User:自己を紹介してください
Orion-14B:こんにちは、私の名前はChatMaxで、OrionStarによって開発されたAIアシスタントです。私はユーザーに正確で、客観的で、有用な回答を提供し、彼らが問題を解決し、情報を取得するのを助けることができます。
User:자기소개를 해주세요.
Orion-14B:안녕하세요, 제 이름은 ChatMax입니다. OrionStar에서 개발한 AI 어시스턴트입니다. 저는 사용자들에게 정확하고 객관적이며 유용한 답변을 제공하여 문제를 해결하고 정보를 얻는 데 도움을 줄 수 있습니다.
`````

<a name="declarations-license"></a><br>
# 5. Declarations, License

## 5.1. Declarations

We strongly urge all users not to use the Orion-14B model for any activities that may harm national or social security or violate the law.
Additionally, we request that users not use the Orion-14B model for internet services without a proper security review and filing. We hope all users abide by this principle to ensure that technological development takes place in a regulated and legal environment.

We have done our best to ensure the compliance of the data used in the model training process. However, despite our significant efforts, unforeseen issues may still arise due to the complexity of the model and data. Therefore, if any problems arise due to the use of the Orion-14B open-source model, including but not limited to data security issues, public opinion risks, or any risks and issues arising from the model being misled, abused, disseminated, or improperly utilized, we will not assume any responsibility.

## 5.2. License

Community use of the Orion-14B series models:
- For code, please comply with [Apache License Version 2.0](./LICENSE)<br>
- For the model, please comply with the [【Orion-14B Series】 Models Community License Agreement](./ModelsCommunityLicenseAgreement)

<a name="company-introduction"></a><br>
# 6. Company Introduction

OrionStar is a leading global service robot solutions company, founded in September 2016. OrionStar is dedicated to using artificial intelligence technology to create the next generation of revolutionary robots, allowing people to break free from repetitive physical labor and making human work and life more intelligent and enjoyable. Through technology, OrionStar aims to make society and the world a better place.

OrionStar possesses fully self-developed end-to-end artificial intelligence technologies, such as voice interaction and visual navigation. It integrates product development capabilities and technological application capabilities. Based on the Orion robotic arm platform, it has launched products such as OrionStar AI Robot Greeting, AI Robot Greeting Mini, Lucki, and Coffee Master, and established the open platform OrionOS for Orion robots. Following the philosophy of "Born for Truly Useful Robots", OrionStar empowers more people through AI technology.

**The core strengths of OrionStar lie in its end-to-end AI application capabilities,** including big data preprocessing, large model pretraining, fine-tuning, prompt engineering, and agents. With comprehensive end-to-end model training capabilities, including systematic data processing workflows and the parallel model training capability of hundreds of GPUs, it has been successfully applied in various industry scenarios such as government affairs, cloud services, international e-commerce, and fast-moving consumer goods.

Companies interested in deploying large-scale model applications are welcome to contact us.<br>
**Enquiry Hotline: 400-898-7779**<br>
**E-mail: ai@orionstar.com**<br>
**Discord Link: https://discord.gg/zumjDWgdAs**

<div align="center">
<img src="./assets/imgs/wechat_group.jpg" alt="wechat" width="40%" />
</div>
google/siglip-base-patch16-224
google
"2024-09-26T08:20:18Z"
331,149
25
transformers
[ "transformers", "pytorch", "safetensors", "siglip", "zero-shot-image-classification", "vision", "arxiv:2303.15343", "arxiv:2209.06794", "license:apache-2.0", "endpoints_compatible", "region:us" ]
zero-shot-image-classification
"2023-09-30T18:22:03Z"
--- license: apache-2.0 tags: - vision widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-dog-music.png candidate_labels: playing music, playing sports example_title: Cat & Dog --- # SigLIP (base-sized model) SigLIP model pre-trained on WebLi at resolution 224x224. It was introduced in the paper [Sigmoid Loss for Language Image Pre-Training](https://arxiv.org/abs/2303.15343) by Zhai et al. and first released in [this repository](https://github.com/google-research/big_vision). Disclaimer: The team releasing SigLIP did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description SigLIP is [CLIP](https://huggingface.co/docs/transformers/model_doc/clip), a multimodal model, with a better loss function. The sigmoid loss operates solely on image-text pairs and does not require a global view of the pairwise similarities for normalization. This allows further scaling up the batch size, while also performing better at smaller batch sizes. A TLDR of SigLIP by one of the authors can be found [here](https://twitter.com/giffmana/status/1692641733459267713). ## Intended uses & limitations You can use the raw model for tasks like zero-shot image classification and image-text retrieval. See the [model hub](https://huggingface.co/models?search=google/siglip) to look for other versions on a task that interests you. ### How to use Here is how to use this model to perform zero-shot image classification: ```python from PIL import Image import requests from transformers import AutoProcessor, AutoModel import torch model = AutoModel.from_pretrained("google/siglip-base-patch16-224") processor = AutoProcessor.from_pretrained("google/siglip-base-patch16-224") url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) texts = ["a photo of 2 cats", "a photo of 2 dogs"] inputs = processor(text=texts, images=image, padding="max_length", return_tensors="pt") with torch.no_grad(): outputs = model(**inputs) logits_per_image = outputs.logits_per_image probs = torch.sigmoid(logits_per_image) # these are the probabilities print(f"{probs[0][0]:.1%} that image 0 is '{texts[0]}'") ``` Alternatively, one can leverage the pipeline API which abstracts away the complexity for the user: ```python from transformers import pipeline from PIL import Image import requests # load pipe image_classifier = pipeline(task="zero-shot-image-classification", model="google/siglip-base-patch16-224") # load image url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) # inference outputs = image_classifier(image, candidate_labels=["2 cats", "a plane", "a remote"]) outputs = [{"score": round(output["score"], 4), "label": output["label"] } for output in outputs] print(outputs) ``` For more code examples, we refer to the [documentation](https://huggingface.co/transformers/main/model_doc/siglip.html#). ## Training procedure ### Training data SigLIP is pre-trained on the English image-text pairs of the WebLI dataset [(Chen et al., 2023)](https://arxiv.org/abs/2209.06794). ### Preprocessing Images are resized/rescaled to the same resolution (224x224) and normalized across the RGB channels with mean (0.5, 0.5, 0.5) and standard deviation (0.5, 0.5, 0.5). Texts are tokenized and padded to the same length (64 tokens). ### Compute The model was trained on 16 TPU-v4 chips for three days. 
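For intuition, the image-side preprocessing described above can be approximated with plain torchvision transforms. This is a rough sketch of the stated recipe, not the canonical processor code; in practice you should rely on `AutoProcessor` as shown earlier.

```python
import torch
from torchvision import transforms

# Approximation of the stated recipe: resize to 224x224, rescale pixels to [0, 1],
# then normalize each RGB channel with mean 0.5 and std 0.5.
image_transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
])

# `image` is a PIL image, e.g. the one loaded in the snippets above.
# pixel_values = image_transform(image).unsqueeze(0)  # shape: (1, 3, 224, 224)
```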
## Evaluation results Evaluation of SigLIP compared to CLIP is shown below (taken from the paper). <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/siglip_table.jpeg" alt="drawing" width="600"/> ### BibTeX entry and citation info ```bibtex @misc{zhai2023sigmoid, title={Sigmoid Loss for Language Image Pre-Training}, author={Xiaohua Zhai and Basil Mustafa and Alexander Kolesnikov and Lucas Beyer}, year={2023}, eprint={2303.15343}, archivePrefix={arXiv}, primaryClass={cs.CV} } ```
ncbi/MedCPT-Query-Encoder
ncbi
"2023-12-03T00:45:30Z"
330,901
27
transformers
[ "transformers", "pytorch", "safetensors", "bert", "feature-extraction", "arxiv:2307.00589", "license:other", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
"2023-10-24T22:57:48Z"
---
license: other
license_name: public-domain
license_link: LICENSE
---

# MedCPT Introduction

**MedCPT generates embeddings of biomedical texts that can be used for semantic search (dense retrieval)**. The model contains two encoders:

- [MedCPT Query Encoder](https://huggingface.co/ncbi/MedCPT-Query-Encoder): computes the embeddings of short texts (e.g., questions, search queries, sentences).
- [MedCPT Article Encoder](https://huggingface.co/ncbi/MedCPT-Article-Encoder): computes the embeddings of articles (e.g., PubMed titles & abstracts).

**This repo contains the MedCPT Query Encoder.**

**MedCPT has been pre-trained on an unprecedented scale of 255M query-article pairs from PubMed search logs**, and has been shown to achieve state-of-the-art performance on several zero-shot biomedical IR datasets. In general, there are three use cases:

1. Query-to-article search with both encoders.
2. Query representation for clustering or query-to-query search with the [query encoder](https://huggingface.co/ncbi/MedCPT-Query-Encoder).
3. Article representation for clustering or article-to-article search with the [article encoder](https://huggingface.co/ncbi/MedCPT-Article-Encoder).

For more details, please check out our [paper](https://arxiv.org/abs/2307.00589) (Bioinformatics, 2023). Please note that the released version is slightly different from the version reported in the paper.

# Case 1. Using the MedCPT Query Encoder

```python
import torch
from transformers import AutoTokenizer, AutoModel

model = AutoModel.from_pretrained("ncbi/MedCPT-Query-Encoder")
tokenizer = AutoTokenizer.from_pretrained("ncbi/MedCPT-Query-Encoder")

queries = [
    "diabetes treatment",
    "How to treat diabetes?",
    "A 45-year-old man presents with increased thirst and frequent urination over the past 3 months.",
]

with torch.no_grad():
    # tokenize the queries
    encoded = tokenizer(
        queries,
        truncation=True,
        padding=True,
        return_tensors='pt',
        max_length=64,
    )

    # encode the queries (use the [CLS] last hidden states as the representations)
    embeds = model(**encoded).last_hidden_state[:, 0, :]

    print(embeds)
    print(embeds.size())
```

The output will be:

```bash
tensor([[ 0.0413,  0.0084, -0.0491,  ..., -0.4963, -0.3830, -0.3593],
        [ 0.0801,  0.1193, -0.0905,  ..., -0.5380, -0.5059, -0.2944],
        [-0.3412,  0.1521, -0.0946,  ...,  0.0952,  0.1660, -0.0902]])
torch.Size([3, 768])
```

These embeddings are in the same space as those generated by the MedCPT article encoder.

# Case 2. Semantically searching PubMed with your query

We have provided the embeddings of all PubMed articles generated by the MedCPT article encoder at https://ftp.ncbi.nlm.nih.gov/pub/lu/MedCPT/pubmed_embeddings/. You can simply download these embeddings to search PubMed with your query; a minimal search sketch follows the disclaimer below.

# Acknowledgments

This work was supported by the Intramural Research Programs of the National Institutes of Health, National Library of Medicine.

# Disclaimer

This tool shows the results of research conducted in the Computational Biology Branch, NCBI/NLM. The information produced on this website is not intended for direct diagnostic use or medical decision-making without review and oversight by a clinical professional. Individuals should not change their health behavior solely on the basis of information produced on this website. NIH does not independently verify the validity or utility of the information produced by this tool. If you have questions about the information produced on this website, please see a health care professional. More information about NCBI's disclaimer policy is available.
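Returning to Case 2: once a chunk of article embeddings is on disk, retrieval is an inner product between a query embedding and the article matrix. Below is a minimal sketch; the file name and array shapes are illustrative assumptions, not the actual layout of the FTP dump.

```python
import numpy as np

# Hypothetical file name; see the FTP directory above for the real chunked layout.
article_embeds = np.load("pubmed_embeds_chunk0.npy")  # assumed shape: (num_articles, 768)
query_embed = embeds[0].numpy()                       # a query embedding from Case 1

# MedCPT embeddings are compared with inner-product (dot-product) similarity.
scores = article_embeds @ query_embed                 # shape: (num_articles,)
top10 = np.argsort(-scores)[:10]                      # indices of the 10 best matches
print(top10, scores[top10])
```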
# Citation If you find this repo helpful, please cite MedCPT by: ```bibtext @article{jin2023medcpt, title={MedCPT: Contrastive Pre-trained Transformers with large-scale PubMed search logs for zero-shot biomedical information retrieval}, author={Jin, Qiao and Kim, Won and Chen, Qingyu and Comeau, Donald C and Yeganova, Lana and Wilbur, W John and Lu, Zhiyong}, journal={Bioinformatics}, volume={39}, number={11}, pages={btad651}, year={2023}, publisher={Oxford University Press} } ```
timm/mobilenetv2_100.ra_in1k
timm
"2023-04-27T21:14:13Z"
330,684
2
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2110.00476", "arxiv:1801.04381", "license:apache-2.0", "region:us" ]
image-classification
"2022-12-13T00:00:26Z"
---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---

# Model card for mobilenetv2_100.ra_in1k

A MobileNet-v2 image classification model. Trained on ImageNet-1k in `timm` using the recipe template described below.

Recipe details:
* RandAugment `RA` recipe. Inspired by and evolved from EfficientNet RandAugment recipes. Published as the `B` recipe in [ResNet Strikes Back](https://arxiv.org/abs/2110.00476).
* RMSProp (TF 1.0 behaviour) optimizer, EMA weight averaging
* Step (exponential decay w/ staircase) LR schedule with warmup

## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 3.5
  - GMACs: 0.3
  - Activations (M): 6.7
  - Image size: 224 x 224
- **Papers:**
  - MobileNetV2: Inverted Residuals and Linear Bottlenecks: https://arxiv.org/abs/1801.04381
  - ResNet strikes back: An improved training procedure in timm: https://arxiv.org/abs/2110.00476
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/huggingface/pytorch-image-models

## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('mobilenetv2_100.ra_in1k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```

### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'mobilenetv2_100.ra_in1k',
    pretrained=True,
    features_only=True,
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

for o in output:
    # print shape of each feature map in output
    # e.g.:
    #  torch.Size([1, 16, 112, 112])
    #  torch.Size([1, 24, 56, 56])
    #  torch.Size([1, 32, 28, 28])
    #  torch.Size([1, 96, 14, 14])
    #  torch.Size([1, 320, 7, 7])
    print(o.shape)
```

### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'mobilenetv2_100.ra_in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1280, 7, 7) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison

Explore the dataset and runtime metrics of this model in the timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).

## Citation
```bibtex
@inproceedings{sandler2018mobilenetv2,
  title={Mobilenetv2: Inverted residuals and linear bottlenecks},
  author={Sandler, Mark and Howard, Andrew and Zhu, Menglong and Zhmoginov, Andrey and Chen, Liang-Chieh},
  booktitle={Proceedings of the IEEE conference on computer vision and pattern recognition},
  pages={4510--4520},
  year={2018}
}
```
```bibtex
@misc{rw2019timm,
  author = {Ross Wightman},
  title = {PyTorch Image Models},
  year = {2019},
  publisher = {GitHub},
  journal = {GitHub repository},
  doi = {10.5281/zenodo.4414861},
  howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@inproceedings{wightman2021resnet,
  title={ResNet strikes back: An improved training procedure in timm},
  author={Wightman, Ross and Touvron, Hugo and Jegou, Herve},
  booktitle={NeurIPS 2021 Workshop on ImageNet: Past, Present, and Future}
}
```
nvidia/parakeet-rnnt-0.6b
nvidia
"2024-01-03T01:10:24Z"
330,198
6
nemo
[ "nemo", "automatic-speech-recognition", "speech", "audio", "Transducer", "FastConformer", "Conformer", "pytorch", "NeMo", "hf-asr-leaderboard", "en", "dataset:librispeech_asr", "dataset:fisher_corpus", "dataset:Switchboard-1", "dataset:WSJ-0", "dataset:WSJ-1", "dataset:National-Singapore-Corpus-Part-1", "dataset:National-Singapore-Corpus-Part-6", "dataset:vctk", "dataset:voxpopuli", "dataset:europarl", "dataset:multilingual_librispeech", "dataset:mozilla-foundation/common_voice_8_0", "dataset:MLCommons/peoples_speech", "arxiv:2305.05084", "license:cc-by-4.0", "model-index", "region:us" ]
automatic-speech-recognition
"2023-12-28T15:36:35Z"
--- language: - en library_name: nemo datasets: - librispeech_asr - fisher_corpus - Switchboard-1 - WSJ-0 - WSJ-1 - National-Singapore-Corpus-Part-1 - National-Singapore-Corpus-Part-6 - vctk - voxpopuli - europarl - multilingual_librispeech - mozilla-foundation/common_voice_8_0 - MLCommons/peoples_speech thumbnail: null tags: - automatic-speech-recognition - speech - audio - Transducer - FastConformer - Conformer - pytorch - NeMo - hf-asr-leaderboard license: cc-by-4.0 widget: - example_title: Librispeech sample 1 src: https://cdn-media.huggingface.co/speech_samples/sample1.flac - example_title: Librispeech sample 2 src: https://cdn-media.huggingface.co/speech_samples/sample2.flac model-index: - name: parakeet-rnnt-0.6b results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: AMI (Meetings test) type: edinburghcstr/ami config: ihm split: test args: language: en metrics: - name: Test WER type: wer value: 17.55 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Earnings-22 type: revdotcom/earnings22 split: test args: language: en metrics: - name: Test WER type: wer value: 14.78 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: GigaSpeech type: speechcolab/gigaspeech split: test args: language: en metrics: - name: Test WER type: wer value: 10.07 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: LibriSpeech (clean) type: librispeech_asr config: other split: test args: language: en metrics: - name: Test WER type: wer value: 1.63 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: LibriSpeech (other) type: librispeech_asr config: other split: test args: language: en metrics: - name: Test WER type: wer value: 3.06 - task: type: Automatic Speech Recognition name: automatic-speech-recognition dataset: name: SPGI Speech type: kensho/spgispeech config: test split: test args: language: en metrics: - name: Test WER type: wer value: 3.47 - task: type: Automatic Speech Recognition name: automatic-speech-recognition dataset: name: tedlium-v3 type: LIUM/tedlium config: release1 split: test args: language: en metrics: - name: Test WER type: wer value: 3.86 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Vox Populi type: facebook/voxpopuli config: en split: test args: language: en metrics: - name: Test WER type: wer value: 6.05 - task: type: Automatic Speech Recognition name: automatic-speech-recognition dataset: name: Mozilla Common Voice 9.0 type: mozilla-foundation/common_voice_9_0 config: en split: test args: language: en metrics: - name: Test WER type: wer value: 8.07 metrics: - wer pipeline_tag: automatic-speech-recognition --- # Parakeet RNNT 0.6B (en) <style> img { display: inline; } </style> [![Model architecture](https://img.shields.io/badge/Model_Arch-FastConformer--Transducer-lightgrey#model-badge)](#model-architecture) | [![Model size](https://img.shields.io/badge/Params-0.6B-lightgrey#model-badge)](#model-architecture) | [![Language](https://img.shields.io/badge/Language-en-lightgrey#model-badge)](#datasets) `parakeet-rnnt-0.6b` is an ASR model that transcribes speech in lower case English alphabet. This model is jointly developed by [NVIDIA NeMo](https://github.com/NVIDIA/NeMo) and [Suno.ai](https://www.suno.ai/) teams. It is an XL version of FastConformer Transducer [1] (around 600M parameters) model. 
See the [model architecture](#model-architecture) section and the [NeMo documentation](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#fast-conformer) for complete architecture details.

## NVIDIA NeMo: Training

To train, fine-tune or play with the model you will need to install [NVIDIA NeMo](https://github.com/NVIDIA/NeMo). We recommend you install it after you've installed the latest PyTorch version.

```
pip install nemo_toolkit['all']
```

## How to Use this Model

The model is available for use in the NeMo toolkit [3], and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.

### Automatically instantiate the model

```python
import nemo.collections.asr as nemo_asr
asr_model = nemo_asr.models.EncDecRNNTBPEModel.from_pretrained(model_name="nvidia/parakeet-rnnt-0.6b")
```

### Transcribing using Python

First, let's get a sample:

```
wget https://dldata-public.s3.us-east-2.amazonaws.com/2086-149220-0033.wav
```

Then simply do:

```
asr_model.transcribe(['2086-149220-0033.wav'])
```

### Transcribing many audio files

```shell
python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py pretrained_name="nvidia/parakeet-rnnt-0.6b" audio_dir="<DIRECTORY CONTAINING AUDIO FILES>"
```

### Input

This model accepts 16000 Hz mono-channel audio (wav files) as input.

### Output

This model provides transcribed speech as a string for a given audio sample.

## Model Architecture

FastConformer [1] is an optimized version of the Conformer model with 8x depthwise-separable convolutional downsampling. The model is trained in a multitask setup with a Transducer decoder (RNNT) loss. You may find more information on the details of FastConformer here: [Fast-Conformer Model](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#fast-conformer).

## Training

The NeMo toolkit [3] was used for training the models for several hundred epochs. These models are trained with this [example script](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/asr_transducer/speech_to_text_rnnt_bpe.py) and this [base config](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/conf/fastconformer/fast-conformer_transducer_bpe.yaml).

The tokenizers for these models were built using the text transcripts of the train set with this [script](https://github.com/NVIDIA/NeMo/blob/main/scripts/tokenizers/process_asr_text_tokenizer.py).

### Datasets

The model was trained on 64K hours of English speech collected and prepared by the NVIDIA NeMo and Suno teams. The training dataset consists of a private subset with 40K hours of English speech plus 24K hours from the following public datasets:

- Librispeech: 960 hours of English speech
- Fisher Corpus
- Switchboard-1 Dataset
- WSJ-0 and WSJ-1
- National Speech Corpus (Part 1, Part 6)
- VCTK
- VoxPopuli (EN)
- Europarl-ASR (EN)
- Multilingual Librispeech (MLS EN) - 2,000 hour subset
- Mozilla Common Voice (v7.0)
- People's Speech - 12,000 hour subset

## Performance

The performance of automatic speech recognition models is measured using Word Error Rate (WER). Since this model is trained on multiple domains and a much larger corpus, it generally performs better at transcribing audio across domains.

The following table summarizes the performance of the available models in this collection with the Transducer decoder. Performances of the ASR models are reported in terms of Word Error Rate (WER%) with greedy decoding.
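If you want to reproduce a WER number on your own data, a minimal scoring sketch using the third-party `jiwer` package is shown below (an illustrative assumption; any WER implementation works):

```python
# pip install jiwer
import jiwer

references = ["the cat sat on the mat"]  # ground-truth transcripts
hypotheses = ["the cat sat on a mat"]    # outputs of asr_model.transcribe(...) above

print(f"WER: {jiwer.wer(references, hypotheses):.2%}")
```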
|**Version**|**Tokenizer**|**Vocabulary Size**|**AMI**|**Earnings-22**|**Giga Speech**|**LS test-clean**|**LS test-other**|**SPGI Speech**|**TEDLIUM-v3**|**Vox Populi**|**Common Voice**|
|---------|-----------------------|-----------------|-------|---------------|---------------|-----------------|-----------------|---------------|--------------|--------------|----------------|
| 1.22.0 | SentencePiece Unigram | 1024 | 17.55 | 14.78 | 10.07 | 1.63 | 3.06 | 3.47 | 3.86 | 6.05 | 8.07 |

These are greedy WER numbers without an external LM. More details on evaluation can be found at the [HuggingFace ASR Leaderboard](https://huggingface.co/spaces/hf-audio/open_asr_leaderboard).

## NVIDIA Riva: Deployment

[NVIDIA Riva](https://developer.nvidia.com/riva) is an accelerated speech AI SDK deployable on-prem, in all clouds, multi-cloud, hybrid, on edge, and embedded.

Additionally, Riva provides:

* World-class out-of-the-box accuracy for the most common languages, with model checkpoints trained on proprietary data with hundreds of thousands of GPU-compute hours
* Best-in-class accuracy with run-time word boosting (e.g., brand and product names) and customization of the acoustic model, language model, and inverse text normalization
* Streaming speech recognition, Kubernetes-compatible scaling, and enterprise-grade support

Although this model isn't supported yet by Riva, the [list of supported models is here](https://huggingface.co/models?other=Riva).

Check out the [Riva live demo](https://developer.nvidia.com/riva#demos).

## References

[1] [Fast Conformer with Linearly Scalable Attention for Efficient Speech Recognition](https://arxiv.org/abs/2305.05084)

[2] [Google Sentencepiece Tokenizer](https://github.com/google/sentencepiece)

[3] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)

[4] [Suno.ai](https://suno.ai/)

[5] [HuggingFace ASR Leaderboard](https://huggingface.co/spaces/hf-audio/open_asr_leaderboard)

## License

License to use this model is covered by the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/) license. By downloading the public and release version of the model, you accept the terms and conditions of the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/) license.
fxmarty/tiny-random-GemmaForCausalLM
fxmarty
"2024-02-23T17:06:41Z"
329,722
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-02-23T14:44:32Z"
---
license: mit
---

A tiny random `GemmaForCausalLM`, this one with a custom `config.head_dim`, as the architecture allows (see the 7B model).
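A minimal smoke-test sketch (the weights are random, so outputs are meaningless by design; a bundled tokenizer is assumed):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("fxmarty/tiny-random-GemmaForCausalLM")
tokenizer = AutoTokenizer.from_pretrained("fxmarty/tiny-random-GemmaForCausalLM")

print(model.config.head_dim)  # the custom head dimension mentioned above

inputs = tokenizer("hello", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=5)
print(tokenizer.decode(out[0]))
```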
prajjwal1/bert-mini
prajjwal1
"2021-10-27T18:27:38Z"
328,729
20
transformers
[ "transformers", "pytorch", "BERT", "MNLI", "NLI", "transformer", "pre-training", "en", "arxiv:1908.08962", "arxiv:2110.01518", "license:mit", "endpoints_compatible", "region:us" ]
null
"2022-03-02T23:29:05Z"
---
language:
- en
license:
- mit
tags:
- BERT
- MNLI
- NLI
- transformer
- pre-training
---

The following model is a PyTorch pre-trained model obtained by converting the TensorFlow checkpoint found in the [official Google BERT repository](https://github.com/google-research/bert).

This is one of the smaller pre-trained BERT variants, together with [bert-small](https://huggingface.co/prajjwal1/bert-small) and [bert-medium](https://huggingface.co/prajjwal1/bert-medium). They were introduced in the study `Well-Read Students Learn Better: On the Importance of Pre-training Compact Models` ([arxiv](https://arxiv.org/abs/1908.08962)), and ported to HF for the study `Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics` ([arXiv](https://arxiv.org/abs/2110.01518)). These models are intended to be fine-tuned on a downstream task (a minimal loading sketch is included at the end of this card).

If you use the model, please consider citing both papers:

```
@misc{bhargava2021generalization,
      title={Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics},
      author={Prajjwal Bhargava and Aleksandr Drozd and Anna Rogers},
      year={2021},
      eprint={2110.01518},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}

@article{DBLP:journals/corr/abs-1908-08962,
  author    = {Iulia Turc and Ming{-}Wei Chang and Kenton Lee and Kristina Toutanova},
  title     = {Well-Read Students Learn Better: The Impact of Student Initialization on Knowledge Distillation},
  journal   = {CoRR},
  volume    = {abs/1908.08962},
  year      = {2019},
  url       = {http://arxiv.org/abs/1908.08962},
  eprinttype = {arXiv},
  eprint    = {1908.08962},
  timestamp = {Thu, 29 Aug 2019 16:32:34 +0200},
  biburl    = {https://dblp.org/rec/journals/corr/abs-1908-08962.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```

Config of this model: `prajjwal1/bert-mini` (L=4, H=256) [Model Link](https://huggingface.co/prajjwal1/bert-mini)

Other models to check out:
- `prajjwal1/bert-tiny` (L=2, H=128) [Model Link](https://huggingface.co/prajjwal1/bert-tiny)
- `prajjwal1/bert-small` (L=4, H=512) [Model Link](https://huggingface.co/prajjwal1/bert-small)
- `prajjwal1/bert-medium` (L=8, H=512) [Model Link](https://huggingface.co/prajjwal1/bert-medium)

The original implementation and more info can be found in [this GitHub repository](https://github.com/prajjwal1/generalize_lm_nli).

Twitter: [@prajjwal_1](https://twitter.com/prajjwal_1)
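As a minimal illustration of downstream use, here is a sequence-classification loading sketch (the task and `num_labels` are arbitrary examples, not part of the released checkpoint):

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("prajjwal1/bert-mini")
model = AutoModelForSequenceClassification.from_pretrained(
    "prajjwal1/bert-mini", num_labels=2  # hypothetical binary task
)

inputs = tokenizer("This movie was surprisingly good.", return_tensors="pt")
logits = model(**inputs).logits  # the classification head is untrained: fine-tune before use
```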
reazon-research/reazonspeech-nemo-v2
reazon-research
"2024-02-13T16:32:26Z"
327,768
22
nemo
[ "nemo", "automatic-speech-recognition", "NeMo", "ja", "arxiv:2305.05084", "arxiv:2004.05150", "license:apache-2.0", "region:us" ]
automatic-speech-recognition
"2024-01-30T01:49:11Z"
---
license: apache-2.0
language:
- ja
library_name: nemo
tags:
- automatic-speech-recognition
- NeMo
---

# reazonspeech-nemo-v2

`reazonspeech-nemo-v2` is an automatic speech recognition model trained on the [ReazonSpeech v2.0 corpus](https://huggingface.co/datasets/reazon-research/reazonspeech). This model supports inference on long-form Japanese audio clips up to several hours long.

## Model Architecture

The model features an improved Conformer architecture from [Fast Conformer with Linearly Scalable Attention for Efficient Speech Recognition](https://arxiv.org/abs/2305.05084).

* Subword-based RNN-T model. The total parameter count is 619M.
* The encoder uses [Longformer](https://arxiv.org/abs/2004.05150) attention with a local context size of 256 and a single global token.
* The decoder has a vocabulary of 3000 tokens constructed with the [SentencePiece](https://github.com/google/sentencepiece) unigram tokenizer.

We trained this model for 1 million steps using the AdamW optimizer with a Noam annealing schedule.

## Usage

We recommend using this model through our [reazonspeech](https://github.com/reazon-research/reazonspeech) library.

```
from reazonspeech.nemo.asr import load_model, transcribe, audio_from_path

audio = audio_from_path("speech.wav")
model = load_model()
ret = transcribe(model, audio)
print(ret.text)
```

## License

[Apache License 2.0](https://choosealicense.com/licenses/apache-2.0/)
google/flan-t5-small
google
"2023-10-10T18:01:54Z"
327,494
272
transformers
[ "transformers", "pytorch", "tf", "jax", "safetensors", "t5", "text2text-generation", "en", "fr", "ro", "de", "multilingual", "dataset:svakulenk0/qrecc", "dataset:taskmaster2", "dataset:djaym7/wiki_dialog", "dataset:deepmind/code_contests", "dataset:lambada", "dataset:gsm8k", "dataset:aqua_rat", "dataset:esnli", "dataset:quasc", "dataset:qed", "arxiv:2210.11416", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
"2022-10-21T09:59:24Z"
---
language:
- en
- fr
- ro
- de
- multilingual
tags:
- text2text-generation
widget:
- text: "Translate to German: My name is Arthur"
  example_title: "Translation"
- text: "Please answer to the following question. Who is going to be the next Ballon d'or?"
  example_title: "Question Answering"
- text: "Q: Can Geoffrey Hinton have a conversation with George Washington? Give the rationale before answering."
  example_title: "Logical reasoning"
- text: "Please answer the following question. What is the boiling point of Nitrogen?"
  example_title: "Scientific knowledge"
- text: "Answer the following yes/no question. Can you write a whole Haiku in a single tweet?"
  example_title: "Yes/no question"
- text: "Answer the following yes/no question by reasoning step-by-step. Can you write a whole Haiku in a single tweet?"
  example_title: "Reasoning task"
- text: "Q: ( False or not False or False ) is? A: Let's think step by step"
  example_title: "Boolean Expressions"
- text: "The square root of x is the cube root of y. What is y to the power of 2, if x = 4?"
  example_title: "Math reasoning"
- text: "Premise: At my age you will probably have learnt one lesson. Hypothesis: It's not certain how many lessons you'll learn by your thirties. Does the premise entail the hypothesis?"
  example_title: "Premise and hypothesis"
datasets:
- svakulenk0/qrecc
- taskmaster2
- djaym7/wiki_dialog
- deepmind/code_contests
- lambada
- gsm8k
- aqua_rat
- esnli
- quasc
- qed
license: apache-2.0
---

# Model Card for FLAN-T5 small

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/flan2_architecture.jpg" alt="drawing" width="600"/>

# Table of Contents

0. [TL;DR](#TL;DR)
1. [Model Details](#model-details)
2. [Usage](#usage)
3. [Uses](#uses)
4. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
5. [Training Details](#training-details)
6. [Evaluation](#evaluation)
7. [Environmental Impact](#environmental-impact)
8. [Citation](#citation)
9. [Model Card Authors](#model-card-authors)

# TL;DR

If you already know T5, FLAN-T5 is just better at everything. For the same number of parameters, these models have been fine-tuned on more than 1000 additional tasks while also covering more languages. As mentioned in the first few lines of the abstract:

> Flan-PaLM 540B achieves state-of-the-art performance on several benchmarks, such as 75.2% on five-shot MMLU. We also publicly release Flan-T5 checkpoints, which achieve strong few-shot performance even compared to much larger models, such as PaLM 62B. Overall, instruction finetuning is a general method for improving the performance and usability of pretrained language models.

**Disclaimer**: Content from **this** model card has been written by the Hugging Face team, and parts of it were copied and pasted from the [T5 model card](https://huggingface.co/t5-large).
# Model Details ## Model Description - **Model type:** Language model - **Language(s) (NLP):** English, Spanish, Japanese, Persian, Hindi, French, Chinese, Bengali, Gujarati, German, Telugu, Italian, Arabic, Polish, Tamil, Marathi, Malayalam, Oriya, Panjabi, Portuguese, Urdu, Galician, Hebrew, Korean, Catalan, Thai, Dutch, Indonesian, Vietnamese, Bulgarian, Filipino, Central Khmer, Lao, Turkish, Russian, Croatian, Swedish, Yoruba, Kurdish, Burmese, Malay, Czech, Finnish, Somali, Tagalog, Swahili, Sinhala, Kannada, Zhuang, Igbo, Xhosa, Romanian, Haitian, Estonian, Slovak, Lithuanian, Greek, Nepali, Assamese, Norwegian - **License:** Apache 2.0 - **Related Models:** [All FLAN-T5 Checkpoints](https://huggingface.co/models?search=flan-t5) - **Original Checkpoints:** [All Original FLAN-T5 Checkpoints](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-t5-checkpoints) - **Resources for more information:** - [Research paper](https://arxiv.org/pdf/2210.11416.pdf) - [GitHub Repo](https://github.com/google-research/t5x) - [Hugging Face FLAN-T5 Docs (Similar to T5) ](https://huggingface.co/docs/transformers/model_doc/t5) # Usage Find below some example scripts on how to use the model in `transformers`: ## Using the Pytorch model ### Running the model on a CPU <details> <summary> Click to expand </summary> ```python from transformers import T5Tokenizer, T5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-small") model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-small") input_text = "translate English to German: How old are you?" input_ids = tokenizer(input_text, return_tensors="pt").input_ids outputs = model.generate(input_ids) print(tokenizer.decode(outputs[0])) ``` </details> ### Running the model on a GPU <details> <summary> Click to expand </summary> ```python # pip install accelerate from transformers import T5Tokenizer, T5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-small") model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-small", device_map="auto") input_text = "translate English to German: How old are you?" input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda") outputs = model.generate(input_ids) print(tokenizer.decode(outputs[0])) ``` </details> ### Running the model on a GPU using different precisions #### FP16 <details> <summary> Click to expand </summary> ```python # pip install accelerate import torch from transformers import T5Tokenizer, T5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-small") model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-small", device_map="auto", torch_dtype=torch.float16) input_text = "translate English to German: How old are you?" input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda") outputs = model.generate(input_ids) print(tokenizer.decode(outputs[0])) ``` </details> #### INT8 <details> <summary> Click to expand </summary> ```python # pip install bitsandbytes accelerate from transformers import T5Tokenizer, T5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-small") model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-small", device_map="auto", load_in_8bit=True) input_text = "translate English to German: How old are you?" 
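# note (added): load_in_8bit=True stores the linear-layer weights in int8 via
# bitsandbytes and dequantizes them on the fly, reducing memory use; the input
# below is tokenized as usual and moved to the GPU where the model weights live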
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")

outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```

</details>

# Uses

## Direct Use and Downstream Use

The authors write in [the original paper's model card](https://arxiv.org/pdf/2210.11416.pdf) that:

> The primary use is research on language models, including: research on zero-shot NLP tasks and in-context few-shot learning NLP tasks, such as reasoning, and question answering; advancing fairness and safety research, and understanding limitations of current large language models

See the [research paper](https://arxiv.org/pdf/2210.11416.pdf) for further details.

## Out-of-Scope Use

More information needed.

# Bias, Risks, and Limitations

The information in this section is copied from the model's [official model card](https://arxiv.org/pdf/2210.11416.pdf):

> Language models, including Flan-T5, can potentially be used for language generation in a harmful way, according to Rae et al. (2021). Flan-T5 should not be used directly in any application, without a prior assessment of safety and fairness concerns specific to the application.

## Ethical considerations and risks

> Flan-T5 is fine-tuned on a large corpus of text data that was not filtered for explicit content or assessed for existing biases. As a result the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data.

## Known Limitations

> Flan-T5 has not been tested in real world applications.

## Sensitive Use:

> Flan-T5 should not be applied for any unacceptable use cases, e.g., generation of abusive speech.

# Training Details

## Training Data

The model was trained on a mixture of tasks, including the tasks described in the table below (from the original paper, figure 2):

![table.png](https://s3.amazonaws.com/moonup/production/uploads/1666363265279-62441d1d9fdefb55a0b7d12c.png)

## Training Procedure

According to the model card from the [original paper](https://arxiv.org/pdf/2210.11416.pdf):

> These models are based on pretrained T5 (Raffel et al., 2020) and fine-tuned with instructions for better zero-shot and few-shot performance. There is one fine-tuned Flan model per T5 model size.

The model has been trained on TPU v3 or TPU v4 pods, using the [`t5x`](https://github.com/google-research/t5x) codebase together with [`jax`](https://github.com/google/jax).

# Evaluation

## Testing Data, Factors & Metrics

The authors evaluated the model on various tasks covering several languages (1836 in total). See the table below for some quantitative evaluation:

![image.png](https://s3.amazonaws.com/moonup/production/uploads/1668072995230-62441d1d9fdefb55a0b7d12c.png)

For full details, please check the [research paper](https://arxiv.org/pdf/2210.11416.pdf).

## Results

For full results for FLAN-T5-Small, see the [research paper](https://arxiv.org/pdf/2210.11416.pdf), Table 3.

# Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** Google Cloud TPU Pods - TPU v3 or TPU v4 | Number of chips ≥ 4.
- **Hours used:** More information needed - **Cloud Provider:** GCP - **Compute Region:** More information needed - **Carbon Emitted:** More information needed # Citation **BibTeX:** ```bibtex @misc{https://doi.org/10.48550/arxiv.2210.11416, doi = {10.48550/ARXIV.2210.11416}, url = {https://arxiv.org/abs/2210.11416}, author = {Chung, Hyung Won and Hou, Le and Longpre, Shayne and Zoph, Barret and Tay, Yi and Fedus, William and Li, Eric and Wang, Xuezhi and Dehghani, Mostafa and Brahma, Siddhartha and Webson, Albert and Gu, Shixiang Shane and Dai, Zhuyun and Suzgun, Mirac and Chen, Xinyun and Chowdhery, Aakanksha and Narang, Sharan and Mishra, Gaurav and Yu, Adams and Zhao, Vincent and Huang, Yanping and Dai, Andrew and Yu, Hongkun and Petrov, Slav and Chi, Ed H. and Dean, Jeff and Devlin, Jacob and Roberts, Adam and Zhou, Denny and Le, Quoc V. and Wei, Jason}, keywords = {Machine Learning (cs.LG), Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Scaling Instruction-Finetuned Language Models}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
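A note on the INT8 example above: newer `transformers` releases configure 8-bit loading through `BitsAndBytesConfig` rather than the bare `load_in_8bit` flag. A minimal sketch, assuming `transformers>=4.30` and `bitsandbytes` are installed:

```python
# Hedged sketch: equivalent INT8 loading via BitsAndBytesConfig (newer transformers API)
from transformers import BitsAndBytesConfig, T5ForConditionalGeneration, T5Tokenizer

quantization_config = BitsAndBytesConfig(load_in_8bit=True)

tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-small")
model = T5ForConditionalGeneration.from_pretrained(
    "google/flan-t5-small",
    device_map="auto",
    quantization_config=quantization_config,  # replaces load_in_8bit=True
)
```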
openai/whisper-medium.en
openai
"2024-01-22T17:55:36Z"
327,337
47
transformers
[ "transformers", "pytorch", "tf", "jax", "safetensors", "whisper", "automatic-speech-recognition", "audio", "hf-asr-leaderboard", "en", "arxiv:2212.04356", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2022-09-26T07:02:02Z"
--- language: - en tags: - audio - automatic-speech-recognition - hf-asr-leaderboard widget: - example_title: Librispeech sample 1 src: https://cdn-media.huggingface.co/speech_samples/sample1.flac - example_title: Librispeech sample 2 src: https://cdn-media.huggingface.co/speech_samples/sample2.flac model-index: - name: whisper-medium.en results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: LibriSpeech (clean) type: librispeech_asr config: clean split: test args: language: en metrics: - name: Test WER type: wer value: 4.120542365210176 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: LibriSpeech (other) type: librispeech_asr config: other split: test args: language: en metrics: - name: Test WER type: wer value: 7.431640255663553 pipeline_tag: automatic-speech-recognition license: apache-2.0 --- # Whisper Whisper is a pre-trained model for automatic speech recognition (ASR) and speech translation. Trained on 680k hours of labelled data, Whisper models demonstrate a strong ability to generalise to many datasets and domains **without** the need for fine-tuning. Whisper was proposed in the paper [Robust Speech Recognition via Large-Scale Weak Supervision](https://arxiv.org/abs/2212.04356) by Alec Radford et al. from OpenAI. The original code repository can be found [here](https://github.com/openai/whisper). **Disclaimer**: Content for this model card has partly been written by the Hugging Face team, and parts of it were copied and pasted from the original model card. ## Model details Whisper is a Transformer based encoder-decoder model, also referred to as a _sequence-to-sequence_ model. It was trained on 680k hours of labelled speech data annotated using large-scale weak supervision. The models were trained on either English-only data or multilingual data. The English-only models were trained on the task of speech recognition. The multilingual models were trained on both speech recognition and speech translation. For speech recognition, the model predicts transcriptions in the *same* language as the audio. For speech translation, the model predicts transcriptions to a *different* language to the audio. Whisper checkpoints come in five configurations of varying model sizes. The smallest four are trained on either English-only or multilingual data. The largest checkpoints are multilingual only. All ten of the pre-trained checkpoints are available on the [Hugging Face Hub](https://huggingface.co/models?search=openai/whisper). 
The checkpoints are summarised in the following table with links to the models on the Hub: | Size | Parameters | English-only | Multilingual | |----------|------------|------------------------------------------------------|-----------------------------------------------------| | tiny | 39 M | [✓](https://huggingface.co/openai/whisper-tiny.en) | [✓](https://huggingface.co/openai/whisper-tiny) | | base | 74 M | [✓](https://huggingface.co/openai/whisper-base.en) | [✓](https://huggingface.co/openai/whisper-base) | | small | 244 M | [✓](https://huggingface.co/openai/whisper-small.en) | [✓](https://huggingface.co/openai/whisper-small) | | medium | 769 M | [✓](https://huggingface.co/openai/whisper-medium.en) | [✓](https://huggingface.co/openai/whisper-medium) | | large | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large) | | large-v2 | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large-v2) | # Usage This checkpoint is an *English-only* model, meaning it can be used for English speech recognition. Multilingual speech recognition or speech translation is possible through use of a multilingual checkpoint. To transcribe audio samples, the model has to be used alongside a [`WhisperProcessor`](https://huggingface.co/docs/transformers/model_doc/whisper#transformers.WhisperProcessor). The `WhisperProcessor` is used to: 1. Pre-process the audio inputs (converting them to log-Mel spectrograms for the model) 2. Post-process the model outputs (converting them from tokens to text) ## Transcription ```python >>> from transformers import WhisperProcessor, WhisperForConditionalGeneration >>> from datasets import load_dataset >>> # load model and processor >>> processor = WhisperProcessor.from_pretrained("openai/whisper-medium.en") >>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-medium.en") >>> # load dummy dataset and read audio files >>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") >>> sample = ds[0]["audio"] >>> input_features = processor(sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt").input_features >>> # generate token ids >>> predicted_ids = model.generate(input_features) >>> # decode token ids to text >>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=False) ['<|startoftranscript|><|notimestamps|> Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel.<|endoftext|>'] >>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True) [' Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.'] ``` The context tokens can be removed from the start of the transcription by setting `skip_special_tokens=True`. 
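For audio that is not already packaged in a `datasets` object, the same processor/model pair can be used on a raw waveform. A minimal sketch, assuming `librosa` is installed and `"audio.wav"` is a hypothetical local file (Whisper expects 16 kHz input):

```python
import librosa
from transformers import WhisperProcessor, WhisperForConditionalGeneration

processor = WhisperProcessor.from_pretrained("openai/whisper-medium.en")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-medium.en")

# Load and resample the file to the 16 kHz rate the feature extractor expects
speech, _ = librosa.load("audio.wav", sr=16000)
input_features = processor(speech, sampling_rate=16000, return_tensors="pt").input_features

predicted_ids = model.generate(input_features)
print(processor.batch_decode(predicted_ids, skip_special_tokens=True)[0])
```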
## Evaluation

This code snippet shows how to evaluate Whisper medium.en on [LibriSpeech test-clean](https://huggingface.co/datasets/librispeech_asr):

```python
>>> from datasets import load_dataset
>>> from transformers import WhisperForConditionalGeneration, WhisperProcessor
>>> import torch
>>> from evaluate import load

>>> librispeech_test_clean = load_dataset("librispeech_asr", "clean", split="test")

>>> processor = WhisperProcessor.from_pretrained("openai/whisper-medium.en")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-medium.en").to("cuda")

>>> def map_to_pred(batch):
>>>     audio = batch["audio"]
>>>     input_features = processor(audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt").input_features
>>>     batch["reference"] = processor.tokenizer._normalize(batch['text'])
>>>
>>>     with torch.no_grad():
>>>         predicted_ids = model.generate(input_features.to("cuda"))[0]
>>>     transcription = processor.decode(predicted_ids)
>>>     batch["prediction"] = processor.tokenizer._normalize(transcription)
>>>     return batch

>>> result = librispeech_test_clean.map(map_to_pred)

>>> wer = load("wer")
>>> print(100 * wer.compute(references=result["reference"], predictions=result["prediction"]))
3.0154449620004904
```

## Long-Form Transcription

The Whisper model is intrinsically designed to work on audio samples of up to 30s in duration. However, by using a chunking algorithm, it can be used to transcribe audio samples of arbitrary length. This is possible with the Transformers [`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline) method. Chunking is enabled by setting `chunk_length_s=30` when instantiating the pipeline. With chunking enabled, the pipeline can be run with batched inference. It can also be extended to predict sequence-level timestamps by passing `return_timestamps=True`:

```python
>>> import torch
>>> from transformers import pipeline
>>> from datasets import load_dataset

>>> device = "cuda:0" if torch.cuda.is_available() else "cpu"

>>> pipe = pipeline(
>>>     "automatic-speech-recognition",
>>>     model="openai/whisper-medium.en",
>>>     chunk_length_s=30,
>>>     device=device,
>>> )

>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> sample = ds[0]["audio"]

>>> prediction = pipe(sample.copy(), batch_size=8)["text"]
" Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel."

>>> # we can also return timestamps for the predictions
>>> prediction = pipe(sample.copy(), batch_size=8, return_timestamps=True)["chunks"]
[{'text': ' Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.',
  'timestamp': (0.0, 5.44)}]
```

Refer to the blog post [ASR Chunking](https://huggingface.co/blog/asr-chunking) for more details on the chunking algorithm.

## Fine-Tuning

The pre-trained Whisper model demonstrates a strong ability to generalise to different datasets and domains. However, its predictive capabilities can be improved further for certain languages and tasks through *fine-tuning*. The blog post [Fine-Tune Whisper with 🤗 Transformers](https://huggingface.co/blog/fine-tune-whisper) provides a step-by-step guide to fine-tuning the Whisper model with as little as 5 hours of labelled data.

### Evaluated Use

The primary intended users of these models are AI researchers studying robustness, generalization, capabilities, biases, and constraints of the current model.
However, Whisper is also potentially quite useful as an ASR solution for developers, especially for English speech recognition. We recognize that once models are released, it is impossible to restrict access to only “intended” uses or to draw reasonable guidelines around what is or is not research.

The models are primarily trained and evaluated on ASR and speech translation to English tasks. They show strong ASR results in ~10 languages. They may exhibit additional capabilities, particularly if fine-tuned on certain tasks like voice activity detection, speaker classification, or speaker diarization, but have not been robustly evaluated in these areas. We strongly recommend that users perform robust evaluations of the models in a particular context and domain before deploying them.

In particular, we caution against using Whisper models to transcribe recordings of individuals taken without their consent or purporting to use these models for any kind of subjective classification. We recommend against use in high-risk domains like decision-making contexts, where flaws in accuracy can lead to pronounced flaws in outcomes. The models are intended to transcribe and translate speech; using them for classification is not only unevaluated but also inappropriate, particularly for inferring human attributes.

## Training Data

The models are trained on 680,000 hours of audio and the corresponding transcripts collected from the internet. 65% of this data (or 438,000 hours) represents English-language audio and matched English transcripts, roughly 18% (or 126,000 hours) represents non-English audio and English transcripts, while the final 17% (or 117,000 hours) represents non-English audio and the corresponding transcript. This non-English data represents 98 different languages.

As discussed in [the accompanying paper](https://cdn.openai.com/papers/whisper.pdf), we see that performance on transcription in a given language is directly correlated with the amount of training data we employ in that language.

## Performance and Limitations

Our studies show that, over many existing ASR systems, the models exhibit improved robustness to accents, background noise, and technical language, as well as zero-shot translation from multiple languages into English; and that accuracy on speech recognition and translation is near the state-of-the-art level.

However, because the models are trained in a weakly supervised manner using large-scale noisy data, the predictions may include texts that are not actually spoken in the audio input (i.e., hallucination). We hypothesize that this happens because, given their general knowledge of language, the models combine trying to predict the next word in audio with trying to transcribe the audio itself.

Our models perform unevenly across languages, and we observe lower accuracy on low-resource and/or low-discoverability languages or languages where we have less training data. The models also exhibit disparate performance on different accents and dialects of particular languages, which may include higher word error rates across speakers of different genders, races, ages, or other demographic criteria. Our full evaluation results are presented in [the paper accompanying this release](https://cdn.openai.com/papers/whisper.pdf).

In addition, the sequence-to-sequence architecture of the model makes it prone to generating repetitive texts, which can be mitigated to some degree by beam search and temperature scheduling but not perfectly.
Further analysis of these limitations is provided in [the paper](https://cdn.openai.com/papers/whisper.pdf). It is likely that this behavior and hallucinations will be worse in lower-resource and/or lower-discoverability languages.

## Broader Implications

We anticipate that Whisper models’ transcription capabilities may be used for improving accessibility tools. While Whisper models cannot be used for real-time transcription out of the box, their speed and size suggest that others may be able to build applications on top of them that allow for near-real-time speech recognition and translation. The real value of beneficial applications built on top of Whisper models suggests that the disparate performance of these models may have real economic implications.

There are also potential dual use concerns that come with releasing Whisper. While we hope the technology will be used primarily for beneficial purposes, making ASR technology more accessible could enable more actors to build capable surveillance technologies or scale up existing surveillance efforts, as the speed and accuracy allow for affordable automatic transcription and translation of large volumes of audio communication. Moreover, these models may have some capabilities to recognize specific individuals out of the box, which in turn presents safety concerns related both to dual use and disparate performance. In practice, we expect that the cost of transcription is not the limiting factor of scaling up surveillance projects.

### BibTeX entry and citation info

```bibtex
@misc{radford2022whisper,
  doi = {10.48550/ARXIV.2212.04356},
  url = {https://arxiv.org/abs/2212.04356},
  author = {Radford, Alec and Kim, Jong Wook and Xu, Tao and Brockman, Greg and McLeavey, Christine and Sutskever, Ilya},
  title = {Robust Speech Recognition via Large-Scale Weak Supervision},
  publisher = {arXiv},
  year = {2022},
  copyright = {arXiv.org perpetual, non-exclusive license}
}
```
fxmarty/tiny-dummy-qwen2
fxmarty
"2024-03-20T07:18:24Z"
326,889
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-03-19T11:07:14Z"
--- license: mit ---
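The card itself contains only the license. Judging by its name and size, this checkpoint appears to be a tiny Qwen2 model intended for testing and CI rather than real generation. A minimal smoke-test sketch, assuming `transformers>=4.37` (the first release with Qwen2 support):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("fxmarty/tiny-dummy-qwen2")
model = AutoModelForCausalLM.from_pretrained("fxmarty/tiny-dummy-qwen2")

inputs = tokenizer("Hello", return_tensors="pt")
# Output quality is irrelevant for a dummy model; this just exercises the pipeline
outputs = model.generate(**inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0]))
```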
kykim/bert-kor-base
kykim
"2021-05-19T21:17:13Z"
326,314
26
transformers
[ "transformers", "pytorch", "tf", "jax", "bert", "fill-mask", "ko", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2022-03-02T23:29:05Z"
---
language: ko
---

# Bert base model for Korean

* Trained on a 70 GB Korean text dataset with a 42,000-token lower-cased subword vocabulary
* Check the model performance and other Korean language models on [GitHub](https://github.com/kiyoungkim1/LM-kor)

```python
from transformers import BertTokenizerFast, BertModel

tokenizer_bert = BertTokenizerFast.from_pretrained("kykim/bert-kor-base")
model_bert = BertModel.from_pretrained("kykim/bert-kor-base")
```
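Since the pipeline tag for this checkpoint is fill-mask, a minimal masked-token sketch may be clearer than loading the bare encoder. The example sentence is hypothetical; it assumes the tokenizer's standard `[MASK]` token:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="kykim/bert-kor-base")
# "The capital of South Korea is [MASK]." -- prints top predictions with scores
print(fill_mask("대한민국의 수도는 [MASK]이다."))
```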
TheBloke/Mistral-7B-Instruct-v0.1-AWQ
TheBloke
"2023-11-09T18:17:58Z"
325,697
37
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "finetuned", "base_model:mistralai/Mistral-7B-Instruct-v0.1", "base_model:quantized:mistralai/Mistral-7B-Instruct-v0.1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "4-bit", "awq", "region:us" ]
text-generation
"2023-09-27T19:29:11Z"
---
base_model: mistralai/Mistral-7B-Instruct-v0.1
inference: false
license: apache-2.0
model_creator: Mistral AI
model_name: Mistral 7B Instruct v0.1
model_type: mistral
pipeline_tag: text-generation
prompt_template: '<s>[INST] {prompt} [/INST] '
quantized_by: TheBloke
tags:
- finetuned
---

<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# Mistral 7B Instruct v0.1 - AWQ
- Model creator: [Mistral AI](https://huggingface.co/mistralai)
- Original model: [Mistral 7B Instruct v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)

<!-- description start -->
## Description

This repo contains AWQ model files for [Mistral AI's Mistral 7B Instruct v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1).

### About AWQ

AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference.

### Mistral AWQs

These are experimental first AWQ model files for the brand-new Mistral architecture. As of September 29th 2023, they are only supported by AutoAWQ (version 0.1.1+).
<!-- description end -->

<!-- repositories-available start -->
## Repositories available

* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF)
* [Mistral AI's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)
<!-- repositories-available end -->

<!-- prompt-template start -->
## Prompt template: Mistral

```
<s>[INST] {prompt} [/INST]
```
<!-- prompt-template end -->

<!-- README_AWQ.md-provided-files start -->
## Provided files, and AWQ parameters

For my first release of AWQ models, I am releasing 128g models only. I will consider adding 32g as well if there is interest, and once I have done perplexity and evaluation comparisons, but at this time 32g models are still not fully tested with AutoAWQ and vLLM.

Models are released as sharded safetensors files.
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-AWQ/tree/main) | 4 | 128 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 4.15 GB |

<!-- README_AWQ.md-provided-files end -->

<!-- README_AWQ.md-use-from-python start -->
## How to use this AWQ model from Python code

### Install the necessary packages

Requires:

- Transformers from [commit 72958fcd3c98a7afdc61f953aa58c544ebda2f79](https://github.com/huggingface/transformers/commit/72958fcd3c98a7afdc61f953aa58c544ebda2f79)
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) from [commit 1c5ccc791fa2cb0697db3b4070df1813f1736208](https://github.com/casper-hansen/AutoAWQ/commit/1c5ccc791fa2cb0697db3b4070df1813f1736208).

```shell
pip3 install git+https://github.com/huggingface/transformers.git@72958fcd3c98a7afdc61f953aa58c544ebda2f79
pip3 install git+https://github.com/casper-hansen/AutoAWQ.git@1c5ccc791fa2cb0697db3b4070df1813f1736208
```

### You can then try the following example code

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

# Use this repo's quantized Instruct model (the original example pointed at the
# base-model AWQ repo by mistake)
model_name_or_path = "TheBloke/Mistral-7B-Instruct-v0.1-AWQ"

# Load model
model = AutoAWQForCausalLM.from_quantized(model_name_or_path, fuse_layers=True,
                                          trust_remote_code=False, safetensors=True)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=False)

prompt = "Tell me about AI"
# Apply the Mistral prompt template described above
prompt_template=f'''<s>[INST] {prompt} [/INST]
'''

print("\n\n*** Generate:")

tokens = tokenizer(
    prompt_template,
    return_tensors='pt'
).input_ids.cuda()

# Generate output
generation_output = model.generate(
    tokens,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    max_new_tokens=512
)

print("Output: ", tokenizer.decode(generation_output[0]))

"""
# Inference should be possible with transformers pipeline as well in future
# But currently this is not yet supported by AutoAWQ (correct as of September 25th 2023)
from transformers import pipeline

print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    repetition_penalty=1.1
)

print(pipe(prompt_template)[0]['generated_text'])
"""
```
<!-- README_AWQ.md-use-from-python end -->

<!-- README_AWQ.md-compatibility start -->
## Compatibility

The files provided are tested to work with:

- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ)
<!-- README_AWQ.md-compatibility end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov

Thank you to all my generous patrons and donaters!

And thank you again to a16z for their generous grant.

<!-- footer end -->

# Original model card: Mistral AI's Mistral 7B Instruct v0.1

# Model Card for Mistral-7B-Instruct-v0.1

The Mistral-7B-Instruct-v0.1 Large Language Model (LLM) is an instruct fine-tuned version of the [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) generative text model using a variety of publicly available conversation datasets.

For full details of this model please read our [release blog post](https://mistral.ai/news/announcing-mistral-7b/).

## Instruction format

In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with a begin-of-sentence token id. The next instructions should not. The assistant generation will be ended by the end-of-sentence token id.

E.g.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda" # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")

# Parenthesised so the adjacent string literals actually concatenate into one prompt
text = (
    "<s>[INST] What is your favourite condiment? [/INST]"
    "Well, I'm quite partial to a good squeeze of fresh lemon juice. "
    "It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> "
    "[INST] Do you have mayonnaise recipes? [/INST]"
)

encodeds = tokenizer(text, return_tensors="pt", add_special_tokens=False)

model_inputs = encodeds.to(device)
model.to(device)

generated_ids = model.generate(**model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```

## Model Architecture

This instruction model is based on Mistral-7B-v0.1, a transformer model with the following architecture choices:
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer

## The Mistral AI Team

Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
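As an alternative to hand-assembling the `[INST]` string above, newer `transformers` releases attach a chat template to the tokenizer. A minimal sketch, assuming a `transformers` version with `apply_chat_template` support (>= 4.34):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")

messages = [
    {"role": "user", "content": "What is your favourite condiment?"},
    {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice."},
    {"role": "user", "content": "Do you have mayonnaise recipes?"},
]

# Renders the [INST] ... [/INST] format described above, including special tokens
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt")
```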
meta-llama/Llama-3.2-3B
meta-llama
"2024-10-24T15:07:40Z"
323,965
309
transformers
[ "transformers", "safetensors", "llama", "text-generation", "facebook", "meta", "pytorch", "llama-3", "en", "de", "fr", "it", "pt", "hi", "es", "th", "arxiv:2204.05149", "arxiv:2405.16406", "license:llama3.2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-09-18T15:23:48Z"
--- language: - en - de - fr - it - pt - hi - es - th library_name: transformers pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-3 license: llama3.2 extra_gated_prompt: >- ### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT Llama 3.2 Version Release Date: September 25, 2024 “Agreement” means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein. “Documentation” means the specifications, manuals and documentation accompanying Llama 3.2 distributed by Meta at https://llama.meta.com/doc/overview. “Licensee” or “you” means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf. “Llama 3.2” means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://www.llama.com/llama-downloads. “Llama Materials” means, collectively, Meta’s proprietary Llama 3.2 and Documentation (and any portion thereof) made available under this Agreement. “Meta” or “we” means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland). By clicking “I Accept” below or by using or distributing any portion or element of the Llama Materials, you agree to be bound by this Agreement. 1. License Rights and Redistribution. a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials. b. Redistribution and Use. i. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service (including another AI model) that contains any of them, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display “Built with Llama” on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials or any outputs or results of the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include “Llama” at the beginning of any such AI model name. ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you. iii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a “Notice” text file distributed as a part of such copies: “Llama 3.2 is licensed under the Llama 3.2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.” iv. 
Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://www.llama.com/llama3_2/use-policy), which is hereby incorporated by reference into this Agreement. 2. Additional Commercial Terms. If, on the Llama 3.2 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights. 3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS. 4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING. 5. Intellectual Property. a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use “Llama” (the “Mark”) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising out of your use of the Mark will inure to the benefit of Meta. b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications. c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials. 6. Term and Termination. 
The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement. 7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement. ### Llama 3.2 Acceptable Use Policy Meta is committed to promoting safe and fair use of its tools and features, including Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use Policy (“**Policy**”). The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy). #### Prohibited Uses We want everyone to use Llama 3.2 safely and responsibly. You agree you will not use, or allow others to use, Llama 3.2 to: 1. Violate the law or others’ rights, including to: 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as: 1. Violence or terrorism 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material 3. Human trafficking, exploitation, and sexual violence 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials. 5. Sexual solicitation 6. Any other criminal activity 1. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals 2. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services 3. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices 4. Collect, process, disclose, generate, or infer private or sensitive information about individuals, including information about individuals’ identity, health, or demographic information, unless you have obtained the right to do so in accordance with applicable law 5. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials 6. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system 7. Engage in any action, or facilitate any action, to intentionally circumvent or remove usage restrictions or other safety measures, or to enable functionality disabled by Meta  2. 
Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 3.2 related to the following: 8. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989 or the Chemical Weapons Convention Implementation Act of 1997 9. Guns and illegal weapons (including weapon development) 10. Illegal drugs and regulated/controlled substances 11. Operation of critical infrastructure, transportation technologies, or heavy machinery 12. Self-harm or harm to others, including suicide, cutting, and eating disorders 13. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual 3. Intentionally deceive or mislead others, including use of Llama 3.2 related to the following: 14. Generating, promoting, or furthering fraud or the creation or promotion of disinformation 15. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content 16. Generating, promoting, or further distributing spam 17. Impersonating another individual without consent, authorization, or legal right 18. Representing that the use of Llama 3.2 or outputs are human-generated 19. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement  4. Fail to appropriately disclose to end users any known dangers of your AI system 5. Interact with third party tools, models, or software designed to generate unlawful content or engage in unlawful or harmful conduct and/or represent that the outputs of such tools, models, or software are associated with Meta or Llama 3.2 With respect to any multimodal models included in Llama 3.2, the rights granted under Section 1(a) of the Llama 3.2 Community License Agreement are not being granted to you if you are an individual domiciled in, or a company with a principal place of business in, the European Union. This restriction does not apply to end users of a product or service that incorporates any such multimodal models. 
Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation of this Policy through one of the following means:

* Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://github.com/meta-llama/llama-models/issues)
* Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
* Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
* Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama 3.2: LlamaUseReport@meta.com
extra_gated_fields:
  First Name: text
  Last Name: text
  Date of birth: date_picker
  Country: country
  Affiliation: text
  Job title:
    type: select
    options:
      - Student
      - Research Graduate
      - AI researcher
      - AI developer/engineer
      - Reporter
      - Other
  geo: ip_location
  By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox
extra_gated_description: >-
  The information you provide will be collected, stored, processed and shared in
  accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
---

## Model Information

The Llama 3.2 collection of multilingual large language models (LLMs) comprises pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text-only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks.

**Model Developer:** Meta

**Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.

| | Training Data | Params | Input modalities | Output modalities | Context Length | GQA | Shared Embeddings | Token count | Knowledge cutoff |
| :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- |
| Llama 3.2 (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 128k | Yes | Yes | Up to 9T tokens | December 2023 |
| | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | |
| Llama 3.2 Quantized (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 8k | Yes | Yes | Up to 9T tokens | December 2023 |
| | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | |

**Supported Languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy.
Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly.

**Llama 3.2 Model Family:** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.

**Model Release Date:** Sept 25, 2024

**Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety.

**License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement).

**Feedback:** Instructions on how to provide feedback or comments on the model can be found in the Llama Models [README](https://github.com/meta-llama/llama-models/blob/main/README.md). For more technical information about generation parameters and recipes for how to use Llama 3.2 in applications, please go [here](https://github.com/meta-llama/llama-recipes).

## Intended Use

**Intended Use Cases:** Llama 3.2 is intended for commercial and research use in multiple languages. Instruction-tuned text-only models are intended for assistant-like chat and agentic applications like knowledge retrieval and summarization, mobile AI-powered writing assistants, and query and prompt rewriting. Pretrained models can be adapted for a variety of additional natural language generation tasks. Similarly, quantized models can be adapted for a variety of on-device use cases with limited compute resources.

**Out of Scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.2 Community License. Use in languages beyond those explicitly referenced as supported in this model card.

## How to use

This repository contains two versions of Llama-3.2-3B, for use with `transformers` and with the original `llama` codebase.

### Use with transformers

Starting with transformers >= 4.43.0, you can run conversational inference using the Transformers pipeline abstraction or by leveraging the Auto classes with the generate() function. Make sure to update your transformers installation via `pip install --upgrade transformers`.

```python
import torch
from transformers import pipeline

model_id = "meta-llama/Llama-3.2-3B"

pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto"
)

pipe("The key to life is")
```

### Use with `llama`

Please, follow the instructions in the [repository](https://github.com/meta-llama/llama).

To download the original checkpoints, see the example command below leveraging `huggingface-cli`:

```
huggingface-cli download meta-llama/Llama-3.2-3B --include "original/*" --local-dir Llama-3.2-3B
```

## Hardware and Software

**Training Factors:** We used custom training libraries, Meta's custom built GPU cluster, and production infrastructure for pretraining. Fine-tuning, quantization, annotation, and evaluation were also performed on production infrastructure.

**Training Energy Use:** Training utilized a cumulative of **916k** GPU hours of computation on H100-80GB (TDP of 700W) type hardware, per the table below. Training time is the total GPU time required for training each model and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency.
**Training Greenhouse Gas Emissions:** Estimated total location-based greenhouse gas emissions were **240** tons CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with renewable energy; therefore, the total market-based greenhouse gas emissions for training were 0 tons CO2eq.

| | Training Time (GPU hours) | Logit Generation Time (GPU Hours) | Training Power Consumption (W) | Training Location-Based Greenhouse Gas Emissions (tons CO2eq) | Training Market-Based Greenhouse Gas Emissions (tons CO2eq) |
| :---- | :---: | ----- | :---: | :---: | :---: |
| Llama 3.2 1B | 370k | \- | 700 | 107 | 0 |
| Llama 3.2 3B | 460k | \- | 700 | 133 | 0 |
| Llama 3.2 1B SpinQuant | 1.7 | 0 | 700 | *Negligible*\*\* | 0 |
| Llama 3.2 3B SpinQuant | 2.4 | 0 | 700 | *Negligible*\*\* | 0 |
| Llama 3.2 1B QLora | 1.3k | 0 | 700 | 0.381 | 0 |
| Llama 3.2 3B QLora | 1.6k | 0 | 700 | 0.461 | 0 |
| Total | 833k | 86k | | 240 | 0 |

\*\* The location-based CO2e emissions of Llama 3.2 1B SpinQuant and Llama 3.2 3B SpinQuant are less than 0.001 metric tonnes each. This is due to the minimal training GPU hours that are required.

The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others.

## Training Data

**Overview:** Llama 3.2 was pretrained on up to 9 trillion tokens of data from publicly available sources. For the 1B and 3B Llama 3.2 models, we incorporated logits from the Llama 3.1 8B and 70B models into the pretraining stage of the model development, where outputs (logits) from these larger models were used as token-level targets. Knowledge distillation was used after pruning to recover performance. In post-training we used a similar recipe as Llama 3.1 and produced final chat models by doing several rounds of alignment on top of the pre-trained model. Each round involved Supervised Fine-Tuning (SFT), Rejection Sampling (RS), and Direct Preference Optimization (DPO).

**Data Freshness:** The pretraining data has a cutoff of December 2023.

## Quantization

### Quantization Scheme

We designed the current quantization scheme with [PyTorch's ExecuTorch](https://github.com/pytorch/executorch) inference framework and the Arm CPU backend in mind, taking into account metrics including model quality, prefill/decoding speed, and memory footprint. Our quantization scheme involves three parts:

- All linear layers in all transformer blocks are quantized to a 4-bit groupwise scheme (with a group size of 32) for weights and 8-bit per-token dynamic quantization for activations.
- The classification layer is quantized to 8-bit per-channel for weights and 8-bit per-token dynamic quantization for activations.
- Similar to the classification layer, 8-bit per-channel quantization is used for the embedding layer.

### Quantization-Aware Training and LoRA

The quantization-aware training (QAT) with low-rank adaptation (LoRA) models went through only post-training stages, using the same data as the full-precision models. To initialize QAT, we utilize BF16 Llama 3.2 model checkpoints obtained after supervised fine-tuning (SFT) and perform an additional full round of SFT training with QAT. We then freeze the backbone of the QAT model and perform another round of SFT with LoRA adaptors applied to all layers within the transformer block.
Meanwhile, the LoRA adaptors' weights and activations are maintained in BF16. Because our approach is similar to the QLoRA of Dettmers et al. (2023) (i.e., quantization followed by LoRA adapters), we refer to this method as QLoRA. Finally, we fine-tune the resulting model (both backbone and LoRA adaptors) using direct preference optimization (DPO).

### SpinQuant

[SpinQuant](https://arxiv.org/abs/2405.16406) was applied, together with generative post-training quantization (GPTQ). For the SpinQuant rotation matrix fine-tuning, we optimized for 100 iterations, using 800 samples with sequence length 2048 from the WikiText 2 dataset. For GPTQ, we used 128 samples from the same dataset with the same sequence length.

## Benchmarks \- English Text

In this section, we report the results for Llama 3.2 models on standard automatic benchmarks. For all these evaluations, we used our internal evaluations library.

### Base Pretrained Models

| Category | Benchmark | \# Shots | Metric | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B |
| ----- | ----- | :---: | :---: | :---: | :---: | :---: |
| General | MMLU | 5 | macro\_avg/acc\_char | 32.2 | 58 | 66.7 |
| | AGIEval English | 3-5 | average/acc\_char | 23.3 | 39.2 | 47.8 |
| | ARC-Challenge | 25 | acc\_char | 32.8 | 69.1 | 79.7 |
| Reading comprehension | SQuAD | 1 | em | 49.2 | 67.7 | 77 |
| | QuAC (F1) | 1 | f1 | 37.9 | 42.9 | 44.9 |
| | DROP (F1) | 3 | f1 | 28.0 | 45.2 | 59.5 |
| Long Context | Needle in Haystack | 0 | em | 96.8 | 1 | 1 |

### Instruction Tuned Models

| Capability | | Benchmark | \# Shots | Metric | Llama 3.2 1B bf16 | Llama 3.2 1B Vanilla PTQ\*\* | Llama 3.2 1B Spin Quant | Llama 3.2 1B QLoRA | Llama 3.2 3B bf16 | Llama 3.2 3B Vanilla PTQ\*\* | Llama 3.2 3B Spin Quant | Llama 3.2 3B QLoRA | Llama 3.1 8B |
| :---: | ----- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| General | | MMLU | 5 | macro\_avg/acc | 49.3 | 43.3 | 47.3 | 49.0 | 63.4 | 60.5 | 62 | 62.4 | 69.4 |
| Re-writing | | Open-rewrite eval | 0 | micro\_avg/rougeL | 41.6 | 39.2 | 40.9 | 41.2 | 40.1 | 40.3 | 40.8 | 40.7 | 40.9 |
| Summarization | | TLDR9+ (test) | 1 | rougeL | 16.8 | 14.9 | 16.7 | 16.8 | 19.0 | 19.1 | 19.2 | 19.1 | 17.2 |
| Instruction following | | IFEval | 0 | Avg(Prompt/Instruction acc Loose/Strict) | 59.5 | 51.5 | 58.4 | 55.6 | 77.4 | 73.9 | 73.5 | 75.9 | 80.4 |
| Math | | GSM8K (CoT) | 8 | em\_maj1@1 | 44.4 | 33.1 | 40.6 | 46.5 | 77.7 | 72.9 | 75.7 | 77.9 | 84.5 |
| | | MATH (CoT) | 0 | final\_em | 30.6 | 20.5 | 25.3 | 31.0 | 48.0 | 44.2 | 45.3 | 49.2 | 51.9 |
| Reasoning | | ARC-C | 0 | acc | 59.4 | 54.3 | 57 | 60.7 | 78.6 | 75.6 | 77.6 | 77.6 | 83.4 |
| | | GPQA | 0 | acc | 27.2 | 25.9 | 26.3 | 25.9 | 32.8 | 32.8 | 31.7 | 33.9 | 32.8 |
| | | Hellaswag | 0 | acc | 41.2 | 38.1 | 41.3 | 41.5 | 69.8 | 66.3 | 68 | 66.3 | 78.7 |
| Tool Use | | BFCL V2 | 0 | acc | 25.7 | 14.3 | 15.9 | 23.7 | 67.0 | 53.4 | 60.1 | 63.5 | 67.1 |
| | | Nexus | 0 | macro\_avg/acc | 13.5 | 5.2 | 9.6 | 12.5 | 34.3 | 32.4 | 31.5 | 30.1 | 38.5 |
| Long Context | | InfiniteBench/En.QA | 0 | longbook\_qa/f1 | 20.3 | N/A | N/A | N/A | 19.8 | N/A | N/A | N/A | 27.3 |
| | | InfiniteBench/En.MC | 0 | longbook\_choice/acc | 38.0 | N/A | N/A | N/A | 63.3 | N/A | N/A | N/A | 72.2 |
| | | NIH/Multi-needle | 0 | recall | 75.0 | N/A | N/A | N/A | 84.7 | N/A | N/A | N/A | 98.8 |
| Multilingual | | MGSM (CoT) | 0 | em | 24.5 | 13.7 | 18.2 | 24.4 | 58.2 | 48.9 | 54.3 | 56.8 | 68.9 |

\*\*for comparison purposes only. Model not released.
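To make the groupwise weight scheme described in the Quantization section concrete, here is a minimal PyTorch sketch of 4-bit groupwise quantization with a group size of 32. It assumes symmetric per-group scales, which is an illustrative simplification rather than the exact ExecuTorch kernel:

```python
import torch

def quantize_4bit_groupwise(weight: torch.Tensor, group_size: int = 32):
    """Symmetric 4-bit groupwise quantization: one scale per group of 32 weights."""
    out_features, in_features = weight.shape
    assert in_features % group_size == 0
    groups = weight.reshape(out_features, in_features // group_size, group_size)
    # Pick each group's scale so its max-magnitude weight maps to the int4 limit (7)
    scales = groups.abs().amax(dim=-1, keepdim=True).clamp(min=1e-8) / 7.0
    q = torch.clamp(torch.round(groups / scales), -8, 7).to(torch.int8)
    return q, scales

def dequantize_groupwise(q: torch.Tensor, scales: torch.Tensor) -> torch.Tensor:
    deq = q.to(torch.float32) * scales
    return deq.reshape(deq.shape[0], -1)

weight = torch.randn(64, 128)
q, scales = quantize_4bit_groupwise(weight)
reconstructed = dequantize_groupwise(q, scales)
print("max abs quantization error:", (weight - reconstructed).abs().max().item())
```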
### Multilingual Benchmarks

| Category | Benchmark | Language | Llama 3.2 1B | Llama 3.2 1B Vanilla PTQ\*\* | Llama 3.2 1B Spin Quant | Llama 3.2 1B QLoRA | Llama 3.2 3B | Llama 3.2 3B Vanilla PTQ\*\* | Llama 3.2 3B Spin Quant | Llama 3.2 3B QLoRA | Llama 3.1 8B |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| General | MMLU (5-shot, macro_avg/acc) | Portuguese | 39.8 | 34.9 | 38.9 | 40.2 | 54.5 | 50.9 | 53.3 | 53.4 | 62.1 |
| | | Spanish | 41.5 | 36.0 | 39.8 | 41.8 | 55.1 | 51.9 | 53.6 | 53.6 | 62.5 |
| | | Italian | 39.8 | 34.9 | 38.1 | 40.6 | 53.8 | 49.9 | 52.1 | 51.7 | 61.6 |
| | | German | 39.2 | 34.9 | 37.5 | 39.6 | 53.3 | 50.0 | 52.2 | 51.3 | 60.6 |
| | | French | 40.5 | 34.8 | 39.2 | 40.8 | 54.6 | 51.2 | 53.3 | 53.3 | 62.3 |
| | | Hindi | 33.5 | 30.0 | 32.1 | 34.0 | 43.3 | 40.4 | 42.0 | 42.1 | 50.9 |
| | | Thai | 34.7 | 31.2 | 32.4 | 34.9 | 44.5 | 41.3 | 44.0 | 42.2 | 50.3 |

\*\*for comparison purposes only. Model not released.

## Inference time

In the table below, we compare the performance metrics of different quantization methods (SpinQuant and QAT \+ LoRA) with the BF16 baseline. The evaluation was done using the [ExecuTorch](https://github.com/pytorch/executorch) framework as the inference engine, with the Arm CPU as the backend, on an Android OnePlus 12 device.

| Category | Decode (tokens/sec) | Time-to-first-token (sec) | Prefill (tokens/sec) | Model size (PTE file size in MB) | Memory size (RSS in MB) |
| :---- | ----- | ----- | ----- | ----- | ----- |
| 1B BF16 (baseline) | 19.2 | 1.0 | 60.3 | 2358 | 3,185 |
| 1B SpinQuant | 50.2 (2.6x) | 0.3 (-76.9%) | 260.5 (4.3x) | 1083 (-54.1%) | 1,921 (-39.7%) |
| 1B QLoRA | 45.8 (2.4x) | 0.3 (-76.0%) | 252.0 (4.2x) | 1127 (-52.2%) | 2,255 (-29.2%) |
| 3B BF16 (baseline) | 7.6 | 3.0 | 21.2 | 6129 | 7,419 |
| 3B SpinQuant | 19.7 (2.6x) | 0.7 (-76.4%) | 89.7 (4.2x) | 2435 (-60.3%) | 3,726 (-49.8%) |
| 3B QLoRA | 18.5 (2.4x) | 0.7 (-76.1%) | 88.8 (4.2x) | 2529 (-58.7%) | 4,060 (-45.3%) |

(\*) The performance measurement is done using an adb binary-based approach.
(\*\*) It is measured on an Android OnePlus 12 device.
(\*\*\*) Time-to-first-token (TTFT) is measured with prompt length=64

*Footnote:*

- *Decode (tokens/second) is how quickly the model keeps generating. Higher is better.*
- *Time-to-first-token (TTFT for shorthand) is how fast the model generates the first token for a given prompt. Lower is better.*
- *Prefill (tokens/second) is the prompt length divided by TTFT, i.e., how quickly the prompt is processed. Higher is better.*
- *Model size \- how big the model is, measured by the PTE file, a binary file format for ExecuTorch*
- *RSS size \- memory usage in resident set size (RSS)*

## Responsibility & Safety

As part of our responsible release approach, we followed a three-pronged strategy for managing trust & safety risks:

1. Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama
2. Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm
3. Provide protections for the community to help prevent the misuse of our models

### Responsible Deployment

**Approach:** Llama is a foundational technology designed to be used in a variety of use cases. Examples of how Meta’s Llama models have been responsibly deployed can be found in our [Community Stories webpage](https://llama.meta.com/community-stories/).
Our approach is to build the most helpful models, enabling the world to benefit from the power of this technology, by aligning our model safety for generic use cases and addressing a standard set of harms. Developers are then in the driver’s seat to tailor safety for their use cases, defining their own policies and deploying the models with the necessary safeguards in their Llama systems. Llama 3.2 was developed following the best practices outlined in our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/).

#### Llama 3.2 Instruct

**Objective:** Our main objectives for conducting safety fine-tuning are to provide the research community with a valuable resource for studying the robustness of safety fine-tuning, as well as to offer developers a readily available, safe, and powerful model for various applications, reducing the workload required to deploy safe AI systems. We implemented the same set of safety mitigations as in Llama 3, and you can learn more about these in the Llama 3 [paper](https://ai.meta.com/research/publications/the-llama-3-herd-of-models/).

**Fine-Tuning Data:** We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We’ve developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control.

**Refusals and Tone:** Building on the work we started with Llama 3, we put a great emphasis on model refusals to benign prompts as well as refusal tone. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines.

#### Llama 3.2 Systems

**Safety as a System:** Large language models, including Llama 3.2, **are not designed to be deployed in isolation** but instead should be deployed as part of an overall AI system with additional safety guardrails as required. Developers are expected to deploy system safeguards when building agentic systems. Safeguards are key to achieving the right helpfulness-safety alignment, as well as to mitigating safety and security risks inherent to the system and to any integration of the model or system with external tools. As part of our responsible release approach, we provide the community with [safeguards](https://llama.meta.com/trust-and-safety/) that developers should deploy with Llama models or other LLMs, including Llama Guard, Prompt Guard and Code Shield. All our [reference implementation](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default so developers can benefit from system-level safety out-of-the-box.

### New Capabilities and Use Cases

**Technological Advancement:** Llama releases usually introduce new capabilities that require specific considerations in addition to the best practices that generally apply across all generative AI use cases. For prior release capabilities also supported by Llama 3.2, see the [Llama 3.1 Model Card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md), as the same considerations apply here as well.

**Constrained Environments:** Llama 3.2 1B and 3B models are expected to be deployed in highly constrained environments, such as mobile devices. LLM systems built on smaller models will have a different alignment profile and safety/helpfulness tradeoff than more complex, larger systems.
Developers should ensure the safety of their system meets the requirements of their use case. We recommend using lighter system safeguards for such use cases, like Llama Guard 3-1B or its mobile-optimized version.

### Evaluations

**Scaled Evaluations:** We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Purple Llama safeguards to filter input prompts and output responses. It is important to evaluate applications in context, and we recommend building dedicated evaluation datasets for your use case.

**Red Teaming:** We conducted recurring red teaming exercises with the goal of discovering risks via adversarial prompting, and we used the learnings to improve our benchmarks and safety tuning datasets. We partnered early with subject-matter experts in critical risk areas to understand the nature of these real-world harms and how such models may lead to unintended harm for society. Based on these conversations, we derived a set of adversarial goals for the red team to attempt to achieve, such as extracting harmful information or reprogramming the model to act in a potentially harmful capacity. The red team consisted of experts in cybersecurity, adversarial machine learning, responsible AI, and integrity, in addition to multilingual content specialists with backgrounds in integrity issues in specific geographic markets.

### Critical Risks

In addition to our safety work above, we took extra care in measuring and/or mitigating the following critical risk areas:

**1\. CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive Weapons):** Llama 3.2 1B and 3B models are smaller and less capable derivatives of Llama 3.1. For Llama 3.1 70B and 405B, to assess risks related to the proliferation of chemical and biological weapons, we performed uplift testing designed to assess whether use of the Llama 3.1 models could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons, and we have determined that such testing also applies to the smaller 1B and 3B models.

**2\. Child Safety:** Child safety risk assessments were conducted using a team of experts to assess the model’s capability to produce outputs that could result in child safety risks and to inform any necessary and appropriate risk mitigations via fine-tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective-based methodologies to assess the model risks along multiple attack vectors, including the additional languages Llama 3 is trained on. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market-specific nuances and experiences.

**3\. Cyber Attacks:** For Llama 3.1 405B, our cyber attack uplift study investigated whether LLMs can enhance human capabilities in hacking tasks, both in terms of skill level and speed. Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks. This evaluation was distinct from previous studies that considered LLMs as interactive assistants. The primary objective was to assess whether these models could effectively function as independent agents in executing complex cyber-attacks without human intervention.
Because Llama 3.2’s 1B and 3B models are smaller and less capable models than Llama 3.1 405B, we broadly believe that the testing conducted for the 405B model also applies to the Llama 3.2 models.

### Community

**Industry Partnerships:** Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open-sourced for the community to use and are widely distributed across ecosystem partners, including cloud service providers. We encourage community contributions to our [GitHub repository](https://github.com/meta-llama/PurpleLlama).

**Grants:** We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta’s Llama model for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists).

**Reporting:** Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.

## Ethical Considerations and Limitations

**Values:** The core values of Llama 3.2 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3.2 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.

**Testing:** Llama 3.2 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3.2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or otherwise objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.2 models, developers should perform safety testing and tuning tailored to their specific applications of the model. Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development.
sentence-transformers/paraphrase-albert-small-v2
sentence-transformers
"2024-11-05T18:20:00Z"
323,111
7
sentence-transformers
[ "sentence-transformers", "pytorch", "tf", "rust", "onnx", "safetensors", "openvino", "albert", "feature-extraction", "sentence-similarity", "transformers", "dataset:flax-sentence-embeddings/stackexchange_xml", "dataset:s2orc", "dataset:ms_marco", "dataset:wiki_atomic_edits", "dataset:snli", "dataset:multi_nli", "dataset:embedding-data/altlex", "dataset:embedding-data/simple-wiki", "dataset:embedding-data/flickr30k-captions", "dataset:embedding-data/coco_captions", "dataset:embedding-data/sentence-compression", "dataset:embedding-data/QQP", "dataset:yahoo_answers_topics", "arxiv:1908.10084", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
"2022-03-02T23:29:05Z"
---
license: apache-2.0
library_name: sentence-transformers
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
datasets:
- flax-sentence-embeddings/stackexchange_xml
- s2orc
- ms_marco
- wiki_atomic_edits
- snli
- multi_nli
- embedding-data/altlex
- embedding-data/simple-wiki
- embedding-data/flickr30k-captions
- embedding-data/coco_captions
- embedding-data/sentence-compression
- embedding-data/QQP
- yahoo_answers_topics
pipeline_tag: sentence-similarity
---

# sentence-transformers/paraphrase-albert-small-v2

This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('sentence-transformers/paraphrase-albert-small-v2')
embeddings = model.encode(sentences)
print(embeddings)
```

## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch

# Mean Pooling - take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/paraphrase-albert-small-v2')
model = AutoModel.from_pretrained('sentence-transformers/paraphrase-albert-small-v2')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```

## Evaluation Results

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/paraphrase-albert-small-v2)

## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 100, 'do_lower_case': False}) with Transformer model: AlbertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```

## Citing & Authors

This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084): ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "http://arxiv.org/abs/1908.10084", } ```
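As a small follow-up to the usage examples above, here is a minimal sketch of comparing the resulting embeddings with cosine similarity using the `util.cos_sim` helper from sentence-transformers (the example sentences are illustrative):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('sentence-transformers/paraphrase-albert-small-v2')

# Encode a pair of paraphrases and score them with cosine similarity
embeddings = model.encode(["A man is eating food.", "Someone is having a meal."])
score = util.cos_sim(embeddings[0], embeddings[1])
print(f"Cosine similarity: {score.item():.4f}")
```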
Gustavosta/MagicPrompt-Stable-Diffusion
Gustavosta
"2023-07-09T22:10:48Z"
321,480
704
transformers
[ "transformers", "pytorch", "coreml", "safetensors", "gpt2", "text-generation", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2022-09-17T22:34:07Z"
---
license: mit
---

# MagicPrompt - Stable Diffusion

This is a model from the MagicPrompt series of models, which are [GPT-2](https://huggingface.co/gpt2) models intended to generate prompt texts for imaging AIs, in this case: [Stable Diffusion](https://huggingface.co/CompVis/stable-diffusion).

## 🖼️ Here's an example:

<img src="https://files.catbox.moe/ac3jq7.png">

This model was trained for 150,000 steps on a set of about 80,000 prompts filtered and extracted from the image finder for Stable Diffusion: "[Lexica.art](https://lexica.art/)". Extracting the data was a little difficult, since the search engine does not yet have a public API and is protected by Cloudflare, but if you want to take a look at the original dataset, you can have a look here: [datasets/Gustavosta/Stable-Diffusion-Prompts](https://huggingface.co/datasets/Gustavosta/Stable-Diffusion-Prompts).

If you want to test the model with a demo, you can go to: "[spaces/Gustavosta/MagicPrompt-Stable-Diffusion](https://huggingface.co/spaces/Gustavosta/MagicPrompt-Stable-Diffusion)".

## 💻 You can see other MagicPrompt models:

- For Dall-E 2: [Gustavosta/MagicPrompt-Dalle](https://huggingface.co/Gustavosta/MagicPrompt-Dalle)
- For Midjourney: [Gustavosta/MagicPrompt-Midjourney](https://huggingface.co/Gustavosta/MagicPrompt-Midjourney) **[⚠️ In progress]**
- MagicPrompt full: [Gustavosta/MagicPrompt](https://huggingface.co/Gustavosta/MagicPrompt) **[⚠️ In progress]**

## ⚖️ Licence: [MIT](https://huggingface.co/models?license=license:mit)

When using this model, please credit: [Gustavosta](https://huggingface.co/Gustavosta)

**Thanks for reading this far! :)**
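## 🧪 Example usage:

Since this is a standard GPT-2 checkpoint, it can be loaded with the 🤗 transformers `text-generation` pipeline. A minimal sketch (the seed text and generation parameters below are illustrative):

```python
from transformers import pipeline

# Load the checkpoint as a standard GPT-2 text-generation pipeline
generator = pipeline("text-generation", model="Gustavosta/MagicPrompt-Stable-Diffusion")

# Start from a short idea and let the model expand it into a full prompt
outputs = generator(
    "a portrait of a wizard",
    max_length=77,           # roughly Stable Diffusion's prompt budget; illustrative
    num_return_sequences=3,  # generate a few candidates to pick from
    do_sample=True,          # sampling is required for multiple return sequences
)

for out in outputs:
    print(out["generated_text"])
```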
Systran/faster-whisper-small
Systran
"2023-11-23T10:57:09Z"
320,965
11
ctranslate2
[ "ctranslate2", "audio", "automatic-speech-recognition", "en", "zh", "de", "es", "ru", "ko", "fr", "ja", "pt", "tr", "pl", "ca", "nl", "ar", "sv", "it", "id", "hi", "fi", "vi", "he", "uk", "el", "ms", "cs", "ro", "da", "hu", "ta", "no", "th", "ur", "hr", "bg", "lt", "la", "mi", "ml", "cy", "sk", "te", "fa", "lv", "bn", "sr", "az", "sl", "kn", "et", "mk", "br", "eu", "is", "hy", "ne", "mn", "bs", "kk", "sq", "sw", "gl", "mr", "pa", "si", "km", "sn", "yo", "so", "af", "oc", "ka", "be", "tg", "sd", "gu", "am", "yi", "lo", "uz", "fo", "ht", "ps", "tk", "nn", "mt", "sa", "lb", "my", "bo", "tl", "mg", "as", "tt", "haw", "ln", "ha", "ba", "jw", "su", "license:mit", "region:us" ]
automatic-speech-recognition
"2023-11-23T09:53:51Z"
--- language: - en - zh - de - es - ru - ko - fr - ja - pt - tr - pl - ca - nl - ar - sv - it - id - hi - fi - vi - he - uk - el - ms - cs - ro - da - hu - ta - 'no' - th - ur - hr - bg - lt - la - mi - ml - cy - sk - te - fa - lv - bn - sr - az - sl - kn - et - mk - br - eu - is - hy - ne - mn - bs - kk - sq - sw - gl - mr - pa - si - km - sn - yo - so - af - oc - ka - be - tg - sd - gu - am - yi - lo - uz - fo - ht - ps - tk - nn - mt - sa - lb - my - bo - tl - mg - as - tt - haw - ln - ha - ba - jw - su tags: - audio - automatic-speech-recognition license: mit library_name: ctranslate2 --- # Whisper small model for CTranslate2 This repository contains the conversion of [openai/whisper-small](https://huggingface.co/openai/whisper-small) to the [CTranslate2](https://github.com/OpenNMT/CTranslate2) model format. This model can be used in CTranslate2 or projects based on CTranslate2 such as [faster-whisper](https://github.com/systran/faster-whisper). ## Example ```python from faster_whisper import WhisperModel model = WhisperModel("small") segments, info = model.transcribe("audio.mp3") for segment in segments: print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text)) ``` ## Conversion details The original model was converted with the following command: ``` ct2-transformers-converter --model openai/whisper-small --output_dir faster-whisper-small \ --copy_files tokenizer.json --quantization float16 ``` Note that the model weights are saved in FP16. This type can be changed when the model is loaded using the [`compute_type` option in CTranslate2](https://opennmt.net/CTranslate2/quantization.html). ## More information **For more information about the original model, see its [model card](https://huggingface.co/openai/whisper-small).**
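For example, the `compute_type` mentioned above can be overridden when loading the model. A minimal sketch (INT8 on CPU is one common choice, and `audio.mp3` is a placeholder file name):

```python
from faster_whisper import WhisperModel

# Load the FP16 weights but run the computation in INT8 on CPU
model = WhisperModel("small", device="cpu", compute_type="int8")

segments, info = model.transcribe("audio.mp3", beam_size=5)
print("Detected language:", info.language)
for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```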
mlc-ai/Qwen2.5-0.5B-Instruct-q4f16_1-MLC
mlc-ai
"2024-09-18T21:10:07Z"
320,490
0
mlc-llm
[ "mlc-llm", "web-llm", "base_model:Qwen/Qwen2.5-0.5B-Instruct", "base_model:quantized:Qwen/Qwen2.5-0.5B-Instruct", "region:us" ]
null
"2024-09-18T19:12:17Z"
---
library_name: mlc-llm
base_model: Qwen/Qwen2.5-0.5B-Instruct
tags:
- mlc-llm
- web-llm
---

# Qwen2.5-0.5B-Instruct-q4f16_1-MLC

This is the [Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) model in MLC format `q4f16_1`. The model can be used with the [MLC-LLM](https://github.com/mlc-ai/mlc-llm) and [WebLLM](https://github.com/mlc-ai/web-llm) projects.

## Example Usage

Here are some examples of using this model in MLC LLM. Before running the examples, please install MLC LLM by following the [installation documentation](https://llm.mlc.ai/docs/install/mlc_llm.html#install-mlc-packages).

### Chat

In the command line, run

```bash
mlc_llm chat HF://mlc-ai/Qwen2.5-0.5B-Instruct-q4f16_1-MLC
```

### REST Server

In the command line, run

```bash
mlc_llm serve HF://mlc-ai/Qwen2.5-0.5B-Instruct-q4f16_1-MLC
```

### Python API

```python
from mlc_llm import MLCEngine

# Create engine
model = "HF://mlc-ai/Qwen2.5-0.5B-Instruct-q4f16_1-MLC"
engine = MLCEngine(model)

# Run chat completion through the OpenAI-compatible API
for response in engine.chat.completions.create(
    messages=[{"role": "user", "content": "What is the meaning of life?"}],
    model=model,
    stream=True,
):
    for choice in response.choices:
        print(choice.delta.content, end="", flush=True)
print("\n")

engine.terminate()
```

## Documentation

For more information on the MLC LLM project, please visit our [documentation](https://llm.mlc.ai/docs/) and [GitHub repo](http://github.com/mlc-ai/mlc-llm).
CompVis/ldm-super-resolution-4x-openimages
CompVis
"2023-07-05T16:18:48Z"
319,495
106
diffusers
[ "diffusers", "pytorch", "super-resolution", "diffusion-super-resolution", "arxiv:2112.10752", "license:apache-2.0", "diffusers:LDMSuperResolutionPipeline", "region:us" ]
null
"2022-11-09T12:35:04Z"
---
license: apache-2.0
tags:
- pytorch
- diffusers
- super-resolution
- diffusion-super-resolution
---

# Latent Diffusion Models (LDM) for super-resolution

**Paper**: [High-Resolution Image Synthesis with Latent Diffusion Models](https://arxiv.org/abs/2112.10752)

**Abstract**:

*By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Additionally, their formulation allows for a guiding mechanism to control the image generation process without retraining. However, since these models typically operate directly in pixel space, optimization of powerful DMs often consumes hundreds of GPU days and inference is expensive due to sequential evaluations. To enable DM training on limited computational resources while retaining their quality and flexibility, we apply them in the latent space of powerful pretrained autoencoders. In contrast to previous work, training diffusion models on such a representation allows for the first time to reach a near-optimal point between complexity reduction and detail preservation, greatly boosting visual fidelity. By introducing cross-attention layers into the model architecture, we turn diffusion models into powerful and flexible generators for general conditioning inputs such as text or bounding boxes and high-resolution synthesis becomes possible in a convolutional manner. Our latent diffusion models (LDMs) achieve a new state of the art for image inpainting and highly competitive performance on various tasks, including unconditional image generation, semantic scene synthesis, and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs.*

**Authors**: *Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer*

## Usage

### Inference with a pipeline

```python
# Install diffusers first: pip install git+https://github.com/huggingface/diffusers.git
import requests
from PIL import Image
from io import BytesIO
from diffusers import LDMSuperResolutionPipeline
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model_id = "CompVis/ldm-super-resolution-4x-openimages"

# load model and scheduler
pipeline = LDMSuperResolutionPipeline.from_pretrained(model_id)
pipeline = pipeline.to(device)

# let's download an image
url = "https://user-images.githubusercontent.com/38061659/199705896-b48e17b8-b231-47cd-a270-4ffa5a93fa3e.png"
response = requests.get(url)
low_res_img = Image.open(BytesIO(response.content)).convert("RGB")
low_res_img = low_res_img.resize((128, 128))

# run pipeline in inference (sample random noise and denoise)
upscaled_image = pipeline(low_res_img, num_inference_steps=100, eta=1).images[0]

# save image
upscaled_image.save("ldm_generated_image.png")
```
Babelscape/wikineural-multilingual-ner
Babelscape
"2023-05-23T08:47:23Z"
318,325
130
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "bert", "token-classification", "named-entity-recognition", "sequence-tagger-model", "de", "en", "es", "fr", "it", "nl", "pl", "pt", "ru", "multilingual", "dataset:Babelscape/wikineural", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2022-03-02T23:29:04Z"
--- annotations_creators: - machine-generated language_creators: - machine-generated widget: - text: My name is Wolfgang and I live in Berlin. - text: George Washington went to Washington. - text: Mi nombre es Sarah y vivo en Londres. - text: Меня зовут Симона, и я живу в Риме. tags: - named-entity-recognition - sequence-tagger-model datasets: - Babelscape/wikineural language: - de - en - es - fr - it - nl - pl - pt - ru - multilingual license: - cc-by-nc-sa-4.0 pretty_name: wikineural-dataset source_datasets: - original task_categories: - structure-prediction task_ids: - named-entity-recognition --- # WikiNEuRal: Combined Neural and Knowledge-based Silver Data Creation for Multilingual NER This is the model card for the EMNLP 2021 paper [WikiNEuRal: Combined Neural and Knowledge-based Silver Data Creation for Multilingual NER](https://aclanthology.org/2021.findings-emnlp.215/). We fine-tuned a multilingual language model (mBERT) for 3 epochs on our [WikiNEuRal dataset](https://huggingface.co/datasets/Babelscape/wikineural) for Named Entity Recognition (NER). The resulting multilingual NER model supports the 9 languages covered by WikiNEuRal (de, en, es, fr, it, nl, pl, pt, ru), and it was trained on all 9 languages jointly. **If you use the model, please reference this work in your paper**: ```bibtex @inproceedings{tedeschi-etal-2021-wikineural-combined, title = "{W}iki{NE}u{R}al: {C}ombined Neural and Knowledge-based Silver Data Creation for Multilingual {NER}", author = "Tedeschi, Simone and Maiorca, Valentino and Campolungo, Niccol{\`o} and Cecconi, Francesco and Navigli, Roberto", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021", month = nov, year = "2021", address = "Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.findings-emnlp.215", pages = "2521--2533", abstract = "Multilingual Named Entity Recognition (NER) is a key intermediate task which is needed in many areas of NLP. In this paper, we address the well-known issue of data scarcity in NER, especially relevant when moving to a multilingual scenario, and go beyond current approaches to the creation of multilingual silver data for the task. We exploit the texts of Wikipedia and introduce a new methodology based on the effective combination of knowledge-based approaches and neural models, together with a novel domain adaptation technique, to produce high-quality training corpora for NER. We evaluate our datasets extensively on standard benchmarks for NER, yielding substantial improvements up to 6 span-based F1-score points over previous state-of-the-art systems for data creation.", } ``` The original repository for the paper can be found at [https://github.com/Babelscape/wikineural](https://github.com/Babelscape/wikineural). ## How to use You can use this model with Transformers *pipeline* for NER. ```python from transformers import AutoTokenizer, AutoModelForTokenClassification from transformers import pipeline tokenizer = AutoTokenizer.from_pretrained("Babelscape/wikineural-multilingual-ner") model = AutoModelForTokenClassification.from_pretrained("Babelscape/wikineural-multilingual-ner") nlp = pipeline("ner", model=model, tokenizer=tokenizer, grouped_entities=True) example = "My name is Wolfgang and I live in Berlin" ner_results = nlp(example) print(ner_results) ``` ## Limitations and bias This model is trained on WikiNEuRal, a state-of-the-art dataset for Multilingual NER automatically derived from Wikipedia. 
Therefore, it might not generalize well to all textual genres (e.g. news). On the other hand, models trained only on news articles (e.g. only on CoNLL03) have been proven to obtain much lower scores on encyclopedic articles. To obtain more robust systems, we encourage you to train a system on the combination of WikiNEuRal with other datasets (e.g. WikiNEuRal + CoNLL). ## Licensing Information Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/). Copyright of the dataset contents and models belongs to the original copyright holders.
trl-internal-testing/T5ForConditionalGeneration-correct-vocab-calibrated
trl-internal-testing
"2024-06-18T08:48:53Z"
318,314
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
"2024-06-18T08:37:42Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
unslothai/aws
unslothai
"2024-07-06T22:41:24Z"
318,132
0
transformers
[ "transformers", "safetensors", "llama", "feature-extraction", "text-generation-inference", "endpoints_compatible", "region:us" ]
feature-extraction
"2024-03-31T16:44:21Z"
--- {} --- We log statistics to see if any envs are breaking
blanchefort/rubert-base-cased-sentiment
blanchefort
"2023-04-06T04:06:36Z"
318,090
13
transformers
[ "transformers", "pytorch", "tf", "jax", "safetensors", "bert", "text-classification", "sentiment", "ru", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2022-03-02T23:29:05Z"
---
language:
- ru
tags:
- sentiment
- text-classification
---

# RuBERT for Sentiment Analysis

Sentiment classification of short Russian texts.

This is a [DeepPavlov/rubert-base-cased-conversational](https://huggingface.co/DeepPavlov/rubert-base-cased-conversational) model trained on an aggregated corpus of 351,797 texts.

## Labels

0: NEUTRAL
1: POSITIVE
2: NEGATIVE

## How to use

```python
import torch
from transformers import AutoModelForSequenceClassification
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained('blanchefort/rubert-base-cased-sentiment')
model = AutoModelForSequenceClassification.from_pretrained('blanchefort/rubert-base-cased-sentiment', return_dict=True)

@torch.no_grad()
def predict(text):
    inputs = tokenizer(text, max_length=512, padding=True, truncation=True, return_tensors='pt')
    outputs = model(**inputs)
    predicted = torch.nn.functional.softmax(outputs.logits, dim=1)
    predicted = torch.argmax(predicted, dim=1).numpy()
    return predicted
```

## Datasets used for model training

**[RuTweetCorp](https://study.mokoron.com/)**

> Rubtsova Yu. Automatic construction and analysis of a corpus of short texts (microblog posts) for the task of developing and training a sentiment classifier // Knowledge Engineering and Semantic Web Technologies. – 2012. – Vol. 1. – pp. 109-116.

**[RuReviews](https://github.com/sismetanin/rureviews)**

> RuReviews: An Automatically Annotated Sentiment Analysis Dataset for Product Reviews in Russian.

**[RuSentiment](http://text-machine.cs.uml.edu/projects/rusentiment/)**

> A. Rogers, A. Romanov, A. Rumshisky, S. Volkova, M. Gronas, A. Gribov. RuSentiment: An Enriched Sentiment Analysis Dataset for Social Media in Russian. Proceedings of COLING 2018.

**[Reviews of medical institutions](https://github.com/blanchefort/datasets/tree/master/medical_comments)**

> The dataset contains user reviews of medical institutions. It was collected in May 2019 from the website prodoctorov.ru.
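As a quick check, here is a minimal usage sketch for the `predict` helper defined in the How to use section above, mapping the returned ids back to the labels listed earlier (the example sentences are illustrative):

```python
labels = ['NEUTRAL', 'POSITIVE', 'NEGATIVE']  # id-to-label mapping from the Labels section

texts = ['Какой замечательный день!', 'Сервис ужасный, больше не приду.']
for text, label_id in zip(texts, predict(texts)):
    print(f'{text} -> {labels[label_id]}')
```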
Snowflake/snowflake-arctic-embed-m
Snowflake
"2024-08-20T18:13:09Z"
313,185
134
sentence-transformers
[ "sentence-transformers", "onnx", "safetensors", "bert", "feature-extraction", "sentence-similarity", "mteb", "arctic", "snowflake-arctic-embed", "transformers.js", "arxiv:2407.18887", "arxiv:2405.05374", "license:apache-2.0", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
"2024-04-11T11:07:56Z"
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb - arctic - snowflake-arctic-embed - transformers.js model-index: - name: snowflake-arctic-embed-m results: - task: type: Classification dataset: type: mteb/amazon_counterfactual name: MTEB AmazonCounterfactualClassification (en) config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 76.80597014925374 - type: ap value: 39.31198155789558 - type: f1 value: 70.48198448222148 - task: type: Classification dataset: type: mteb/amazon_polarity name: MTEB AmazonPolarityClassification config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 82.831525 - type: ap value: 77.4474050181638 - type: f1 value: 82.77204845110204 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (en) config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 38.93000000000001 - type: f1 value: 37.98013371053459 - task: type: Retrieval dataset: type: mteb/arguana name: MTEB ArguAna config: default split: test revision: c22ab2a51041ffd869aaddef7af8d8215647e41a metrics: - type: map_at_1 value: 31.223 - type: map_at_10 value: 47.43 - type: map_at_100 value: 48.208 - type: map_at_1000 value: 48.211 - type: map_at_3 value: 42.579 - type: map_at_5 value: 45.263999999999996 - type: mrr_at_1 value: 31.65 - type: mrr_at_10 value: 47.573 - type: mrr_at_100 value: 48.359 - type: mrr_at_1000 value: 48.362 - type: mrr_at_3 value: 42.734 - type: mrr_at_5 value: 45.415 - type: ndcg_at_1 value: 31.223 - type: ndcg_at_10 value: 56.436 - type: ndcg_at_100 value: 59.657000000000004 - type: ndcg_at_1000 value: 59.731 - type: ndcg_at_3 value: 46.327 - type: ndcg_at_5 value: 51.178000000000004 - type: precision_at_1 value: 31.223 - type: precision_at_10 value: 8.527999999999999 - type: precision_at_100 value: 0.991 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 19.061 - type: precision_at_5 value: 13.797999999999998 - type: recall_at_1 value: 31.223 - type: recall_at_10 value: 85.277 - type: recall_at_100 value: 99.075 - type: recall_at_1000 value: 99.644 - type: recall_at_3 value: 57.18299999999999 - type: recall_at_5 value: 68.99 - task: type: Clustering dataset: type: mteb/arxiv-clustering-p2p name: MTEB ArxivClusteringP2P config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 47.23625429411296 - task: type: Clustering dataset: type: mteb/arxiv-clustering-s2s name: MTEB ArxivClusteringS2S config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 37.433880471403654 - task: type: Reranking dataset: type: mteb/askubuntudupquestions-reranking name: MTEB AskUbuntuDupQuestions config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 60.53175025582013 - type: mrr value: 74.51160796728664 - task: type: STS dataset: type: mteb/biosses-sts name: MTEB BIOSSES config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 88.93746103286769 - type: cos_sim_spearman value: 86.62245567912619 - type: euclidean_pearson value: 87.154173907501 - type: euclidean_spearman value: 86.62245567912619 - type: manhattan_pearson value: 87.17682026633462 - type: manhattan_spearman value: 86.74775973908348 - task: 
type: Classification dataset: type: mteb/banking77 name: MTEB Banking77Classification config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 80.33766233766232 - type: f1 value: 79.64931422442245 - task: type: Clustering dataset: type: jinaai/big-patent-clustering name: MTEB BigPatentClustering config: default split: test revision: 62d5330920bca426ce9d3c76ea914f15fc83e891 metrics: - type: v_measure value: 19.116028913890613 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-p2p name: MTEB BiorxivClusteringP2P config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 36.966921852810174 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-s2s name: MTEB BiorxivClusteringS2S config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 31.98019698537654 - task: type: Retrieval dataset: type: mteb/cqadupstack-android name: MTEB CQADupstackAndroidRetrieval config: default split: test revision: f46a197baaae43b4f621051089b82a364682dfeb metrics: - type: map_at_1 value: 34.079 - type: map_at_10 value: 46.35 - type: map_at_100 value: 47.785 - type: map_at_1000 value: 47.903 - type: map_at_3 value: 42.620999999999995 - type: map_at_5 value: 44.765 - type: mrr_at_1 value: 41.345 - type: mrr_at_10 value: 52.032000000000004 - type: mrr_at_100 value: 52.690000000000005 - type: mrr_at_1000 value: 52.727999999999994 - type: mrr_at_3 value: 49.428 - type: mrr_at_5 value: 51.093999999999994 - type: ndcg_at_1 value: 41.345 - type: ndcg_at_10 value: 53.027 - type: ndcg_at_100 value: 57.962 - type: ndcg_at_1000 value: 59.611999999999995 - type: ndcg_at_3 value: 47.687000000000005 - type: ndcg_at_5 value: 50.367 - type: precision_at_1 value: 41.345 - type: precision_at_10 value: 10.157 - type: precision_at_100 value: 1.567 - type: precision_at_1000 value: 0.199 - type: precision_at_3 value: 23.081 - type: precision_at_5 value: 16.738 - type: recall_at_1 value: 34.079 - type: recall_at_10 value: 65.93900000000001 - type: recall_at_100 value: 86.42699999999999 - type: recall_at_1000 value: 96.61 - type: recall_at_3 value: 50.56699999999999 - type: recall_at_5 value: 57.82000000000001 - task: type: Retrieval dataset: type: mteb/cqadupstack-english name: MTEB CQADupstackEnglishRetrieval config: default split: test revision: ad9991cb51e31e31e430383c75ffb2885547b5f0 metrics: - type: map_at_1 value: 33.289 - type: map_at_10 value: 43.681 - type: map_at_100 value: 45.056000000000004 - type: map_at_1000 value: 45.171 - type: map_at_3 value: 40.702 - type: map_at_5 value: 42.292 - type: mrr_at_1 value: 41.146 - type: mrr_at_10 value: 49.604 - type: mrr_at_100 value: 50.28399999999999 - type: mrr_at_1000 value: 50.322 - type: mrr_at_3 value: 47.611 - type: mrr_at_5 value: 48.717 - type: ndcg_at_1 value: 41.146 - type: ndcg_at_10 value: 49.43 - type: ndcg_at_100 value: 54.01899999999999 - type: ndcg_at_1000 value: 55.803000000000004 - type: ndcg_at_3 value: 45.503 - type: ndcg_at_5 value: 47.198 - type: precision_at_1 value: 41.146 - type: precision_at_10 value: 9.268 - type: precision_at_100 value: 1.4749999999999999 - type: precision_at_1000 value: 0.19 - type: precision_at_3 value: 21.932 - type: precision_at_5 value: 15.389 - type: recall_at_1 value: 33.289 - type: recall_at_10 value: 59.209999999999994 - type: recall_at_100 value: 78.676 - type: recall_at_1000 value: 89.84100000000001 - type: recall_at_3 value: 47.351 - type: 
recall_at_5 value: 52.178999999999995 - task: type: Retrieval dataset: type: mteb/cqadupstack-gaming name: MTEB CQADupstackGamingRetrieval config: default split: test revision: 4885aa143210c98657558c04aaf3dc47cfb54340 metrics: - type: map_at_1 value: 44.483 - type: map_at_10 value: 56.862 - type: map_at_100 value: 57.901 - type: map_at_1000 value: 57.948 - type: map_at_3 value: 53.737 - type: map_at_5 value: 55.64 - type: mrr_at_1 value: 50.658 - type: mrr_at_10 value: 60.281 - type: mrr_at_100 value: 60.946 - type: mrr_at_1000 value: 60.967000000000006 - type: mrr_at_3 value: 58.192 - type: mrr_at_5 value: 59.531 - type: ndcg_at_1 value: 50.658 - type: ndcg_at_10 value: 62.339 - type: ndcg_at_100 value: 66.28399999999999 - type: ndcg_at_1000 value: 67.166 - type: ndcg_at_3 value: 57.458 - type: ndcg_at_5 value: 60.112 - type: precision_at_1 value: 50.658 - type: precision_at_10 value: 9.762 - type: precision_at_100 value: 1.26 - type: precision_at_1000 value: 0.13799999999999998 - type: precision_at_3 value: 25.329 - type: precision_at_5 value: 17.254 - type: recall_at_1 value: 44.483 - type: recall_at_10 value: 74.819 - type: recall_at_100 value: 91.702 - type: recall_at_1000 value: 97.84 - type: recall_at_3 value: 62.13999999999999 - type: recall_at_5 value: 68.569 - task: type: Retrieval dataset: type: mteb/cqadupstack-gis name: MTEB CQADupstackGisRetrieval config: default split: test revision: 5003b3064772da1887988e05400cf3806fe491f2 metrics: - type: map_at_1 value: 26.489 - type: map_at_10 value: 37.004999999999995 - type: map_at_100 value: 38.001000000000005 - type: map_at_1000 value: 38.085 - type: map_at_3 value: 34.239999999999995 - type: map_at_5 value: 35.934 - type: mrr_at_1 value: 28.362 - type: mrr_at_10 value: 38.807 - type: mrr_at_100 value: 39.671 - type: mrr_at_1000 value: 39.736 - type: mrr_at_3 value: 36.29 - type: mrr_at_5 value: 37.906 - type: ndcg_at_1 value: 28.362 - type: ndcg_at_10 value: 42.510999999999996 - type: ndcg_at_100 value: 47.226 - type: ndcg_at_1000 value: 49.226 - type: ndcg_at_3 value: 37.295 - type: ndcg_at_5 value: 40.165 - type: precision_at_1 value: 28.362 - type: precision_at_10 value: 6.633 - type: precision_at_100 value: 0.9490000000000001 - type: precision_at_1000 value: 0.11499999999999999 - type: precision_at_3 value: 16.234 - type: precision_at_5 value: 11.434999999999999 - type: recall_at_1 value: 26.489 - type: recall_at_10 value: 57.457 - type: recall_at_100 value: 78.712 - type: recall_at_1000 value: 93.565 - type: recall_at_3 value: 43.748 - type: recall_at_5 value: 50.589 - task: type: Retrieval dataset: type: mteb/cqadupstack-mathematica name: MTEB CQADupstackMathematicaRetrieval config: default split: test revision: 90fceea13679c63fe563ded68f3b6f06e50061de metrics: - type: map_at_1 value: 12.418999999999999 - type: map_at_10 value: 22.866 - type: map_at_100 value: 24.365000000000002 - type: map_at_1000 value: 24.479 - type: map_at_3 value: 19.965 - type: map_at_5 value: 21.684 - type: mrr_at_1 value: 14.677000000000001 - type: mrr_at_10 value: 26.316 - type: mrr_at_100 value: 27.514 - type: mrr_at_1000 value: 27.57 - type: mrr_at_3 value: 23.3 - type: mrr_at_5 value: 25.191000000000003 - type: ndcg_at_1 value: 14.677000000000001 - type: ndcg_at_10 value: 28.875 - type: ndcg_at_100 value: 35.607 - type: ndcg_at_1000 value: 38.237 - type: ndcg_at_3 value: 23.284 - type: ndcg_at_5 value: 26.226 - type: precision_at_1 value: 14.677000000000001 - type: precision_at_10 value: 5.771 - type: precision_at_100 value: 1.058 - type: 
precision_at_1000 value: 0.14200000000000002 - type: precision_at_3 value: 11.940000000000001 - type: precision_at_5 value: 9.229 - type: recall_at_1 value: 12.418999999999999 - type: recall_at_10 value: 43.333 - type: recall_at_100 value: 71.942 - type: recall_at_1000 value: 90.67399999999999 - type: recall_at_3 value: 28.787000000000003 - type: recall_at_5 value: 35.638 - task: type: Retrieval dataset: type: mteb/cqadupstack-physics name: MTEB CQADupstackPhysicsRetrieval config: default split: test revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4 metrics: - type: map_at_1 value: 31.686999999999998 - type: map_at_10 value: 42.331 - type: map_at_100 value: 43.655 - type: map_at_1000 value: 43.771 - type: map_at_3 value: 38.944 - type: map_at_5 value: 40.991 - type: mrr_at_1 value: 37.921 - type: mrr_at_10 value: 47.534 - type: mrr_at_100 value: 48.362 - type: mrr_at_1000 value: 48.405 - type: mrr_at_3 value: 44.995000000000005 - type: mrr_at_5 value: 46.617 - type: ndcg_at_1 value: 37.921 - type: ndcg_at_10 value: 48.236000000000004 - type: ndcg_at_100 value: 53.705000000000005 - type: ndcg_at_1000 value: 55.596000000000004 - type: ndcg_at_3 value: 43.11 - type: ndcg_at_5 value: 45.862 - type: precision_at_1 value: 37.921 - type: precision_at_10 value: 8.643 - type: precision_at_100 value: 1.336 - type: precision_at_1000 value: 0.166 - type: precision_at_3 value: 20.308 - type: precision_at_5 value: 14.514 - type: recall_at_1 value: 31.686999999999998 - type: recall_at_10 value: 60.126999999999995 - type: recall_at_100 value: 83.10600000000001 - type: recall_at_1000 value: 95.15 - type: recall_at_3 value: 46.098 - type: recall_at_5 value: 53.179 - task: type: Retrieval dataset: type: mteb/cqadupstack-programmers name: MTEB CQADupstackProgrammersRetrieval config: default split: test revision: 6184bc1440d2dbc7612be22b50686b8826d22b32 metrics: - type: map_at_1 value: 28.686 - type: map_at_10 value: 39.146 - type: map_at_100 value: 40.543 - type: map_at_1000 value: 40.644999999999996 - type: map_at_3 value: 36.195 - type: map_at_5 value: 37.919000000000004 - type: mrr_at_1 value: 35.160000000000004 - type: mrr_at_10 value: 44.711 - type: mrr_at_100 value: 45.609 - type: mrr_at_1000 value: 45.655 - type: mrr_at_3 value: 42.409 - type: mrr_at_5 value: 43.779 - type: ndcg_at_1 value: 35.160000000000004 - type: ndcg_at_10 value: 44.977000000000004 - type: ndcg_at_100 value: 50.663000000000004 - type: ndcg_at_1000 value: 52.794 - type: ndcg_at_3 value: 40.532000000000004 - type: ndcg_at_5 value: 42.641 - type: precision_at_1 value: 35.160000000000004 - type: precision_at_10 value: 8.014000000000001 - type: precision_at_100 value: 1.269 - type: precision_at_1000 value: 0.163 - type: precision_at_3 value: 19.444 - type: precision_at_5 value: 13.653 - type: recall_at_1 value: 28.686 - type: recall_at_10 value: 56.801 - type: recall_at_100 value: 80.559 - type: recall_at_1000 value: 95.052 - type: recall_at_3 value: 43.675999999999995 - type: recall_at_5 value: 49.703 - task: type: Retrieval dataset: type: mteb/cqadupstack name: MTEB CQADupstackRetrieval config: default split: test revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4 metrics: - type: map_at_1 value: 28.173833333333338 - type: map_at_10 value: 38.202083333333334 - type: map_at_100 value: 39.47475 - type: map_at_1000 value: 39.586499999999994 - type: map_at_3 value: 35.17308333333334 - type: map_at_5 value: 36.914 - type: mrr_at_1 value: 32.92958333333333 - type: mrr_at_10 value: 42.16758333333333 - type: mrr_at_100 value: 
43.04108333333333 - type: mrr_at_1000 value: 43.092499999999994 - type: mrr_at_3 value: 39.69166666666666 - type: mrr_at_5 value: 41.19458333333333 - type: ndcg_at_1 value: 32.92958333333333 - type: ndcg_at_10 value: 43.80583333333333 - type: ndcg_at_100 value: 49.060916666666664 - type: ndcg_at_1000 value: 51.127250000000004 - type: ndcg_at_3 value: 38.80383333333333 - type: ndcg_at_5 value: 41.29658333333333 - type: precision_at_1 value: 32.92958333333333 - type: precision_at_10 value: 7.655666666666666 - type: precision_at_100 value: 1.2094166666666668 - type: precision_at_1000 value: 0.15750000000000003 - type: precision_at_3 value: 17.87975 - type: precision_at_5 value: 12.741833333333332 - type: recall_at_1 value: 28.173833333333338 - type: recall_at_10 value: 56.219249999999995 - type: recall_at_100 value: 79.01416666666665 - type: recall_at_1000 value: 93.13425000000001 - type: recall_at_3 value: 42.39241666666667 - type: recall_at_5 value: 48.764833333333335 - task: type: Retrieval dataset: type: mteb/cqadupstack-stats name: MTEB CQADupstackStatsRetrieval config: default split: test revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a metrics: - type: map_at_1 value: 25.625999999999998 - type: map_at_10 value: 32.808 - type: map_at_100 value: 33.951 - type: map_at_1000 value: 34.052 - type: map_at_3 value: 30.536 - type: map_at_5 value: 31.77 - type: mrr_at_1 value: 28.374 - type: mrr_at_10 value: 35.527 - type: mrr_at_100 value: 36.451 - type: mrr_at_1000 value: 36.522 - type: mrr_at_3 value: 33.410000000000004 - type: mrr_at_5 value: 34.537 - type: ndcg_at_1 value: 28.374 - type: ndcg_at_10 value: 37.172 - type: ndcg_at_100 value: 42.474000000000004 - type: ndcg_at_1000 value: 44.853 - type: ndcg_at_3 value: 32.931 - type: ndcg_at_5 value: 34.882999999999996 - type: precision_at_1 value: 28.374 - type: precision_at_10 value: 5.813 - type: precision_at_100 value: 0.928 - type: precision_at_1000 value: 0.121 - type: precision_at_3 value: 14.008000000000001 - type: precision_at_5 value: 9.754999999999999 - type: recall_at_1 value: 25.625999999999998 - type: recall_at_10 value: 47.812 - type: recall_at_100 value: 71.61800000000001 - type: recall_at_1000 value: 88.881 - type: recall_at_3 value: 35.876999999999995 - type: recall_at_5 value: 40.839 - task: type: Retrieval dataset: type: mteb/cqadupstack-tex name: MTEB CQADupstackTexRetrieval config: default split: test revision: 46989137a86843e03a6195de44b09deda022eec7 metrics: - type: map_at_1 value: 18.233 - type: map_at_10 value: 26.375999999999998 - type: map_at_100 value: 27.575 - type: map_at_1000 value: 27.706999999999997 - type: map_at_3 value: 23.619 - type: map_at_5 value: 25.217 - type: mrr_at_1 value: 22.023 - type: mrr_at_10 value: 30.122 - type: mrr_at_100 value: 31.083 - type: mrr_at_1000 value: 31.163999999999998 - type: mrr_at_3 value: 27.541 - type: mrr_at_5 value: 29.061999999999998 - type: ndcg_at_1 value: 22.023 - type: ndcg_at_10 value: 31.476 - type: ndcg_at_100 value: 37.114000000000004 - type: ndcg_at_1000 value: 39.981 - type: ndcg_at_3 value: 26.538 - type: ndcg_at_5 value: 29.016 - type: precision_at_1 value: 22.023 - type: precision_at_10 value: 5.819 - type: precision_at_100 value: 1.018 - type: precision_at_1000 value: 0.14300000000000002 - type: precision_at_3 value: 12.583 - type: precision_at_5 value: 9.36 - type: recall_at_1 value: 18.233 - type: recall_at_10 value: 43.029 - type: recall_at_100 value: 68.253 - type: recall_at_1000 value: 88.319 - type: recall_at_3 value: 29.541 - type: recall_at_5 
value: 35.783 - task: type: Retrieval dataset: type: mteb/cqadupstack-unix name: MTEB CQADupstackUnixRetrieval config: default split: test revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53 metrics: - type: map_at_1 value: 28.923 - type: map_at_10 value: 39.231 - type: map_at_100 value: 40.483000000000004 - type: map_at_1000 value: 40.575 - type: map_at_3 value: 35.94 - type: map_at_5 value: 37.683 - type: mrr_at_1 value: 33.955 - type: mrr_at_10 value: 43.163000000000004 - type: mrr_at_100 value: 44.054 - type: mrr_at_1000 value: 44.099 - type: mrr_at_3 value: 40.361000000000004 - type: mrr_at_5 value: 41.905 - type: ndcg_at_1 value: 33.955 - type: ndcg_at_10 value: 45.068000000000005 - type: ndcg_at_100 value: 50.470000000000006 - type: ndcg_at_1000 value: 52.349000000000004 - type: ndcg_at_3 value: 39.298 - type: ndcg_at_5 value: 41.821999999999996 - type: precision_at_1 value: 33.955 - type: precision_at_10 value: 7.649 - type: precision_at_100 value: 1.173 - type: precision_at_1000 value: 0.14200000000000002 - type: precision_at_3 value: 17.817 - type: precision_at_5 value: 12.537 - type: recall_at_1 value: 28.923 - type: recall_at_10 value: 58.934 - type: recall_at_100 value: 81.809 - type: recall_at_1000 value: 94.71300000000001 - type: recall_at_3 value: 42.975 - type: recall_at_5 value: 49.501 - task: type: Retrieval dataset: type: mteb/cqadupstack-webmasters name: MTEB CQADupstackWebmastersRetrieval config: default split: test revision: 160c094312a0e1facb97e55eeddb698c0abe3571 metrics: - type: map_at_1 value: 28.596 - type: map_at_10 value: 38.735 - type: map_at_100 value: 40.264 - type: map_at_1000 value: 40.48 - type: map_at_3 value: 35.394999999999996 - type: map_at_5 value: 37.099 - type: mrr_at_1 value: 33.992 - type: mrr_at_10 value: 43.076 - type: mrr_at_100 value: 44.005 - type: mrr_at_1000 value: 44.043 - type: mrr_at_3 value: 40.415 - type: mrr_at_5 value: 41.957 - type: ndcg_at_1 value: 33.992 - type: ndcg_at_10 value: 44.896 - type: ndcg_at_100 value: 50.44499999999999 - type: ndcg_at_1000 value: 52.675000000000004 - type: ndcg_at_3 value: 39.783 - type: ndcg_at_5 value: 41.997 - type: precision_at_1 value: 33.992 - type: precision_at_10 value: 8.498 - type: precision_at_100 value: 1.585 - type: precision_at_1000 value: 0.248 - type: precision_at_3 value: 18.511 - type: precision_at_5 value: 13.241 - type: recall_at_1 value: 28.596 - type: recall_at_10 value: 56.885 - type: recall_at_100 value: 82.306 - type: recall_at_1000 value: 95.813 - type: recall_at_3 value: 42.168 - type: recall_at_5 value: 48.32 - task: type: Retrieval dataset: type: mteb/cqadupstack-wordpress name: MTEB CQADupstackWordpressRetrieval config: default split: test revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4 metrics: - type: map_at_1 value: 25.576 - type: map_at_10 value: 33.034 - type: map_at_100 value: 34.117999999999995 - type: map_at_1000 value: 34.222 - type: map_at_3 value: 30.183 - type: map_at_5 value: 31.974000000000004 - type: mrr_at_1 value: 27.542 - type: mrr_at_10 value: 34.838 - type: mrr_at_100 value: 35.824 - type: mrr_at_1000 value: 35.899 - type: mrr_at_3 value: 32.348 - type: mrr_at_5 value: 34.039 - type: ndcg_at_1 value: 27.542 - type: ndcg_at_10 value: 37.663000000000004 - type: ndcg_at_100 value: 42.762 - type: ndcg_at_1000 value: 45.235 - type: ndcg_at_3 value: 32.227 - type: ndcg_at_5 value: 35.27 - type: precision_at_1 value: 27.542 - type: precision_at_10 value: 5.840999999999999 - type: precision_at_100 value: 0.895 - type: precision_at_1000 value: 0.123 - type: 
precision_at_3 value: 13.370000000000001 - type: precision_at_5 value: 9.797 - type: recall_at_1 value: 25.576 - type: recall_at_10 value: 50.285000000000004 - type: recall_at_100 value: 73.06 - type: recall_at_1000 value: 91.15299999999999 - type: recall_at_3 value: 35.781 - type: recall_at_5 value: 43.058 - task: type: Retrieval dataset: type: mteb/climate-fever name: MTEB ClimateFEVER config: default split: test revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380 metrics: - type: map_at_1 value: 17.061 - type: map_at_10 value: 29.464000000000002 - type: map_at_100 value: 31.552999999999997 - type: map_at_1000 value: 31.707 - type: map_at_3 value: 24.834999999999997 - type: map_at_5 value: 27.355 - type: mrr_at_1 value: 38.958 - type: mrr_at_10 value: 51.578 - type: mrr_at_100 value: 52.262 - type: mrr_at_1000 value: 52.283 - type: mrr_at_3 value: 48.599 - type: mrr_at_5 value: 50.404 - type: ndcg_at_1 value: 38.958 - type: ndcg_at_10 value: 39.367999999999995 - type: ndcg_at_100 value: 46.521 - type: ndcg_at_1000 value: 49.086999999999996 - type: ndcg_at_3 value: 33.442 - type: ndcg_at_5 value: 35.515 - type: precision_at_1 value: 38.958 - type: precision_at_10 value: 12.110999999999999 - type: precision_at_100 value: 1.982 - type: precision_at_1000 value: 0.247 - type: precision_at_3 value: 25.102999999999998 - type: precision_at_5 value: 18.971 - type: recall_at_1 value: 17.061 - type: recall_at_10 value: 45.198 - type: recall_at_100 value: 69.18900000000001 - type: recall_at_1000 value: 83.38499999999999 - type: recall_at_3 value: 30.241 - type: recall_at_5 value: 36.851 - task: type: Retrieval dataset: type: mteb/dbpedia name: MTEB DBPedia config: default split: test revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659 metrics: - type: map_at_1 value: 9.398 - type: map_at_10 value: 21.421 - type: map_at_100 value: 31.649 - type: map_at_1000 value: 33.469 - type: map_at_3 value: 15.310000000000002 - type: map_at_5 value: 17.946 - type: mrr_at_1 value: 71 - type: mrr_at_10 value: 78.92099999999999 - type: mrr_at_100 value: 79.225 - type: mrr_at_1000 value: 79.23 - type: mrr_at_3 value: 77.792 - type: mrr_at_5 value: 78.467 - type: ndcg_at_1 value: 57.99999999999999 - type: ndcg_at_10 value: 44.733000000000004 - type: ndcg_at_100 value: 50.646 - type: ndcg_at_1000 value: 57.903999999999996 - type: ndcg_at_3 value: 49.175999999999995 - type: ndcg_at_5 value: 46.800999999999995 - type: precision_at_1 value: 71 - type: precision_at_10 value: 36.25 - type: precision_at_100 value: 12.135 - type: precision_at_1000 value: 2.26 - type: precision_at_3 value: 52.75 - type: precision_at_5 value: 45.65 - type: recall_at_1 value: 9.398 - type: recall_at_10 value: 26.596999999999998 - type: recall_at_100 value: 57.943 - type: recall_at_1000 value: 81.147 - type: recall_at_3 value: 16.634 - type: recall_at_5 value: 20.7 - task: type: Classification dataset: type: mteb/emotion name: MTEB EmotionClassification config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 46.535000000000004 - type: f1 value: 42.53702746452163 - task: type: Retrieval dataset: type: mteb/fever name: MTEB FEVER config: default split: test revision: bea83ef9e8fb933d90a2f1d5515737465d613e12 metrics: - type: map_at_1 value: 77.235 - type: map_at_10 value: 85.504 - type: map_at_100 value: 85.707 - type: map_at_1000 value: 85.718 - type: map_at_3 value: 84.425 - type: map_at_5 value: 85.13 - type: mrr_at_1 value: 83.363 - type: mrr_at_10 value: 89.916 - type: mrr_at_100 value: 
89.955 - type: mrr_at_1000 value: 89.956 - type: mrr_at_3 value: 89.32600000000001 - type: mrr_at_5 value: 89.79 - type: ndcg_at_1 value: 83.363 - type: ndcg_at_10 value: 89.015 - type: ndcg_at_100 value: 89.649 - type: ndcg_at_1000 value: 89.825 - type: ndcg_at_3 value: 87.45100000000001 - type: ndcg_at_5 value: 88.39399999999999 - type: precision_at_1 value: 83.363 - type: precision_at_10 value: 10.659 - type: precision_at_100 value: 1.122 - type: precision_at_1000 value: 0.11499999999999999 - type: precision_at_3 value: 33.338 - type: precision_at_5 value: 20.671999999999997 - type: recall_at_1 value: 77.235 - type: recall_at_10 value: 95.389 - type: recall_at_100 value: 97.722 - type: recall_at_1000 value: 98.744 - type: recall_at_3 value: 91.19800000000001 - type: recall_at_5 value: 93.635 - task: type: Retrieval dataset: type: mteb/fiqa name: MTEB FiQA2018 config: default split: test revision: 27a168819829fe9bcd655c2df245fb19452e8e06 metrics: - type: map_at_1 value: 20.835 - type: map_at_10 value: 34.459 - type: map_at_100 value: 36.335 - type: map_at_1000 value: 36.518 - type: map_at_3 value: 30.581000000000003 - type: map_at_5 value: 32.859 - type: mrr_at_1 value: 40.894999999999996 - type: mrr_at_10 value: 50.491 - type: mrr_at_100 value: 51.243 - type: mrr_at_1000 value: 51.286 - type: mrr_at_3 value: 47.994 - type: mrr_at_5 value: 49.429 - type: ndcg_at_1 value: 40.894999999999996 - type: ndcg_at_10 value: 42.403 - type: ndcg_at_100 value: 48.954 - type: ndcg_at_1000 value: 51.961 - type: ndcg_at_3 value: 39.11 - type: ndcg_at_5 value: 40.152 - type: precision_at_1 value: 40.894999999999996 - type: precision_at_10 value: 11.466 - type: precision_at_100 value: 1.833 - type: precision_at_1000 value: 0.23700000000000002 - type: precision_at_3 value: 25.874000000000002 - type: precision_at_5 value: 19.012 - type: recall_at_1 value: 20.835 - type: recall_at_10 value: 49.535000000000004 - type: recall_at_100 value: 73.39099999999999 - type: recall_at_1000 value: 91.01599999999999 - type: recall_at_3 value: 36.379 - type: recall_at_5 value: 42.059999999999995 - task: type: Retrieval dataset: type: mteb/hotpotqa name: MTEB HotpotQA config: default split: test revision: ab518f4d6fcca38d87c25209f94beba119d02014 metrics: - type: map_at_1 value: 40.945 - type: map_at_10 value: 65.376 - type: map_at_100 value: 66.278 - type: map_at_1000 value: 66.33 - type: map_at_3 value: 61.753 - type: map_at_5 value: 64.077 - type: mrr_at_1 value: 81.891 - type: mrr_at_10 value: 87.256 - type: mrr_at_100 value: 87.392 - type: mrr_at_1000 value: 87.395 - type: mrr_at_3 value: 86.442 - type: mrr_at_5 value: 86.991 - type: ndcg_at_1 value: 81.891 - type: ndcg_at_10 value: 73.654 - type: ndcg_at_100 value: 76.62299999999999 - type: ndcg_at_1000 value: 77.60000000000001 - type: ndcg_at_3 value: 68.71199999999999 - type: ndcg_at_5 value: 71.563 - type: precision_at_1 value: 81.891 - type: precision_at_10 value: 15.409 - type: precision_at_100 value: 1.77 - type: precision_at_1000 value: 0.19 - type: precision_at_3 value: 44.15 - type: precision_at_5 value: 28.732000000000003 - type: recall_at_1 value: 40.945 - type: recall_at_10 value: 77.04299999999999 - type: recall_at_100 value: 88.508 - type: recall_at_1000 value: 94.943 - type: recall_at_3 value: 66.226 - type: recall_at_5 value: 71.83 - task: type: Classification dataset: type: mteb/imdb name: MTEB ImdbClassification config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 74.08200000000001 - type: ap 
value: 68.10929101713998 - type: f1 value: 73.98447117652009 - task: type: Retrieval dataset: type: mteb/msmarco name: MTEB MSMARCO config: default split: dev revision: c5a29a104738b98a9e76336939199e264163d4a0 metrics: - type: map_at_1 value: 21.729000000000003 - type: map_at_10 value: 34.602 - type: map_at_100 value: 35.756 - type: map_at_1000 value: 35.803000000000004 - type: map_at_3 value: 30.619000000000003 - type: map_at_5 value: 32.914 - type: mrr_at_1 value: 22.364 - type: mrr_at_10 value: 35.183 - type: mrr_at_100 value: 36.287000000000006 - type: mrr_at_1000 value: 36.327999999999996 - type: mrr_at_3 value: 31.258000000000003 - type: mrr_at_5 value: 33.542 - type: ndcg_at_1 value: 22.364 - type: ndcg_at_10 value: 41.765 - type: ndcg_at_100 value: 47.293 - type: ndcg_at_1000 value: 48.457 - type: ndcg_at_3 value: 33.676 - type: ndcg_at_5 value: 37.783 - type: precision_at_1 value: 22.364 - type: precision_at_10 value: 6.662 - type: precision_at_100 value: 0.943 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 14.435999999999998 - type: precision_at_5 value: 10.764999999999999 - type: recall_at_1 value: 21.729000000000003 - type: recall_at_10 value: 63.815999999999995 - type: recall_at_100 value: 89.265 - type: recall_at_1000 value: 98.149 - type: recall_at_3 value: 41.898 - type: recall_at_5 value: 51.76500000000001 - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (en) config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 92.73141814865483 - type: f1 value: 92.17518476408004 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (en) config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 65.18011855905152 - type: f1 value: 46.70999638311856 - task: type: Classification dataset: type: masakhane/masakhanews name: MTEB MasakhaNEWSClassification (eng) config: eng split: test revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60 metrics: - type: accuracy value: 75.24261603375525 - type: f1 value: 74.07895183913367 - task: type: Clustering dataset: type: masakhane/masakhanews name: MTEB MasakhaNEWSClusteringP2P (eng) config: eng split: test revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60 metrics: - type: v_measure value: 28.43855875387446 - task: type: Clustering dataset: type: masakhane/masakhanews name: MTEB MasakhaNEWSClusteringS2S (eng) config: eng split: test revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60 metrics: - type: v_measure value: 29.05331990256969 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (en) config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 66.92333557498318 - type: f1 value: 64.29789389602692 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (en) config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 72.74714189643578 - type: f1 value: 71.672585608315 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-p2p name: MTEB MedrxivClusteringP2P config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 31.503564225501613 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-s2s name: MTEB MedrxivClusteringS2S config: default split: test revision: 
35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 28.410225127136457 - task: type: Reranking dataset: type: mteb/mind_small name: MTEB MindSmallReranking config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 29.170019896091908 - type: mrr value: 29.881276831500976 - task: type: Retrieval dataset: type: mteb/nfcorpus name: MTEB NFCorpus config: default split: test revision: ec0fa4fe99da2ff19ca1214b7966684033a58814 metrics: - type: map_at_1 value: 6.544 - type: map_at_10 value: 14.116999999999999 - type: map_at_100 value: 17.522 - type: map_at_1000 value: 19 - type: map_at_3 value: 10.369 - type: map_at_5 value: 12.189 - type: mrr_at_1 value: 47.988 - type: mrr_at_10 value: 56.84 - type: mrr_at_100 value: 57.367000000000004 - type: mrr_at_1000 value: 57.403000000000006 - type: mrr_at_3 value: 54.592 - type: mrr_at_5 value: 56.233 - type: ndcg_at_1 value: 45.82 - type: ndcg_at_10 value: 36.767 - type: ndcg_at_100 value: 33.356 - type: ndcg_at_1000 value: 42.062 - type: ndcg_at_3 value: 42.15 - type: ndcg_at_5 value: 40.355000000000004 - type: precision_at_1 value: 47.988 - type: precision_at_10 value: 27.121000000000002 - type: precision_at_100 value: 8.455 - type: precision_at_1000 value: 2.103 - type: precision_at_3 value: 39.628 - type: precision_at_5 value: 35.356 - type: recall_at_1 value: 6.544 - type: recall_at_10 value: 17.928 - type: recall_at_100 value: 32.843 - type: recall_at_1000 value: 65.752 - type: recall_at_3 value: 11.297 - type: recall_at_5 value: 14.357000000000001 - task: type: Retrieval dataset: type: mteb/nq name: MTEB NQ config: default split: test revision: b774495ed302d8c44a3a7ea25c90dbce03968f31 metrics: - type: map_at_1 value: 39.262 - type: map_at_10 value: 55.095000000000006 - type: map_at_100 value: 55.93900000000001 - type: map_at_1000 value: 55.955999999999996 - type: map_at_3 value: 50.93 - type: map_at_5 value: 53.491 - type: mrr_at_1 value: 43.598 - type: mrr_at_10 value: 57.379999999999995 - type: mrr_at_100 value: 57.940999999999995 - type: mrr_at_1000 value: 57.952000000000005 - type: mrr_at_3 value: 53.998000000000005 - type: mrr_at_5 value: 56.128 - type: ndcg_at_1 value: 43.598 - type: ndcg_at_10 value: 62.427 - type: ndcg_at_100 value: 65.759 - type: ndcg_at_1000 value: 66.133 - type: ndcg_at_3 value: 54.745999999999995 - type: ndcg_at_5 value: 58.975 - type: precision_at_1 value: 43.598 - type: precision_at_10 value: 9.789 - type: precision_at_100 value: 1.171 - type: precision_at_1000 value: 0.121 - type: precision_at_3 value: 24.295 - type: precision_at_5 value: 17.028 - type: recall_at_1 value: 39.262 - type: recall_at_10 value: 82.317 - type: recall_at_100 value: 96.391 - type: recall_at_1000 value: 99.116 - type: recall_at_3 value: 62.621 - type: recall_at_5 value: 72.357 - task: type: Classification dataset: type: ag_news name: MTEB NewsClassification config: default split: test revision: eb185aade064a813bc0b7f42de02595523103ca4 metrics: - type: accuracy value: 78.17500000000001 - type: f1 value: 78.01940892857273 - task: type: PairClassification dataset: type: GEM/opusparcus name: MTEB OpusparcusPC (en) config: en split: test revision: 9e9b1f8ef51616073f47f306f7f47dd91663f86a metrics: - type: cos_sim_accuracy value: 99.89816700610999 - type: cos_sim_ap value: 100 - type: cos_sim_f1 value: 99.9490575649516 - type: cos_sim_precision value: 100 - type: cos_sim_recall value: 99.89816700610999 - type: dot_accuracy value: 99.89816700610999 - type: dot_ap value: 100 - 
type: dot_f1 value: 99.9490575649516 - type: dot_precision value: 100 - type: dot_recall value: 99.89816700610999 - type: euclidean_accuracy value: 99.89816700610999 - type: euclidean_ap value: 100 - type: euclidean_f1 value: 99.9490575649516 - type: euclidean_precision value: 100 - type: euclidean_recall value: 99.89816700610999 - type: manhattan_accuracy value: 99.89816700610999 - type: manhattan_ap value: 100 - type: manhattan_f1 value: 99.9490575649516 - type: manhattan_precision value: 100 - type: manhattan_recall value: 99.89816700610999 - type: max_accuracy value: 99.89816700610999 - type: max_ap value: 100 - type: max_f1 value: 99.9490575649516 - task: type: PairClassification dataset: type: paws-x name: MTEB PawsX (en) config: en split: test revision: 8a04d940a42cd40658986fdd8e3da561533a3646 metrics: - type: cos_sim_accuracy value: 61 - type: cos_sim_ap value: 59.630757252602464 - type: cos_sim_f1 value: 62.37521514629949 - type: cos_sim_precision value: 45.34534534534534 - type: cos_sim_recall value: 99.88974641675854 - type: dot_accuracy value: 61 - type: dot_ap value: 59.631527308059006 - type: dot_f1 value: 62.37521514629949 - type: dot_precision value: 45.34534534534534 - type: dot_recall value: 99.88974641675854 - type: euclidean_accuracy value: 61 - type: euclidean_ap value: 59.630757252602464 - type: euclidean_f1 value: 62.37521514629949 - type: euclidean_precision value: 45.34534534534534 - type: euclidean_recall value: 99.88974641675854 - type: manhattan_accuracy value: 60.9 - type: manhattan_ap value: 59.613947780462254 - type: manhattan_f1 value: 62.37521514629949 - type: manhattan_precision value: 45.34534534534534 - type: manhattan_recall value: 99.88974641675854 - type: max_accuracy value: 61 - type: max_ap value: 59.631527308059006 - type: max_f1 value: 62.37521514629949 - task: type: Retrieval dataset: type: mteb/quora name: MTEB QuoraRetrieval config: default split: test revision: e4e08e0b7dbe3c8700f0daef558ff32256715259 metrics: - type: map_at_1 value: 69.963 - type: map_at_10 value: 83.59400000000001 - type: map_at_100 value: 84.236 - type: map_at_1000 value: 84.255 - type: map_at_3 value: 80.69800000000001 - type: map_at_5 value: 82.568 - type: mrr_at_1 value: 80.58999999999999 - type: mrr_at_10 value: 86.78200000000001 - type: mrr_at_100 value: 86.89099999999999 - type: mrr_at_1000 value: 86.893 - type: mrr_at_3 value: 85.757 - type: mrr_at_5 value: 86.507 - type: ndcg_at_1 value: 80.60000000000001 - type: ndcg_at_10 value: 87.41799999999999 - type: ndcg_at_100 value: 88.723 - type: ndcg_at_1000 value: 88.875 - type: ndcg_at_3 value: 84.565 - type: ndcg_at_5 value: 86.236 - type: precision_at_1 value: 80.60000000000001 - type: precision_at_10 value: 13.239 - type: precision_at_100 value: 1.5150000000000001 - type: precision_at_1000 value: 0.156 - type: precision_at_3 value: 36.947 - type: precision_at_5 value: 24.354 - type: recall_at_1 value: 69.963 - type: recall_at_10 value: 94.553 - type: recall_at_100 value: 99.104 - type: recall_at_1000 value: 99.872 - type: recall_at_3 value: 86.317 - type: recall_at_5 value: 91.023 - task: type: Clustering dataset: type: mteb/reddit-clustering name: MTEB RedditClustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 47.52890410998761 - task: type: Clustering dataset: type: mteb/reddit-clustering-p2p name: MTEB RedditClusteringP2P config: default split: test revision: 385e3cb46b4cfa89021f56c4380204149d0efe33 metrics: - type: v_measure value: 
62.760692287940486 - task: type: Retrieval dataset: type: mteb/scidocs name: MTEB SCIDOCS config: default split: test revision: f8c2fcf00f625baaa80f62ec5bd9e1fff3b8ae88 metrics: - type: map_at_1 value: 5.093 - type: map_at_10 value: 12.695 - type: map_at_100 value: 14.824000000000002 - type: map_at_1000 value: 15.123000000000001 - type: map_at_3 value: 8.968 - type: map_at_5 value: 10.828 - type: mrr_at_1 value: 25.1 - type: mrr_at_10 value: 35.894999999999996 - type: mrr_at_100 value: 36.966 - type: mrr_at_1000 value: 37.019999999999996 - type: mrr_at_3 value: 32.467 - type: mrr_at_5 value: 34.416999999999994 - type: ndcg_at_1 value: 25.1 - type: ndcg_at_10 value: 21.096999999999998 - type: ndcg_at_100 value: 29.202 - type: ndcg_at_1000 value: 34.541 - type: ndcg_at_3 value: 19.875 - type: ndcg_at_5 value: 17.497 - type: precision_at_1 value: 25.1 - type: precision_at_10 value: 10.9 - type: precision_at_100 value: 2.255 - type: precision_at_1000 value: 0.35400000000000004 - type: precision_at_3 value: 18.367 - type: precision_at_5 value: 15.299999999999999 - type: recall_at_1 value: 5.093 - type: recall_at_10 value: 22.092 - type: recall_at_100 value: 45.778 - type: recall_at_1000 value: 71.985 - type: recall_at_3 value: 11.167 - type: recall_at_5 value: 15.501999999999999 - task: type: STS dataset: type: mteb/sickr-sts name: MTEB SICK-R config: default split: test revision: 20a6d6f312dd54037fe07a32d58e5e168867909d metrics: - type: cos_sim_pearson value: 74.04386981759481 - type: cos_sim_spearman value: 69.12484963763646 - type: euclidean_pearson value: 71.49384353291062 - type: euclidean_spearman value: 69.12484548317074 - type: manhattan_pearson value: 71.49828173987272 - type: manhattan_spearman value: 69.08350274367014 - task: type: STS dataset: type: mteb/sts12-sts name: MTEB STS12 config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 66.95372527615659 - type: cos_sim_spearman value: 66.96821894433991 - type: euclidean_pearson value: 64.675348002074 - type: euclidean_spearman value: 66.96821894433991 - type: manhattan_pearson value: 64.5965887073831 - type: manhattan_spearman value: 66.88569076794741 - task: type: STS dataset: type: mteb/sts13-sts name: MTEB STS13 config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 77.34698437961983 - type: cos_sim_spearman value: 79.1153001117325 - type: euclidean_pearson value: 78.53562874696966 - type: euclidean_spearman value: 79.11530018205724 - type: manhattan_pearson value: 78.46484988944093 - type: manhattan_spearman value: 79.01416027493104 - task: type: STS dataset: type: mteb/sts14-sts name: MTEB STS14 config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 68.81220371935373 - type: cos_sim_spearman value: 68.50538405089604 - type: euclidean_pearson value: 68.69204272683749 - type: euclidean_spearman value: 68.50534223912419 - type: manhattan_pearson value: 68.67300120149523 - type: manhattan_spearman value: 68.45404301623115 - task: type: STS dataset: type: mteb/sts15-sts name: MTEB STS15 config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 78.2464678879813 - type: cos_sim_spearman value: 79.92003940566667 - type: euclidean_pearson value: 79.8080778793964 - type: euclidean_spearman value: 79.92003940566667 - type: manhattan_pearson value: 79.80153621444681 - type: manhattan_spearman 
value: 79.91293261418134 - task: type: STS dataset: type: mteb/sts16-sts name: MTEB STS16 config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 76.31179207708662 - type: cos_sim_spearman value: 78.65597349856115 - type: euclidean_pearson value: 78.76937027472678 - type: euclidean_spearman value: 78.65597349856115 - type: manhattan_pearson value: 78.77129513300605 - type: manhattan_spearman value: 78.62640467680775 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (en-en) config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 79.43158429552561 - type: cos_sim_spearman value: 81.46108646565362 - type: euclidean_pearson value: 81.47071791452292 - type: euclidean_spearman value: 81.46108646565362 - type: manhattan_pearson value: 81.56920643846031 - type: manhattan_spearman value: 81.42226241399516 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (en) config: en split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 66.89546474141514 - type: cos_sim_spearman value: 65.8393752170531 - type: euclidean_pearson value: 67.2580522762307 - type: euclidean_spearman value: 65.8393752170531 - type: manhattan_pearson value: 67.45157729300522 - type: manhattan_spearman value: 66.19470854403802 - task: type: STS dataset: type: mteb/stsbenchmark-sts name: MTEB STSBenchmark config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 71.39566306334434 - type: cos_sim_spearman value: 74.0981396086974 - type: euclidean_pearson value: 73.7834496259745 - type: euclidean_spearman value: 74.09803741302046 - type: manhattan_pearson value: 73.79958138780945 - type: manhattan_spearman value: 74.09894837555905 - task: type: STS dataset: type: PhilipMay/stsb_multi_mt name: MTEB STSBenchmarkMultilingualSTS (en) config: en split: test revision: 93d57ef91790589e3ce9c365164337a8a78b7632 metrics: - type: cos_sim_pearson value: 71.39566311006806 - type: cos_sim_spearman value: 74.0981396086974 - type: euclidean_pearson value: 73.78344970897099 - type: euclidean_spearman value: 74.09803741302046 - type: manhattan_pearson value: 73.79958147136705 - type: manhattan_spearman value: 74.09894837555905 - task: type: Reranking dataset: type: mteb/scidocs-reranking name: MTEB SciDocsRR config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 80.81059564334683 - type: mrr value: 94.62696617108381 - task: type: Retrieval dataset: type: mteb/scifact name: MTEB SciFact config: default split: test revision: 0228b52cf27578f30900b9e5271d331663a030d7 metrics: - type: map_at_1 value: 57.760999999999996 - type: map_at_10 value: 68.614 - type: map_at_100 value: 69.109 - type: map_at_1000 value: 69.134 - type: map_at_3 value: 65.735 - type: map_at_5 value: 67.42099999999999 - type: mrr_at_1 value: 60.667 - type: mrr_at_10 value: 69.94200000000001 - type: mrr_at_100 value: 70.254 - type: mrr_at_1000 value: 70.28 - type: mrr_at_3 value: 67.72200000000001 - type: mrr_at_5 value: 69.18900000000001 - type: ndcg_at_1 value: 60.667 - type: ndcg_at_10 value: 73.548 - type: ndcg_at_100 value: 75.381 - type: ndcg_at_1000 value: 75.991 - type: ndcg_at_3 value: 68.685 - type: ndcg_at_5 value: 71.26 - type: precision_at_1 value: 60.667 - type: precision_at_10 value: 9.833 - type: precision_at_100 value: 1.08 - type: 
precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 26.889000000000003 - type: precision_at_5 value: 17.8 - type: recall_at_1 value: 57.760999999999996 - type: recall_at_10 value: 87.13300000000001 - type: recall_at_100 value: 95 - type: recall_at_1000 value: 99.667 - type: recall_at_3 value: 74.211 - type: recall_at_5 value: 80.63900000000001 - task: type: PairClassification dataset: type: mteb/sprintduplicatequestions-pairclassification name: MTEB SprintDuplicateQuestions config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.81881188118813 - type: cos_sim_ap value: 95.21196473745837 - type: cos_sim_f1 value: 90.69767441860465 - type: cos_sim_precision value: 91.71779141104295 - type: cos_sim_recall value: 89.7 - type: dot_accuracy value: 99.81881188118813 - type: dot_ap value: 95.21196473745837 - type: dot_f1 value: 90.69767441860465 - type: dot_precision value: 91.71779141104295 - type: dot_recall value: 89.7 - type: euclidean_accuracy value: 99.81881188118813 - type: euclidean_ap value: 95.21196473745839 - type: euclidean_f1 value: 90.69767441860465 - type: euclidean_precision value: 91.71779141104295 - type: euclidean_recall value: 89.7 - type: manhattan_accuracy value: 99.81287128712871 - type: manhattan_ap value: 95.16667174835017 - type: manhattan_f1 value: 90.41095890410959 - type: manhattan_precision value: 91.7610710607621 - type: manhattan_recall value: 89.1 - type: max_accuracy value: 99.81881188118813 - type: max_ap value: 95.21196473745839 - type: max_f1 value: 90.69767441860465 - task: type: Clustering dataset: type: mteb/stackexchange-clustering name: MTEB StackExchangeClustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 59.54942204515638 - task: type: Clustering dataset: type: mteb/stackexchange-clustering-p2p name: MTEB StackExchangeClusteringP2P config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 39.42892282672948 - task: type: Reranking dataset: type: mteb/stackoverflowdupquestions-reranking name: MTEB StackOverflowDupQuestions config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 51.189033075914324 - type: mrr value: 51.97014790764791 - task: type: Summarization dataset: type: mteb/summeval name: MTEB SummEval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 30.09466569775977 - type: cos_sim_spearman value: 30.31058660775912 - type: dot_pearson value: 30.09466438861689 - type: dot_spearman value: 30.31058660775912 - task: type: Retrieval dataset: type: mteb/trec-covid name: MTEB TRECCOVID config: default split: test revision: bb9466bac8153a0349341eb1b22e06409e78ef4e metrics: - type: map_at_1 value: 0.253 - type: map_at_10 value: 2.07 - type: map_at_100 value: 12.679000000000002 - type: map_at_1000 value: 30.412 - type: map_at_3 value: 0.688 - type: map_at_5 value: 1.079 - type: mrr_at_1 value: 96 - type: mrr_at_10 value: 98 - type: mrr_at_100 value: 98 - type: mrr_at_1000 value: 98 - type: mrr_at_3 value: 98 - type: mrr_at_5 value: 98 - type: ndcg_at_1 value: 89 - type: ndcg_at_10 value: 79.646 - type: ndcg_at_100 value: 62.217999999999996 - type: ndcg_at_1000 value: 55.13400000000001 - type: ndcg_at_3 value: 83.458 - type: ndcg_at_5 value: 80.982 - type: precision_at_1 value: 96 - type: precision_at_10 value: 84.6 - type: 
precision_at_100 value: 64.34 - type: precision_at_1000 value: 24.534 - type: precision_at_3 value: 88.667 - type: precision_at_5 value: 85.6 - type: recall_at_1 value: 0.253 - type: recall_at_10 value: 2.253 - type: recall_at_100 value: 15.606 - type: recall_at_1000 value: 51.595 - type: recall_at_3 value: 0.7100000000000001 - type: recall_at_5 value: 1.139 - task: type: Retrieval dataset: type: mteb/touche2020 name: MTEB Touche2020 config: default split: test revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f metrics: - type: map_at_1 value: 3.0540000000000003 - type: map_at_10 value: 13.078999999999999 - type: map_at_100 value: 19.468 - type: map_at_1000 value: 21.006 - type: map_at_3 value: 6.8629999999999995 - type: map_at_5 value: 9.187 - type: mrr_at_1 value: 42.857 - type: mrr_at_10 value: 56.735 - type: mrr_at_100 value: 57.352000000000004 - type: mrr_at_1000 value: 57.352000000000004 - type: mrr_at_3 value: 52.721 - type: mrr_at_5 value: 54.66 - type: ndcg_at_1 value: 38.775999999999996 - type: ndcg_at_10 value: 31.469 - type: ndcg_at_100 value: 42.016999999999996 - type: ndcg_at_1000 value: 52.60399999999999 - type: ndcg_at_3 value: 35.894 - type: ndcg_at_5 value: 33.873 - type: precision_at_1 value: 42.857 - type: precision_at_10 value: 27.346999999999998 - type: precision_at_100 value: 8.327 - type: precision_at_1000 value: 1.551 - type: precision_at_3 value: 36.735 - type: precision_at_5 value: 33.469 - type: recall_at_1 value: 3.0540000000000003 - type: recall_at_10 value: 19.185 - type: recall_at_100 value: 51.056000000000004 - type: recall_at_1000 value: 82.814 - type: recall_at_3 value: 7.961 - type: recall_at_5 value: 11.829 - task: type: Classification dataset: type: mteb/toxic_conversations_50k name: MTEB ToxicConversationsClassification config: default split: test revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de metrics: - type: accuracy value: 64.9346 - type: ap value: 12.121605736777527 - type: f1 value: 50.169902005887955 - task: type: Classification dataset: type: mteb/tweet_sentiment_extraction name: MTEB TweetSentimentExtractionClassification config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 56.72608941709111 - type: f1 value: 57.0702928875253 - task: type: Clustering dataset: type: mteb/twentynewsgroups-clustering name: MTEB TwentyNewsgroupsClustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 37.72671554400943 - task: type: PairClassification dataset: type: mteb/twittersemeval2015-pairclassification name: MTEB TwitterSemEval2015 config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 82.84556237706384 - type: cos_sim_ap value: 63.28364215788651 - type: cos_sim_f1 value: 60.00000000000001 - type: cos_sim_precision value: 54.45161290322581 - type: cos_sim_recall value: 66.80738786279683 - type: dot_accuracy value: 82.84556237706384 - type: dot_ap value: 63.28364302860433 - type: dot_f1 value: 60.00000000000001 - type: dot_precision value: 54.45161290322581 - type: dot_recall value: 66.80738786279683 - type: euclidean_accuracy value: 82.84556237706384 - type: euclidean_ap value: 63.28363625097978 - type: euclidean_f1 value: 60.00000000000001 - type: euclidean_precision value: 54.45161290322581 - type: euclidean_recall value: 66.80738786279683 - type: manhattan_accuracy value: 82.86940454193241 - type: manhattan_ap value: 63.244773709836764 - type: manhattan_f1 value: 
60.12680942696495 - type: manhattan_precision value: 55.00109433136353 - type: manhattan_recall value: 66.3060686015831 - type: max_accuracy value: 82.86940454193241 - type: max_ap value: 63.28364302860433 - type: max_f1 value: 60.12680942696495 - task: type: PairClassification dataset: type: mteb/twitterurlcorpus-pairclassification name: MTEB TwitterURLCorpus config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 88.32033220786278 - type: cos_sim_ap value: 84.71928176006863 - type: cos_sim_f1 value: 76.51483333969684 - type: cos_sim_precision value: 75.89184276300841 - type: cos_sim_recall value: 77.14813674160764 - type: dot_accuracy value: 88.32033220786278 - type: dot_ap value: 84.71928330149228 - type: dot_f1 value: 76.51483333969684 - type: dot_precision value: 75.89184276300841 - type: dot_recall value: 77.14813674160764 - type: euclidean_accuracy value: 88.32033220786278 - type: euclidean_ap value: 84.71928045384345 - type: euclidean_f1 value: 76.51483333969684 - type: euclidean_precision value: 75.89184276300841 - type: euclidean_recall value: 77.14813674160764 - type: manhattan_accuracy value: 88.27570147863545 - type: manhattan_ap value: 84.68523541579755 - type: manhattan_f1 value: 76.51512269355146 - type: manhattan_precision value: 75.62608107091825 - type: manhattan_recall value: 77.42531567600862 - type: max_accuracy value: 88.32033220786278 - type: max_ap value: 84.71928330149228 - type: max_f1 value: 76.51512269355146 - task: type: Clustering dataset: type: jinaai/cities_wiki_clustering name: MTEB WikiCitiesClustering config: default split: test revision: ddc9ee9242fa65332597f70e967ecc38b9d734fa metrics: - type: v_measure value: 85.30624598674467 license: apache-2.0 ---
<h1 align="center">Snowflake's Arctic-embed-m</h1>
<h4 align="center">
   <p>
       <a href=#news>News</a> |
       <a href=#models>Models</a> |
       <a href=#usage>Usage</a> |
       <a href="#evaluation">Evaluation</a> |
       <a href="#contact">Contact</a> |
       <a href="#faq">FAQ</a> |
       <a href="#license">License</a> |
       <a href="#acknowledgement">Acknowledgement</a>
   </p>
</h4>

## News

07/26/2024: Release preprint [[2407.18887] Embedding And Clustering Your Data Can Improve Contrastive Pretraining](https://arxiv.org/abs/2407.18887) on arXiv.

07/18/2024: Release of `snowflake-arctic-embed-m-v1.5`, capable of producing highly compressible embedding vectors that preserve quality even when squished as small as 128 bytes per vector. Details about the development of this model are available in the [launch post on the Snowflake engineering blog](https://www.snowflake.com/engineering-blog/arctic-embed-m-v1-5-enterprise-retrieval/).

05/10/2024: Release the [technical report on Arctic Embed](https://arxiv.org/abs/2405.05374)

04/16/2024: Release the **snowflake-arctic-embed** family of text embedding models. The releases are state-of-the-art for retrieval quality at each of their representative size profiles. [Technical Report]() is coming shortly. For more details, please refer to our Github: [Arctic-Text-Embed](https://github.com/Snowflake-Labs/arctic-embed).

## Models

snowflake-arctic-embed is a suite of text embedding models that focuses on creating high-quality retrieval models optimized for performance. The `snowflake-arctic-embedding` models achieve **state-of-the-art performance on the MTEB/BEIR leaderboard** for each of their size variants. Evaluation is performed using these [scripts](https://github.com/Snowflake-Labs/snowflake-arctic-embed/tree/main/src).
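The linked scripts are the canonical way these numbers were produced. As a rough illustration of that kind of run (not the official evaluation, and the task name below is just one example rather than the full retrieval suite), the open-source `mteb` package can score a model on a single task:

```python
from mteb import MTEB
from sentence_transformers import SentenceTransformer

# Illustrative sketch: evaluate one example retrieval task with the mteb package.
model = SentenceTransformer("Snowflake/snowflake-arctic-embed-m")
evaluation = MTEB(tasks=["SciFact"])
evaluation.run(model, output_folder="results/snowflake-arctic-embed-m")
```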
As shown below, each class of model size achieves SOTA retrieval accuracy compared to other top models.

The models are trained by leveraging existing open-source text representation models, such as bert-base-uncased, and are trained in a multi-stage pipeline to optimize their retrieval performance. First, the models are trained with large batches of query-document pairs where negatives are derived in-batch. Pretraining leverages about 400m samples of a mix of public datasets and proprietary web search data. Following pretraining, the models are further optimized with long training on a smaller dataset (about 1m samples) of triplets of query, positive document, and negative document derived from hard negative mining. Mining of the negatives and data curation is crucial to retrieval accuracy. A detailed technical report can be found [here](https://arxiv.org/abs/2405.05374). (An illustrative sketch of the in-batch contrastive objective appears at the end of this card.)

| Name | MTEB Retrieval Score (NDCG @ 10) | Parameters (Millions) | Embedding Dimension |
| ----------------------------------------------------------------------- | -------------------------------- | --------------------- | ------------------- |
| [snowflake-arctic-embed-xs](https://huggingface.co/Snowflake/snowflake-arctic-embed-xs/) | 50.15 | 22 | 384 |
| [snowflake-arctic-embed-s](https://huggingface.co/Snowflake/snowflake-arctic-embed-s/) | 51.98 | 33 | 384 |
| [snowflake-arctic-embed-m](https://huggingface.co/Snowflake/snowflake-arctic-embed-m/) | 54.90 | 110 | 768 |
| [snowflake-arctic-embed-m-long](https://huggingface.co/Snowflake/snowflake-arctic-embed-m-long/) | 54.83 | 137 | 768 |
| [snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l/) | 55.98 | 335 | 1024 |

Aside from being great open-source models, the largest model, [snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l/), can serve as a natural replacement for closed-source embedding APIs, as shown below.

| Model Name | MTEB Retrieval Score (NDCG @ 10) |
| ------------------------------------------------------------------ | -------------------------------- |
| [snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l/) | 55.98 |
| Google-gecko-text-embedding | 55.7 |
| text-embedding-3-large | 55.44 |
| Cohere-embed-english-v3.0 | 55.00 |
| bge-large-en-v1.5 | 54.29 |

### [snowflake-arctic-embed-xs](https://huggingface.co/Snowflake/snowflake-arctic-embed-xs)

This tiny model packs quite the punch. Based on the [all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) model with only 22m parameters and 384 dimensions, this model should meet even the strictest latency/TCO budgets. Despite its size, its retrieval accuracy is closer to that of models with 100m parameters.

| Model Name | MTEB Retrieval Score (NDCG @ 10) |
| ------------------------------------------------------------------- | -------------------------------- |
| [snowflake-arctic-embed-xs](https://huggingface.co/Snowflake/snowflake-arctic-embed-xs/) | 50.15 |
| GIST-all-MiniLM-L6-v2 | 45.12 |
| gte-tiny | 44.92 |
| all-MiniLM-L6-v2 | 41.95 |
| bge-micro-v2 | 42.56 |

### [snowflake-arctic-embed-s](https://huggingface.co/Snowflake/snowflake-arctic-embed-s)

Based on the [intfloat/e5-small-unsupervised](https://huggingface.co/intfloat/e5-small-unsupervised) model, this small model does not trade off retrieval accuracy for its small size. With only 33m parameters and 384 dimensions, this model should easily allow scaling to large datasets.
| Model Name | MTEB Retrieval Score (NDCG @ 10) |
| ------------------------------------------------------------------ | -------------------------------- |
| [snowflake-arctic-embed-s](https://huggingface.co/Snowflake/snowflake-arctic-embed-s/) | 51.98 |
| bge-small-en-v1.5 | 51.68 |
| Cohere-embed-english-light-v3.0 | 51.34 |
| text-embedding-3-small | 51.08 |
| e5-small-v2 | 49.04 |

### [snowflake-arctic-embed-m](https://huggingface.co/Snowflake/snowflake-arctic-embed-m/)

Based on the [intfloat/e5-base-unsupervised](https://huggingface.co/intfloat/e5-base-unsupervised) model, this medium model is the workhorse that provides the best retrieval performance without slowing down inference.

| Model Name | MTEB Retrieval Score (NDCG @ 10) |
| ------------------------------------------------------------------ | -------------------------------- |
| [snowflake-arctic-embed-m](https://huggingface.co/Snowflake/snowflake-arctic-embed-m/) | 54.90 |
| bge-base-en-v1.5 | 53.25 |
| nomic-embed-text-v1.5 | 53.25 |
| GIST-Embedding-v0 | 52.31 |
| gte-base | 52.31 |

### [snowflake-arctic-embed-m-long](https://huggingface.co/Snowflake/snowflake-arctic-embed-m-long/)

Based on the [nomic-ai/nomic-embed-text-v1-unsupervised](https://huggingface.co/nomic-ai/nomic-embed-text-v1-unsupervised) model, this long-context variant of our medium-sized model is perfect for workloads that can be constrained by the regular 512-token context of our other models. Without the use of RPE, this model supports up to 2048 tokens. With RPE, it can scale to 8192!

| Model Name | MTEB Retrieval Score (NDCG @ 10) |
| ------------------------------------------------------------------ | -------------------------------- |
| [snowflake-arctic-embed-m-long](https://huggingface.co/Snowflake/snowflake-arctic-embed-m-long/) | 54.83 |
| nomic-embed-text-v1.5 | 53.01 |
| nomic-embed-text-v1 | 52.81 |

### [snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l/)

Based on the [intfloat/e5-large-unsupervised](https://huggingface.co/intfloat/e5-large-unsupervised) model, this large model is a direct drop-in for closed APIs and delivers the most accurate retrieval experience.

| Model Name | MTEB Retrieval Score (NDCG @ 10) |
| ------------------------------------------------------------------ | -------------------------------- |
| [snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l/) | 55.98 |
| UAE-Large-V1 | 54.66 |
| bge-large-en-v1.5 | 54.29 |
| mxbai-embed-large-v1 | 54.39 |
| e5-Large-v2 | 50.56 |

## Usage

### Using Sentence Transformers

You can use the sentence-transformers package to use a snowflake-arctic-embed model, as shown below.

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-m")

queries = ['what is snowflake?', 'Where can I get the best tacos?']
documents = ['The Data Cloud!', 'Mexico City of Course!']

query_embeddings = model.encode(queries, prompt_name="query")
document_embeddings = model.encode(documents)

scores = query_embeddings @ document_embeddings.T
for query, query_scores in zip(queries, scores):
    doc_score_pairs = list(zip(documents, query_scores))
    doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)
    # Output passages & scores
    print("Query:", query)
    for document, score in doc_score_pairs:
        print(score, document)
```

```
Query: what is snowflake?
0.20051965 The Data Cloud!
0.07660701 Mexico City of Course!
Query: Where can I get the best tacos?
0.24481852 Mexico City of Course!
0.15664819 The Data Cloud!
```

### Using Huggingface transformers

You can use the transformers package to use a snowflake-arctic-embed model, as shown below. For optimal retrieval quality, use the CLS token to embed each text portion and use the query prefix below (just on the query).

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('Snowflake/snowflake-arctic-embed-m')
model = AutoModel.from_pretrained('Snowflake/snowflake-arctic-embed-m', add_pooling_layer=False)
model.eval()

query_prefix = 'Represent this sentence for searching relevant passages: '
queries = ['what is snowflake?', 'Where can I get the best tacos?']
queries_with_prefix = ["{}{}".format(query_prefix, i) for i in queries]
query_tokens = tokenizer(queries_with_prefix, padding=True, truncation=True, return_tensors='pt', max_length=512)

documents = ['The Data Cloud!', 'Mexico City of Course!']
document_tokens = tokenizer(documents, padding=True, truncation=True, return_tensors='pt', max_length=512)

# Compute token embeddings
with torch.no_grad():
    query_embeddings = model(**query_tokens)[0][:, 0]
    document_embeddings = model(**document_tokens)[0][:, 0]

# Normalize embeddings
query_embeddings = torch.nn.functional.normalize(query_embeddings, p=2, dim=1)
document_embeddings = torch.nn.functional.normalize(document_embeddings, p=2, dim=1)

scores = torch.mm(query_embeddings, document_embeddings.transpose(0, 1))
for query, query_scores in zip(queries, scores):
    doc_score_pairs = list(zip(documents, query_scores))
    doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)
    # Output passages & scores
    print("Query:", query)
    for document, score in doc_score_pairs:
        print(score, document)
```

### Using Transformers.js

If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@xenova/transformers) by running:

```bash
npm i @xenova/transformers
```

You can then use the model to compute embeddings as follows:

```js
import { pipeline, dot } from '@xenova/transformers';

// Create feature extraction pipeline
const extractor = await pipeline('feature-extraction', 'Snowflake/snowflake-arctic-embed-m', {
    quantized: false, // Comment out this line to use the quantized version
});

// Generate sentence embeddings
const sentences = [
    'Represent this sentence for searching relevant passages: Where can I get the best tacos?',
    'The Data Cloud!',
    'Mexico City of Course!',
]
const output = await extractor(sentences, { normalize: true, pooling: 'cls' });

// Compute similarity scores
const [source_embeddings, ...document_embeddings] = output.tolist();
const similarities = document_embeddings.map(x => dot(source_embeddings, x));
console.log(similarities); // [0.15664823859882132, 0.24481869975470627]
```

## FAQ

TBD

## Contact

Feel free to open an issue or pull request if you have any questions or suggestions about this project. You can also email Daniel Campos (daniel.campos@snowflake.com).

## License

Arctic is licensed under the [Apache-2.0 license](https://www.apache.org/licenses/LICENSE-2.0). The released models can be used for commercial purposes free of charge.

## Acknowledgement

We want to thank the open-source community, which has provided the great building blocks upon which we could make our models. We thank our modeling engineers, Danmei Xu, Luke Merrick, Gaurav Nuti, and Daniel Campos, for making these great models possible.
We thank our leadership, Himabindu Pucha, Kelvin So, Vivek Raghunathan, and Sridhar Ramaswamy, for supporting this work. We also thank the open-source community for producing the great models we could build on top of and making these releases possible. Finally, we thank the researchers who created the BEIR and MTEB benchmarks. It is largely thanks to their tireless work to define what better looks like that we could improve model performance.
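As referenced in the Models section above, the following is a minimal, illustrative sketch of a contrastive training step with in-batch negatives (an InfoNCE-style objective). It is not Snowflake's training code; the temperature value and the encoder that produces the embeddings are placeholder assumptions.

```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(query_emb: torch.Tensor,
                              doc_emb: torch.Tensor,
                              temperature: float = 0.02) -> torch.Tensor:
    """Each query's paired document is its positive; every other document
    in the batch serves as an in-batch negative. Temperature is a placeholder."""
    q = F.normalize(query_emb, dim=-1)
    d = F.normalize(doc_emb, dim=-1)
    logits = q @ d.T / temperature                     # (batch, batch) cosine similarities
    labels = torch.arange(q.size(0), device=q.device)  # positives lie on the diagonal
    return F.cross_entropy(logits, labels)
```

The hard-negative fine-tuning stage described above can be viewed as extending this objective by appending mined negative documents as extra columns of the logits matrix.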
lewtun/tiny-random-mt5
lewtun
"2022-09-15T15:04:49Z"
312,972
0
transformers
[ "transformers", "pytorch", "mt5", "feature-extraction", "endpoints_compatible", "region:us" ]
feature-extraction
"2022-09-15T15:03:33Z"
Entry not found
tohoku-nlp/bert-base-japanese-v3
tohoku-nlp
"2023-05-19T00:31:53Z"
312,136
47
transformers
[ "transformers", "pytorch", "tf", "jax", "bert", "pretraining", "ja", "dataset:cc100", "dataset:wikipedia", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2023-05-19T00:13:53Z"
---
license: apache-2.0
datasets:
- cc100
- wikipedia
language:
- ja
widget:
- text: 東北大学で[MASK]の研究をしています。
---

# BERT base Japanese (unidic-lite with whole word masking, CC-100 and jawiki-20230102)

This is a [BERT](https://github.com/google-research/bert) model pretrained on texts in the Japanese language.

This version of the model processes input texts with word-level tokenization based on the Unidic 2.1.2 dictionary (available in the [unidic-lite](https://pypi.org/project/unidic-lite/) package), followed by WordPiece subword tokenization. Additionally, the model is trained with whole word masking enabled for the masked language modeling (MLM) objective.

The code for pretraining is available at [cl-tohoku/bert-japanese](https://github.com/cl-tohoku/bert-japanese/).

## Model architecture

The model architecture is the same as the original BERT base model; 12 layers, 768 dimensions of hidden states, and 12 attention heads.

## Training Data

The model is trained on the Japanese portion of the [CC-100 dataset](https://data.statmt.org/cc-100/) and the Japanese version of Wikipedia. For Wikipedia, we generated a text corpus from the [Wikipedia Cirrussearch dump file](https://dumps.wikimedia.org/other/cirrussearch/) as of January 2, 2023. The corpus files generated from CC-100 and Wikipedia are 74.3GB and 4.9GB in size and consist of approximately 392M and 34M sentences, respectively.

For the purpose of splitting texts into sentences, we used [fugashi](https://github.com/polm/fugashi) with the [mecab-ipadic-NEologd](https://github.com/neologd/mecab-ipadic-neologd) dictionary (v0.0.7).

## Tokenization

The texts are first tokenized by MeCab with the Unidic 2.1.2 dictionary and then split into subwords by the WordPiece algorithm. The vocabulary size is 32768. We used the [fugashi](https://github.com/polm/fugashi) and [unidic-lite](https://github.com/polm/unidic-lite) packages for the tokenization. A minimal usage example is shown at the end of this card.

## Training

We trained the model first on the CC-100 corpus for 1M steps and then on the Wikipedia corpus for another 1M steps. For training of the MLM (masked language modeling) objective, we introduced whole word masking, in which all of the subword tokens corresponding to a single word (tokenized by MeCab) are masked at once.

For training of each model, we used a v3-8 instance of Cloud TPUs provided by the [TPU Research Cloud](https://sites.research.google/trc/about/).

## Licenses

The pretrained models are distributed under the Apache License 2.0.

## Acknowledgments

This model is trained with Cloud TPUs provided by the [TPU Research Cloud](https://sites.research.google/trc/about/) program.
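The card does not include a usage snippet; the following is a minimal, illustrative example (assuming the `transformers`, `fugashi`, and `unidic-lite` packages are installed) that completes the widget sentence above with the fill-mask pipeline:

```python
from transformers import pipeline

# The tokenizer performs MeCab word segmentation via fugashi and unidic-lite.
fill_mask = pipeline("fill-mask", model="tohoku-nlp/bert-base-japanese-v3")

for prediction in fill_mask("東北大学で[MASK]の研究をしています。"):
    print(prediction["token_str"], round(prediction["score"], 3))
```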
BAAI/bge-large-zh-v1.5
BAAI
"2024-04-02T14:00:04Z"
310,310
412
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "zh", "arxiv:2401.03462", "arxiv:2312.15503", "arxiv:2311.13534", "arxiv:2310.07554", "arxiv:2309.07597", "license:mit", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
"2023-09-12T05:22:11Z"
---
license: mit
language:
- zh
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---

<h1 align="center">FlagEmbedding</h1>

<h4 align="center">
    <p>
        <a href=#model-list>Model List</a> |
        <a href=#frequently-asked-questions>FAQ</a> |
        <a href=#usage>Usage</a> |
        <a href="#evaluation">Evaluation</a> |
        <a href="#train">Train</a> |
        <a href="#contact">Contact</a> |
        <a href="#citation">Citation</a> |
        <a href="#license">License</a>
    </p>
</h4>

For more details, please refer to our Github: [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding).

If you are looking for a model that supports more languages, longer texts, and other retrieval methods, you can try using [bge-m3](https://huggingface.co/BAAI/bge-m3).

[English](README.md) | [中文](https://github.com/FlagOpen/FlagEmbedding/blob/master/README_zh.md)

FlagEmbedding focuses on retrieval-augmented LLMs, currently consisting of the following projects:

- **Long-Context LLM**: [Activation Beacon](https://github.com/FlagOpen/FlagEmbedding/tree/master/Long_LLM/activation_beacon)
- **Fine-tuning of LM**: [LM-Cocktail](https://github.com/FlagOpen/FlagEmbedding/tree/master/LM_Cocktail)
- **Dense Retrieval**: [BGE-M3](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3), [LLM Embedder](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/llm_embedder), [BGE Embedding](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/baai_general_embedding)
- **Reranker Model**: [BGE Reranker](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker)
- **Benchmark**: [C-MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB)

## News

- 1/30/2024: Release **BGE-M3**, a new member of the BGE model series! M3 stands for **M**ulti-linguality (100+ languages), **M**ulti-granularities (input length up to 8192), and **M**ulti-Functionality (unification of dense, lexical, multi-vec/colbert retrieval). It is the first embedding model which supports all three retrieval methods, achieving new SOTA on multi-lingual (MIRACL) and cross-lingual (MKQA) benchmarks. [Technical Report](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/BGE_M3/BGE_M3.pdf) and [Code](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3). :fire:
- 1/9/2024: Release [Activation-Beacon](https://github.com/FlagOpen/FlagEmbedding/tree/master/Long_LLM/activation_beacon), an effective, efficient, compatible, and low-cost (training) method to extend the context length of LLMs. [Technical Report](https://arxiv.org/abs/2401.03462) :fire:
- 12/24/2023: Release **LLaRA**, a LLaMA-7B based dense retriever, leading to state-of-the-art performance on MS MARCO and BEIR. Model and code will be open-sourced. Please stay tuned. [Technical Report](https://arxiv.org/abs/2312.15503) :fire:
- 11/23/2023: Release [LM-Cocktail](https://github.com/FlagOpen/FlagEmbedding/tree/master/LM_Cocktail), a method to maintain general capabilities during fine-tuning by merging multiple language models. [Technical Report](https://arxiv.org/abs/2311.13534) :fire:
- 10/12/2023: Release [LLM-Embedder](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/llm_embedder), a unified embedding model to support diverse retrieval augmentation needs for LLMs.
  [Technical Report](https://arxiv.org/pdf/2310.07554.pdf)
- 09/15/2023: The [technical report](https://arxiv.org/pdf/2309.07597.pdf) and [massive training data](https://data.baai.ac.cn/details/BAAI-MTP) of BGE have been released
- 09/12/2023: New models:
    - **New reranker model**: release cross-encoder models `BAAI/bge-reranker-base` and `BAAI/bge-reranker-large`, which are more powerful than the embedding models. We recommend using/fine-tuning them to re-rank top-k documents returned by embedding models.
    - **Updated embedding model**: release `bge-*-v1.5` embedding models to alleviate the issue of the similarity distribution and enhance retrieval ability without instruction.

<details>
<summary>More</summary>
<!-- ### More -->

- 09/07/2023: Update [fine-tune code](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md): add a script to mine hard negatives and support adding instruction during fine-tuning.
- 08/09/2023: BGE models are integrated into **Langchain**; you can use them like [this](#using-langchain). The C-MTEB **leaderboard** is [available](https://huggingface.co/spaces/mteb/leaderboard).
- 08/05/2023: Release base-scale and small-scale models, **best performance among the models of the same size 🤗**
- 08/02/2023: Release `bge-large-*` (short for BAAI General Embedding) models, **rank 1st on MTEB and C-MTEB benchmark!** :tada: :tada:
- 08/01/2023: We release the [Chinese Massive Text Embedding Benchmark](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB) (**C-MTEB**), consisting of 31 test datasets.

</details>

## Model List

`bge` is short for `BAAI general embedding`.

| Model | Language | | Description | query instruction for retrieval [1] |
|:-------------------------------|:--------:| :--------:| :--------:|:--------:|
| [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) | Multilingual | [Inference](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3#usage) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3) | Multi-Functionality (dense retrieval, sparse retrieval, multi-vector (colbert)), Multi-Linguality, and Multi-Granularity (8192 tokens) | |
| [BAAI/llm-embedder](https://huggingface.co/BAAI/llm-embedder) | English | [Inference](./FlagEmbedding/llm_embedder/README.md) [Fine-tune](./FlagEmbedding/llm_embedder/README.md) | a unified embedding model to support diverse retrieval augmentation needs for LLMs | See [README](./FlagEmbedding/llm_embedder/README.md) |
| [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient [2] | |
| [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient [2] | |
| [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-large-zh-v1.5](https://huggingface.co/BAAI/bge-large-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-large-en](https://huggingface.co/BAAI/bge-large-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** in [MTEB](https://huggingface.co/spaces/mteb/leaderboard) leaderboard | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-base-en](https://huggingface.co/BAAI/bge-base-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model but with similar ability to `bge-large-en` | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-small-en](https://huggingface.co/BAAI/bge-small-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a small-scale model but with competitive performance | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** in [C-MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB) benchmark | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model but with similar ability to `bge-large-zh` | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a small-scale model but with competitive performance | `为这个句子生成表示以用于检索相关文章:` |

[1\]: If you need to search the relevant passages for a query, we suggest adding the instruction to the query; in other cases, no instruction is needed; just use the original query directly.
In all cases, **no instruction** needs to be added to passages.

[2\]: Different from the embedding model, the reranker takes a question and a document as input and directly outputs a similarity score instead of an embedding. To balance accuracy and time cost, cross-encoders are widely used to re-rank the top-k documents retrieved by simpler models. For example, use the bge embedding model to retrieve the top 100 relevant documents, and then use the bge reranker to re-rank those 100 documents to get the final top-3 results.

All models have been uploaded to the Huggingface Hub, and you can see them at https://huggingface.co/BAAI. If you cannot access the Huggingface Hub, you can also download the models at https://model.baai.ac.cn/models .

## Frequently asked questions

<details>
<summary>1. How to fine-tune the bge embedding model?</summary>

<!-- ### How to fine-tune bge embedding model? -->
Follow this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) to prepare data and fine-tune your model.
Some suggestions:
- Mine hard negatives following this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune#hard-negatives), which can improve retrieval performance.
- If you pre-train bge on your data, the pre-trained model cannot be used directly to calculate similarity; it must be fine-tuned with contrastive learning before computing similarity.
- If the accuracy of the fine-tuned model is still not high, it is recommended to use/fine-tune the cross-encoder model (bge-reranker) to re-rank the top-k results. Hard negatives are also needed to fine-tune the reranker.

</details>

<details>
<summary>2. The similarity score between two dissimilar sentences is higher than 0.5</summary>

<!-- ### The similarity score between two dissimilar sentences is higher than 0.5 -->
**We suggest using bge v1.5, which alleviates the issue of the similarity distribution.**

Since we fine-tune the models by contrastive learning with a temperature of 0.01, the similarity distribution of the current BGE models is roughly in the interval \[0.6, 1\]. So a similarity score greater than 0.5 does not indicate that the two sentences are similar.

For downstream tasks such as passage retrieval or semantic similarity, **what matters is the relative order of the scores, not the absolute value.** If you need to filter similar sentences based on a similarity threshold, please select an appropriate threshold based on the similarity distribution on your data (such as 0.8, 0.85, or even 0.9).

</details>

<details>
<summary>3. When does the query instruction need to be used</summary>

<!-- ### When does the query instruction need to be used -->
For the `bge-*-v1.5` models, we improved their retrieval ability when no instruction is used: omitting the instruction causes only a slight degradation in retrieval performance compared with using one. So you can generate embeddings without an instruction in all cases for convenience.

For a retrieval task that uses short queries to find long related documents, it is recommended to add instructions to these short queries. **The best way to decide whether to add instructions to queries is to choose the setting that achieves better performance on your task**, as sketched below. In all cases, the documents/passages do not need the instruction.
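As a small, hypothetical sketch of that A/B check, you can score the same toy corpus with and without the query instruction (`encode_queries()` prepends it, `encode()` does not); on real data you would compare retrieval metrics such as recall or MRR rather than raw scores:

```python
from FlagEmbedding import FlagModel

model = FlagModel('BAAI/bge-large-zh-v1.5',
                  query_instruction_for_retrieval="为这个句子生成表示以用于检索相关文章:")

queries = ["样例查询"]  # hypothetical query
passages = ["样例文档-1", "样例文档-2"]
p_embeddings = model.encode(passages)

scores_with_instruction = model.encode_queries(queries) @ p_embeddings.T
scores_without_instruction = model.encode(queries) @ p_embeddings.T
print(scores_with_instruction, scores_without_instruction)
```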
</details>


## Usage

### Usage for Embedding Model

Here are some examples of using `bge` models with
[FlagEmbedding](#using-flagembedding), [Sentence-Transformers](#using-sentence-transformers), [Langchain](#using-langchain), or [Huggingface Transformers](#using-huggingface-transformers).

#### Using FlagEmbedding
```
pip install -U FlagEmbedding
```
If that doesn't work for you, see [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md) for more ways to install FlagEmbedding.

```python
from FlagEmbedding import FlagModel
sentences_1 = ["样例数据-1", "样例数据-2"]
sentences_2 = ["样例数据-3", "样例数据-4"]
model = FlagModel('BAAI/bge-large-zh-v1.5',
                  query_instruction_for_retrieval="为这个句子生成表示以用于检索相关文章:",
                  use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation
embeddings_1 = model.encode(sentences_1)
embeddings_2 = model.encode(sentences_2)
similarity = embeddings_1 @ embeddings_2.T
print(similarity)

# for an s2p (short query to long passage) retrieval task, we suggest using encode_queries(), which automatically adds the instruction to each query
# the corpus in a retrieval task can still use encode() or encode_corpus(), since corpora don't need the instruction
queries = ['query_1', 'query_2']
passages = ["样例文档-1", "样例文档-2"]
q_embeddings = model.encode_queries(queries)
p_embeddings = model.encode(passages)
scores = q_embeddings @ p_embeddings.T
```
For the value of the argument `query_instruction_for_retrieval`, see [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list).

By default, FlagModel will use all available GPUs when encoding. Please set `os.environ["CUDA_VISIBLE_DEVICES"]` to select specific GPUs.
You can also set `os.environ["CUDA_VISIBLE_DEVICES"]=""` to make all GPUs unavailable.

#### Using Sentence-Transformers

You can also use the `bge` models with [sentence-transformers](https://www.SBERT.net):

```
pip install -U sentence-transformers
```
```python
from sentence_transformers import SentenceTransformer
sentences_1 = ["样例数据-1", "样例数据-2"]
sentences_2 = ["样例数据-3", "样例数据-4"]
model = SentenceTransformer('BAAI/bge-large-zh-v1.5')
embeddings_1 = model.encode(sentences_1, normalize_embeddings=True)
embeddings_2 = model.encode(sentences_2, normalize_embeddings=True)
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
```
For an s2p (short query to long passage) retrieval task, each short query should start with an instruction (see [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list) for instructions). But the instruction is not needed for passages.
```python
from sentence_transformers import SentenceTransformer
queries = ['query_1', 'query_2']
passages = ["样例文档-1", "样例文档-2"]
instruction = "为这个句子生成表示以用于检索相关文章:"

model = SentenceTransformer('BAAI/bge-large-zh-v1.5')
q_embeddings = model.encode([instruction+q for q in queries], normalize_embeddings=True)
p_embeddings = model.encode(passages, normalize_embeddings=True)
scores = q_embeddings @ p_embeddings.T
```

#### Using Langchain

You can use `bge` in Langchain like this:
```python
from langchain.embeddings import HuggingFaceBgeEmbeddings
model_name = "BAAI/bge-large-en-v1.5"
model_kwargs = {'device': 'cuda'}
encode_kwargs = {'normalize_embeddings': True} # set True to compute cosine similarity
model = HuggingFaceBgeEmbeddings(
    model_name=model_name,
    model_kwargs=model_kwargs,
    encode_kwargs=encode_kwargs,
    query_instruction="为这个句子生成表示以用于检索相关文章:"
)
model.query_instruction = "为这个句子生成表示以用于检索相关文章:"
```

#### Using HuggingFace Transformers

With the transformers package, you can use the model like this: first, pass your input through the transformer model, then select the last hidden state of the first token (i.e., [CLS]) as the sentence embedding.

```python
from transformers import AutoTokenizer, AutoModel
import torch
# Sentences we want sentence embeddings for
sentences = ["样例数据-1", "样例数据-2"]

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-large-zh-v1.5')
model = AutoModel.from_pretrained('BAAI/bge-large-zh-v1.5')
model.eval()

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# for an s2p (short query to long passage) retrieval task, add an instruction to each query (do not add instructions to passages)
# encoded_input = tokenizer([instruction + q for q in queries], padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
    # Perform pooling. In this case, cls pooling.
    sentence_embeddings = model_output[0][:, 0]
# normalize embeddings
sentence_embeddings = torch.nn.functional.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:", sentence_embeddings)
```

### Usage for Reranker

Different from the embedding model, the reranker takes a question and a document as input and directly outputs a similarity score instead of an embedding. You can get a relevance score by feeding a query and a passage to the reranker. The reranker is optimized with cross-entropy loss, so the relevance score is not bounded to a specific range.
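If you need scores in a fixed range (e.g., for thresholding), one common option, sketched below using the Hugging Face interface shown later in this section, is to squash the raw logits with a sigmoid. The sigmoid is a monotonic convenience rather than part of the official scoring, so the ranking order is unchanged:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-reranker-base')
model = AutoModelForSequenceClassification.from_pretrained('BAAI/bge-reranker-base')
model.eval()

pairs = [['what is panda?', 'hi'],
         ['what is panda?', 'The giant panda is a bear species endemic to China.']]
with torch.no_grad():
    inputs = tokenizer(pairs, padding=True, truncation=True, return_tensors='pt', max_length=512)
    logits = model(**inputs, return_dict=True).logits.view(-1).float()
    print(torch.sigmoid(logits))  # raw scores mapped into (0, 1)
```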
#### Using FlagEmbedding ``` pip install -U FlagEmbedding ``` Get relevance scores (higher scores indicate more relevance): ```python from FlagEmbedding import FlagReranker reranker = FlagReranker('BAAI/bge-reranker-large', use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation score = reranker.compute_score(['query', 'passage']) print(score) scores = reranker.compute_score([['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']]) print(scores) ``` #### Using Huggingface transformers ```python import torch from transformers import AutoModelForSequenceClassification, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-reranker-large') model = AutoModelForSequenceClassification.from_pretrained('BAAI/bge-reranker-large') model.eval() pairs = [['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']] with torch.no_grad(): inputs = tokenizer(pairs, padding=True, truncation=True, return_tensors='pt', max_length=512) scores = model(**inputs, return_dict=True).logits.view(-1, ).float() print(scores) ``` ## Evaluation `baai-general-embedding` models achieve **state-of-the-art performance on both MTEB and C-MTEB leaderboard!** For more details and evaluation tools see our [scripts](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md). - **MTEB**: | Model Name | Dimension | Sequence Length | Average (56) | Retrieval (15) |Clustering (11) | Pair Classification (3) | Reranking (4) | STS (10) | Summarization (1) | Classification (12) | |:----:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:| | [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | 1024 | 512 | **64.23** | **54.29** | 46.08 | 87.12 | 60.03 | 83.11 | 31.61 | 75.97 | | [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | 768 | 512 | 63.55 | 53.25 | 45.77 | 86.55 | 58.86 | 82.4 | 31.07 | 75.53 | | [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | 384 | 512 | 62.17 |51.68 | 43.82 | 84.92 | 58.36 | 81.59 | 30.12 | 74.14 | | [bge-large-en](https://huggingface.co/BAAI/bge-large-en) | 1024 | 512 | 63.98 | 53.9 | 46.98 | 85.8 | 59.48 | 81.56 | 32.06 | 76.21 | | [bge-base-en](https://huggingface.co/BAAI/bge-base-en) | 768 | 512 | 63.36 | 53.0 | 46.32 | 85.86 | 58.7 | 81.84 | 29.27 | 75.27 | | [gte-large](https://huggingface.co/thenlper/gte-large) | 1024 | 512 | 63.13 | 52.22 | 46.84 | 85.00 | 59.13 | 83.35 | 31.66 | 73.33 | | [gte-base](https://huggingface.co/thenlper/gte-base) | 768 | 512 | 62.39 | 51.14 | 46.2 | 84.57 | 58.61 | 82.3 | 31.17 | 73.01 | | [e5-large-v2](https://huggingface.co/intfloat/e5-large-v2) | 1024| 512 | 62.25 | 50.56 | 44.49 | 86.03 | 56.61 | 82.05 | 30.19 | 75.24 | | [bge-small-en](https://huggingface.co/BAAI/bge-small-en) | 384 | 512 | 62.11 | 51.82 | 44.31 | 83.78 | 57.97 | 80.72 | 30.53 | 74.37 | | [instructor-xl](https://huggingface.co/hkunlp/instructor-xl) | 768 | 512 | 61.79 | 49.26 | 44.74 | 86.62 | 57.29 | 83.06 | 32.32 | 61.79 | | [e5-base-v2](https://huggingface.co/intfloat/e5-base-v2) | 768 | 512 | 61.5 | 50.29 | 43.80 | 85.73 | 55.91 | 81.05 | 30.28 | 73.84 | | [gte-small](https://huggingface.co/thenlper/gte-small) | 384 | 512 | 61.36 | 49.46 | 44.89 | 83.54 | 57.7 | 82.07 | 30.42 | 72.31 | | 
[text-embedding-ada-002](https://platform.openai.com/docs/guides/embeddings) | 1536 | 8192 | 60.99 | 49.25 | 45.9 | 84.89 | 56.32 | 80.97 | 30.8 | 70.93 | | [e5-small-v2](https://huggingface.co/intfloat/e5-base-v2) | 384 | 512 | 59.93 | 49.04 | 39.92 | 84.67 | 54.32 | 80.39 | 31.16 | 72.94 | | [sentence-t5-xxl](https://huggingface.co/sentence-transformers/sentence-t5-xxl) | 768 | 512 | 59.51 | 42.24 | 43.72 | 85.06 | 56.42 | 82.63 | 30.08 | 73.42 | | [all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) | 768 | 514 | 57.78 | 43.81 | 43.69 | 83.04 | 59.36 | 80.28 | 27.49 | 65.07 | | [sgpt-bloom-7b1-msmarco](https://huggingface.co/bigscience/sgpt-bloom-7b1-msmarco) | 4096 | 2048 | 57.59 | 48.22 | 38.93 | 81.9 | 55.65 | 77.74 | 33.6 | 66.19 | - **C-MTEB**: We create the benchmark C-MTEB for Chinese text embedding which consists of 31 datasets from 6 tasks. Please refer to [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md) for a detailed introduction. | Model | Embedding dimension | Avg | Retrieval | STS | PairClassification | Classification | Reranking | Clustering | |:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:| | [**BAAI/bge-large-zh-v1.5**](https://huggingface.co/BAAI/bge-large-zh-v1.5) | 1024 | **64.53** | 70.46 | 56.25 | 81.6 | 69.13 | 65.84 | 48.99 | | [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5) | 768 | 63.13 | 69.49 | 53.72 | 79.75 | 68.07 | 65.39 | 47.53 | | [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) | 512 | 57.82 | 61.77 | 49.11 | 70.41 | 63.96 | 60.92 | 44.18 | | [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | 1024 | 64.20 | 71.53 | 54.98 | 78.94 | 68.32 | 65.11 | 48.39 | | [bge-large-zh-noinstruct](https://huggingface.co/BAAI/bge-large-zh-noinstruct) | 1024 | 63.53 | 70.55 | 53 | 76.77 | 68.58 | 64.91 | 50.01 | | [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | 768 | 62.96 | 69.53 | 54.12 | 77.5 | 67.07 | 64.91 | 47.63 | | [multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 1024 | 58.79 | 63.66 | 48.44 | 69.89 | 67.34 | 56.00 | 48.23 | | [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | 512 | 58.27 | 63.07 | 49.45 | 70.35 | 63.64 | 61.48 | 45.09 | | [m3e-base](https://huggingface.co/moka-ai/m3e-base) | 768 | 57.10 | 56.91 | 50.47 | 63.99 | 67.52 | 59.34 | 47.68 | | [m3e-large](https://huggingface.co/moka-ai/m3e-large) | 1024 | 57.05 | 54.75 | 50.42 | 64.3 | 68.2 | 59.66 | 48.88 | | [multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) | 768 | 55.48 | 61.63 | 46.49 | 67.07 | 65.35 | 54.35 | 40.68 | | [multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) | 384 | 55.38 | 59.95 | 45.27 | 66.45 | 65.85 | 53.86 | 45.26 | | [text-embedding-ada-002(OpenAI)](https://platform.openai.com/docs/guides/embeddings/what-are-embeddings) | 1536 | 53.02 | 52.0 | 43.35 | 69.56 | 64.31 | 54.28 | 45.68 | | [luotuo](https://huggingface.co/silk-road/luotuo-bert-medium) | 1024 | 49.37 | 44.4 | 42.78 | 66.62 | 61 | 49.25 | 44.39 | | [text2vec-base](https://huggingface.co/shibing624/text2vec-base-chinese) | 768 | 47.63 | 38.79 | 43.41 | 67.41 | 62.19 | 49.45 | 37.66 | | [text2vec-large](https://huggingface.co/GanymedeNil/text2vec-large-chinese) | 1024 | 47.36 | 41.94 | 44.97 | 70.86 | 60.66 | 49.16 | 30.02 | - **Reranking**: See [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/) for 
the evaluation script.

| Model | T2Reranking | T2RerankingZh2En\* | T2RerankingEn2Zh\* | MMarcoReranking | CMedQAv1 | CMedQAv2 | Avg |
|:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| text2vec-base-multilingual | 64.66 | 62.94 | 62.51 | 14.37 | 48.46 | 48.6 | 50.26 |
| multilingual-e5-small | 65.62 | 60.94 | 56.41 | 29.91 | 67.26 | 66.54 | 57.78 |
| multilingual-e5-large | 64.55 | 61.61 | 54.28 | 28.6 | 67.42 | 67.92 | 57.4 |
| multilingual-e5-base | 64.21 | 62.13 | 54.68 | 29.5 | 66.23 | 66.98 | 57.29 |
| m3e-base | 66.03 | 62.74 | 56.07 | 17.51 | 77.05 | 76.76 | 59.36 |
| m3e-large | 66.13 | 62.72 | 56.1 | 16.46 | 77.76 | 78.27 | 59.57 |
| bge-base-zh-v1.5 | 66.49 | 63.25 | 57.02 | 29.74 | 80.47 | 84.88 | 63.64 |
| bge-large-zh-v1.5 | 65.74 | 63.39 | 57.03 | 28.74 | 83.45 | 85.44 | 63.97 |
| [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | 67.28 | 63.95 | 60.45 | 35.46 | 81.26 | 84.1 | 65.42 |
| [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | 67.6 | 64.03 | 61.44 | 37.16 | 82.15 | 84.18 | 66.09 |

\* : T2RerankingZh2En and T2RerankingEn2Zh are cross-language retrieval tasks.

## Train

### BAAI Embedding

We pre-train the models using [retromae](https://github.com/staoxiao/RetroMAE) and train them on large-scale paired data using contrastive learning.
**You can fine-tune the embedding model on your data following our [examples](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune).**
We also provide a [pre-train example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/pretrain).
Note that the goal of pre-training is to reconstruct the text; the pre-trained model cannot be used for similarity calculation directly and needs to be fine-tuned.
For more training details for bge, see [baai_general_embedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md).

### BGE Reranker

A cross-encoder performs full attention over the input pair, which is more accurate than an embedding model (i.e., a bi-encoder) but more time-consuming. Therefore, it can be used to re-rank the top-k documents returned by an embedding model.
We train the cross-encoder on multilingual pair data. The data format is the same as for the embedding model, so you can fine-tune it easily following our [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker).
For more details, please refer to [./FlagEmbedding/reranker/README.md](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker)

## Contact

If you have any questions or suggestions related to this project, feel free to open an issue or pull request.
You can also email Shitao Xiao (stxiao@baai.ac.cn) and Zheng Liu (liuzheng@baai.ac.cn).

## Citation

If you find this repository useful, please consider giving it a star :star: and a citation:

```
@misc{bge_embedding,
      title={C-Pack: Packaged Resources To Advance General Chinese Embedding},
      author={Shitao Xiao and Zheng Liu and Peitian Zhang and Niklas Muennighoff},
      year={2023},
      eprint={2309.07597},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

## License

FlagEmbedding is licensed under the [MIT License](https://github.com/FlagOpen/FlagEmbedding/blob/master/LICENSE). The released models can be used for commercial purposes free of charge.
seara/rubert-tiny2-russian-sentiment
seara
"2024-10-29T18:17:02Z"
309,954
15
transformers
[ "transformers", "pytorch", "tf", "jax", "safetensors", "bert", "text-classification", "sentiment-analysis", "multi-class-classification", "sentiment analysis", "rubert", "sentiment", "tiny", "russian", "multiclass", "classification", "ru", "dataset:sismetanin/rureviews", "dataset:RuSentiment", "dataset:LinisCrowd2015", "dataset:LinisCrowd2016", "dataset:KaggleRussianNews", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2023-05-14T15:21:52Z"
---
license: mit
language:
- ru
metrics:
- f1
- roc_auc
- precision
- recall
pipeline_tag: text-classification
tags:
- sentiment-analysis
- multi-class-classification
- sentiment analysis
- rubert
- sentiment
- bert
- tiny
- russian
- multiclass
- classification
datasets:
- sismetanin/rureviews
- RuSentiment
- LinisCrowd2015
- LinisCrowd2016
- KaggleRussianNews
---

This is the [RuBERT-tiny2](https://huggingface.co/cointegrated/rubert-tiny2) model fine-tuned for __sentiment classification__ of short __Russian__ texts.
The task is __multi-class classification__ with the following labels:

```yaml
0: neutral
1: positive
2: negative
```

Label to Russian label:

```yaml
neutral: нейтральный
positive: позитивный
negative: негативный
```

## Usage
```python
from transformers import pipeline
model = pipeline(model="seara/rubert-tiny2-russian-sentiment")
model("Привет, ты мне нравишься!")
# [{'label': 'positive', 'score': 0.9398769736289978}]
```

## Dataset

This model was trained on the union of the following datasets:

- Kaggle Russian News Dataset
- Linis Crowd 2015
- Linis Crowd 2016
- RuReviews
- RuSentiment

An overview of the training data can be found in [S. Smetanin's GitHub repository](https://github.com/sismetanin/sentiment-analysis-in-russian).

__Download links for all Russian sentiment datasets collected by Smetanin can be found in this [repository](https://github.com/searayeah/russian-sentiment-emotion-datasets).__

## Training

Training was done in this [project](https://github.com/searayeah/bert-russian-sentiment-emotion) with these parameters:

```yaml
tokenizer.max_length: 512
batch_size: 64
optimizer: adam
lr: 0.00001
weight_decay: 0
epochs: 5
```

Train/validation/test splits are 80%/10%/10%.

## Eval results (on test split)

|         |neutral|positive|negative|macro avg|weighted avg|
|---------|-------|--------|--------|---------|------------|
|precision|0.7    |0.84    |0.74    |0.76     |0.75        |
|recall   |0.74   |0.83    |0.69    |0.75     |0.75        |
|f1-score |0.72   |0.83    |0.71    |0.75     |0.75        |
|auc-roc  |0.85   |0.95    |0.91    |0.9      |0.9         |
|support  |5196   |3831    |3599    |12626    |12626       |
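## Batch inference

For batch inference, or to get probabilities for all three classes at once, here is a minimal sketch (assuming a recent `transformers` version where the text-classification pipeline accepts `top_k`; older versions use `return_all_scores=True` instead):

```python
from transformers import pipeline

model = pipeline(model="seara/rubert-tiny2-russian-sentiment")

# top_k=None returns the score of every label, not just the best one
texts = ["Привет, ты мне нравишься!", "Это был ужасный день."]  # hypothetical inputs
for result in model(texts, top_k=None):
    print(result)
```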
trl-internal-testing/tiny-T5ForConditionalGeneration-correct-vocab
trl-internal-testing
"2024-09-30T16:15:14Z"
309,634
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
"2023-02-08T15:32:51Z"
Entry not found
Systran/faster-whisper-medium
Systran
"2023-11-23T11:13:59Z"
308,370
19
ctranslate2
[ "ctranslate2", "audio", "automatic-speech-recognition", "en", "zh", "de", "es", "ru", "ko", "fr", "ja", "pt", "tr", "pl", "ca", "nl", "ar", "sv", "it", "id", "hi", "fi", "vi", "he", "uk", "el", "ms", "cs", "ro", "da", "hu", "ta", "no", "th", "ur", "hr", "bg", "lt", "la", "mi", "ml", "cy", "sk", "te", "fa", "lv", "bn", "sr", "az", "sl", "kn", "et", "mk", "br", "eu", "is", "hy", "ne", "mn", "bs", "kk", "sq", "sw", "gl", "mr", "pa", "si", "km", "sn", "yo", "so", "af", "oc", "ka", "be", "tg", "sd", "gu", "am", "yi", "lo", "uz", "fo", "ht", "ps", "tk", "nn", "mt", "sa", "lb", "my", "bo", "tl", "mg", "as", "tt", "haw", "ln", "ha", "ba", "jw", "su", "license:mit", "region:us" ]
automatic-speech-recognition
"2023-11-23T09:51:42Z"
--- language: - en - zh - de - es - ru - ko - fr - ja - pt - tr - pl - ca - nl - ar - sv - it - id - hi - fi - vi - he - uk - el - ms - cs - ro - da - hu - ta - 'no' - th - ur - hr - bg - lt - la - mi - ml - cy - sk - te - fa - lv - bn - sr - az - sl - kn - et - mk - br - eu - is - hy - ne - mn - bs - kk - sq - sw - gl - mr - pa - si - km - sn - yo - so - af - oc - ka - be - tg - sd - gu - am - yi - lo - uz - fo - ht - ps - tk - nn - mt - sa - lb - my - bo - tl - mg - as - tt - haw - ln - ha - ba - jw - su tags: - audio - automatic-speech-recognition license: mit library_name: ctranslate2 --- # Whisper medium model for CTranslate2 This repository contains the conversion of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) to the [CTranslate2](https://github.com/OpenNMT/CTranslate2) model format. This model can be used in CTranslate2 or projects based on CTranslate2 such as [faster-whisper](https://github.com/systran/faster-whisper). ## Example ```python from faster_whisper import WhisperModel model = WhisperModel("medium") segments, info = model.transcribe("audio.mp3") for segment in segments: print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text)) ``` ## Conversion details The original model was converted with the following command: ``` ct2-transformers-converter --model openai/whisper-medium --output_dir faster-whisper-medium \ --copy_files tokenizer.json --quantization float16 ``` Note that the model weights are saved in FP16. This type can be changed when the model is loaded using the [`compute_type` option in CTranslate2](https://opennmt.net/CTranslate2/quantization.html). ## More information **For more information about the original model, see its [model card](https://huggingface.co/openai/whisper-medium).**
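## Changing the compute type

As a small illustration of that option (standard faster-whisper API; the audio path is hypothetical), you can request a different computation type at load time, e.g. 8-bit quantization on CPU:

```python
from faster_whisper import WhisperModel

# Load the FP16 checkpoint but run it with int8 weights on CPU;
# CTranslate2 converts the stored weights on the fly.
model = WhisperModel("medium", device="cpu", compute_type="int8")

segments, info = model.transcribe("audio.mp3", beam_size=5)
print("Detected language '%s' (probability %.2f)" % (info.language, info.language_probability))
for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```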
microsoft/swinv2-tiny-patch4-window16-256
microsoft
"2022-12-10T10:09:55Z"
306,117
3
transformers
[ "transformers", "pytorch", "swinv2", "image-classification", "vision", "dataset:imagenet-1k", "arxiv:2111.09883", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2022-06-14T06:17:52Z"
---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
  example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
  example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
  example_title: Palace
---

# Swin Transformer v2 (tiny-sized model)

Swin Transformer v2 model pre-trained on ImageNet-1k at resolution 256x256. It was introduced in the paper [Swin Transformer V2: Scaling Up Capacity and Resolution](https://arxiv.org/abs/2111.09883) by Liu et al. and first released in [this repository](https://github.com/microsoft/Swin-Transformer).

Disclaimer: The team releasing Swin Transformer v2 did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

The Swin Transformer is a type of Vision Transformer. It builds hierarchical feature maps by merging image patches (shown in gray) in deeper layers, and its computational complexity is linear in input image size because self-attention is computed only within each local window (shown in red). It can thus serve as a general-purpose backbone for both image classification and dense recognition tasks. In contrast, previous vision Transformers produce feature maps of a single low resolution and have computational complexity that is quadratic in input image size, because they compute self-attention globally.

Swin Transformer v2 adds 3 main improvements: 1) a residual post-norm method combined with cosine attention to improve training stability; 2) a log-spaced continuous position bias method to effectively transfer models pre-trained using low-resolution images to downstream tasks with high-resolution inputs; 3) a self-supervised pre-training method, SimMIM, to reduce the need for vast amounts of labeled images.

![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/swin_transformer_architecture.png)

[Source](https://paperswithcode.com/method/swin-transformer)

## Intended uses & limitations

You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=swinv2) to look for fine-tuned versions on a task that interests you.

### How to use

Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:

```python
from transformers import AutoImageProcessor, AutoModelForImageClassification
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("microsoft/swinv2-tiny-patch4-window16-256")
model = AutoModelForImageClassification.from_pretrained("microsoft/swinv2-tiny-patch4-window16-256")

inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```

For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/swinv2.html#).
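Because the hierarchical feature maps make Swin usable as a general-purpose backbone, a minimal sketch of extracting features instead of class logits (via the headless `AutoModel` class) could look like this:

```python
from transformers import AutoImageProcessor, AutoModel
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("microsoft/swinv2-tiny-patch4-window16-256")
model = AutoModel.from_pretrained("microsoft/swinv2-tiny-patch4-window16-256")

inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
# patch-level features, e.g. as input to a dense prediction head
print(outputs.last_hidden_state.shape)  # (batch_size, num_patches, hidden_size)
```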
### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2111-09883, author = {Ze Liu and Han Hu and Yutong Lin and Zhuliang Yao and Zhenda Xie and Yixuan Wei and Jia Ning and Yue Cao and Zheng Zhang and Li Dong and Furu Wei and Baining Guo}, title = {Swin Transformer {V2:} Scaling Up Capacity and Resolution}, journal = {CoRR}, volume = {abs/2111.09883}, year = {2021}, url = {https://arxiv.org/abs/2111.09883}, eprinttype = {arXiv}, eprint = {2111.09883}, timestamp = {Thu, 02 Dec 2021 15:54:22 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-2111-09883.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
vikhyatk/moondream1
vikhyatk
"2024-02-07T02:57:53Z"
305,428
484
transformers
[ "transformers", "pytorch", "safetensors", "moondream1", "text-generation", "custom_code", "en", "autotrain_compatible", "region:us" ]
text-generation
"2024-01-20T18:10:04Z"
---
language:
- en
---

# 🌔 moondream1

1.6B parameter model built by [@vikhyatk](https://x.com/vikhyatk) using SigLIP, Phi-1.5 and the LLaVA training dataset. The model is released for research purposes only; commercial use is not allowed.

Try it out on [Huggingface Spaces](https://huggingface.co/spaces/vikhyatk/moondream1)!

**Usage**

```
pip install transformers timm einops
```

```python
from transformers import AutoModelForCausalLM, CodeGenTokenizerFast as Tokenizer
from PIL import Image

model_id = "vikhyatk/moondream1"
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
tokenizer = Tokenizer.from_pretrained(model_id)

image = Image.open('<IMAGE_PATH>')
enc_image = model.encode_image(image)
print(model.answer_question(enc_image, "<QUESTION>", tokenizer))
```

## Benchmarks

| Model | Parameters | VQAv2 | GQA | TextVQA |
| --- | --- | --- | --- | --- |
| LLaVA-1.5 | 13.3B | 80.0 | 63.3 | 61.3 |
| LLaVA-1.5 | 7.3B | 78.5 | 62.0 | 58.2 |
| **moondream1** | 1.6B | 74.7 | 57.9 | 35.6 |

## Examples

| Image | Examples |
| --- | --- |
| ![](assets/demo-1.jpg) | **What is the title of this book?**<br>The Little Book of Deep Learning<br><br>**What can you tell me about this book?**<br>The book in the image is titled "The Little Book of Deep Learning." It appears to be a guide or manual that provides information and instructions on the subject of deep learning. The book is described as being small and thin, making it easy to carry and read. It is set in a room with a view outside, suggesting that the person holding the book might be in a living or working space with a view. The content of the book is likely to be complex, as the title implies a focus on advanced concepts in deep learning.<br><br>**Where is the person standing?**<br> The person is standing on a balcony or deck, which is outside. |
| ![](assets/demo-2.jpg) | **What type of food is the girl holding?**<br>The girl is holding a hamburger.<br><br>**What color is the woman's hair?**<br>The woman's hair is white.<br><br>**What is the girl doing in the image?**<br>The girl in the image is eating a hamburger. |
| ![](assets/demo-3.jpg) | **What color is the train?**<br>The train is red.<br><br>**What is the bus doing in the image?**<br> The bus in the image is driving down a city street, passing through an intersection.<br><br>**What kind of public transportation is in the image?**<br>The image features a red train or trolley as the public transportation option. |
| ![](assets/demo-4.jpg) | **What is the girl looking at?**<br>The girl is looking at her reflection in the mirror.<br><br>**What color is the woman's hair?**<br>The woman's hair is brown.<br><br>**What is the woman doing in the mirror?**<br>The woman is adjusting her hair in the mirror. |
| ![](assets/demo-5.jpg) | **What might be the reason for the dog's aggressive behavior?**<br>The dog's aggressive behavior, including its growling, baring its teeth, and showing its sharp teeth, could be due to various reasons. It is possible that the dog is feeling threatened, scared, or defensive about something in its environment, such as another animal, a loud noise, or a person nearby. Alternatively, the dog might be reacting to a command it received from its owner, or it could be experiencing discomfort or pain, which is causing it to display aggressive behavior.
It is essential to observe the dog's body language and context to determine the underlying cause of its aggressive behavior and respond accordingly to address its needs or concerns.<br><br>**What kind of dog is in the picture?**<br>The dog in the picture is a small, black and white chihuahua. |
Xenova/tiny-random-Phi3ForCausalLM
Xenova
"2024-05-15T21:39:48Z"
304,166
0
transformers
[ "transformers", "onnx", "safetensors", "phi3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-05-15T21:17:51Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> Code used to generate the model: ```py from transformers import Phi3Config, Phi3ForCausalLM, AutoTokenizer model = Phi3ForCausalLM(Phi3Config( hidden_size=32, intermediate_size=64, num_attention_heads=4, num_hidden_layers=2, num_key_value_heads=4, pad_token_id=32000, sliding_window=2047, )) tokenizer = AutoTokenizer.from_pretrained('microsoft/Phi-3-mini-4k-instruct') model.push_to_hub('Xenova/tiny-random-Phi3ForCausalLM') tokenizer.push_to_hub('Xenova/tiny-random-Phi3ForCausalLM') ``` ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. 
--> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
sentence-transformers/multi-qa-distilbert-cos-v1
sentence-transformers
"2024-11-05T17:18:43Z"
303,648
21
sentence-transformers
[ "sentence-transformers", "pytorch", "onnx", "safetensors", "openvino", "distilbert", "fill-mask", "feature-extraction", "sentence-similarity", "transformers", "en", "dataset:flax-sentence-embeddings/stackexchange_xml", "dataset:ms_marco", "dataset:gooaq", "dataset:yahoo_answers_topics", "dataset:search_qa", "dataset:eli5", "dataset:natural_questions", "dataset:trivia_qa", "dataset:embedding-data/QQP", "dataset:embedding-data/PAQ_pairs", "dataset:embedding-data/Amazon-QA", "dataset:embedding-data/WikiAnswers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
"2022-03-02T23:29:05Z"
--- language: - en library_name: sentence-transformers tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers datasets: - flax-sentence-embeddings/stackexchange_xml - ms_marco - gooaq - yahoo_answers_topics - search_qa - eli5 - natural_questions - trivia_qa - embedding-data/QQP - embedding-data/PAQ_pairs - embedding-data/Amazon-QA - embedding-data/WikiAnswers pipeline_tag: sentence-similarity --- # multi-qa-distilbert-cos-v1 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and was designed for **semantic search**. It has been trained on 215M (question, answer) pairs from diverse sources. For an introduction to semantic search, have a look at: [SBERT.net - Semantic Search](https://www.sbert.net/examples/applications/semantic-search/README.html) ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer, util query = "How many people live in London?" docs = ["Around 9 Million people live in London", "London is known for its financial district"] #Load the model model = SentenceTransformer('sentence-transformers/multi-qa-distilbert-cos-v1') #Encode query and documents query_emb = model.encode(query) doc_emb = model.encode(docs) #Compute dot score between query and all document embeddings scores = util.dot_score(query_emb, doc_emb)[0].cpu().tolist() #Combine docs & scores doc_score_pairs = list(zip(docs, scores)) #Sort by decreasing score doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True) #Output passages & scores for doc, score in doc_score_pairs: print(score, doc) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the correct pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch import torch.nn.functional as F #Mean Pooling - Take average of all tokens def mean_pooling(model_output, attention_mask): token_embeddings = model_output.last_hidden_state #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) #Encode text def encode(texts): # Tokenize sentences encoded_input = tokenizer(texts, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input, return_dict=True) # Perform pooling embeddings = mean_pooling(model_output, encoded_input['attention_mask']) # Normalize embeddings embeddings = F.normalize(embeddings, p=2, dim=1) return embeddings # Sentences we want sentence embeddings for query = "How many people live in London?" 
docs = ["Around 9 Million people live in London", "London is known for its financial district"]

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/multi-qa-distilbert-cos-v1")
model = AutoModel.from_pretrained("sentence-transformers/multi-qa-distilbert-cos-v1")

#Encode query and docs
query_emb = encode(query)
doc_emb = encode(docs)

#Compute dot score between query and all document embeddings
scores = torch.mm(query_emb, doc_emb.transpose(0, 1))[0].cpu().tolist()

#Combine docs & scores
doc_score_pairs = list(zip(docs, scores))

#Sort by decreasing score
doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)

#Output passages & scores
for doc, score in doc_score_pairs:
    print(score, doc)
```

## Technical Details

Below are some technical details on how this model must be used:

| Setting | Value |
| --- | :---: |
| Dimensions | 768 |
| Produces normalized embeddings | Yes |
| Pooling-Method | Mean pooling |
| Suitable score functions | dot-product (`util.dot_score`), cosine-similarity (`util.cos_sim`), or euclidean distance |

Note: When loaded with `sentence-transformers`, this model produces normalized embeddings with length 1. In that case, dot-product and cosine-similarity are equivalent. Dot-product is preferred as it is faster. Euclidean distance on normalized embeddings is a monotonic function of the dot-product and can also be used.

----

## Background

The project aims to train sentence embedding models on very large sentence-level datasets using a self-supervised contrastive learning objective. We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences was actually paired with it in our dataset.

We developed this model during the [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by Hugging Face. We developed this model as part of the project: [Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPU v3-8s, as well as guidance from Google's Flax, JAX, and Cloud team members on efficient deep learning frameworks.

## Intended uses

Our model is intended to be used for semantic search: it encodes queries/questions and text paragraphs in a dense vector space and finds relevant documents for a given query.

Note that there is a limit of 512 word pieces: text longer than that will be truncated. Further note that the model was trained only on input text of up to 250 word pieces; it might not work well for longer text.

## Training procedure

The full training script is accessible in this current repository: `train_script.py`.

### Pre-training

We use the pretrained [`distilbert-base-uncased`](https://huggingface.co/distilbert-base-uncased) model. Please refer to the model card for more detailed information about the pre-training procedure.

#### Training

We use a concatenation of multiple datasets to fine-tune our model. In total we have about 215M (question, answer) pairs. We sampled each dataset with a weighted probability; the configuration is detailed in the `data_config.json` file.
The model was trained with [MultipleNegativesRankingLoss](https://www.sbert.net/docs/package_reference/losses.html#multiplenegativesrankingloss) using mean pooling, cosine similarity as the similarity function, and a scale of 20.

| Dataset | Number of training tuples |
|--------------------------------------------------------|:--------------------------:|
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs from WikiAnswers | 77,427,422 |
| [PAQ](https://github.com/facebookresearch/PAQ) Automatically generated (Question, Paragraph) pairs for each paragraph in Wikipedia | 64,371,441 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs from all StackExchanges | 25,316,456 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs from all StackExchanges | 21,396,559 |
| [MS MARCO](https://microsoft.github.io/msmarco/) Triplets (query, answer, hard_negative) for 500k queries from Bing search engine | 17,579,773 |
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) (query, answer) pairs for 3M Google queries and Google featured snippet | 3,012,496 |
| [Amazon-QA](http://jmcauley.ucsd.edu/data/amazon/qa/) (Question, Answer) pairs from Amazon product pages | 2,448,839 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) pairs from Yahoo Answers | 1,198,260 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) pairs from Yahoo Answers | 681,164 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) pairs from Yahoo Answers | 659,896 |
| [SearchQA](https://huggingface.co/datasets/search_qa) (Question, Answer) pairs for 140k questions, each with Top5 Google snippets on that question | 582,261 |
| [ELI5](https://huggingface.co/datasets/eli5) (Question, Answer) pairs from Reddit ELI5 (explainlikeimfive) | 325,475 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions pairs (titles) | 304,525 |
| [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) (Question, Duplicate_Question, Hard_Negative) triplets for Quora Questions Pairs dataset | 103,663 |
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) (Question, Paragraph) pairs for 100k real Google queries with relevant Wikipedia paragraph | 100,231 |
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) (Question, Paragraph) pairs from SQuAD2.0 dataset | 87,599 |
| [TriviaQA](https://huggingface.co/datasets/trivia_qa) (Question, Evidence) pairs | 73,346 |
| **Total** | **214,988,242** |
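For reference, here is a minimal, hypothetical sketch of that loss configuration with `sentence-transformers`; the two toy pairs below stand in for the real ~215M-pair mixture:

```python
from sentence_transformers import SentenceTransformer, InputExample, losses, models, util
from torch.utils.data import DataLoader

# Rebuild the architecture described above: transformer + mean pooling + normalization
word = models.Transformer("distilbert-base-uncased", max_seq_length=250)
pooling = models.Pooling(word.get_word_embedding_dimension(), pooling_mode="mean")
model = SentenceTransformer(modules=[word, pooling, models.Normalize()])

train_examples = [
    InputExample(texts=["How many people live in London?",
                        "Around 9 Million people live in London"]),
    InputExample(texts=["What is the capital of France?",
                        "Paris is the capital of France"]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)

# scale=20 and cosine similarity, matching the reported configuration
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20,
                                                 similarity_fct=util.cos_sim)
model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1)
```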
facebook/wav2vec2-xlsr-53-espeak-cv-ft
facebook
"2021-12-10T17:18:39Z"
301,774
24
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "speech", "audio", "phoneme-recognition", "dataset:common_voice", "arxiv:2109.11680", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2022-03-02T23:29:05Z"
--- language: multi-lingual datasets: - common_voice tags: - speech - audio - automatic-speech-recognition - phoneme-recognition widget: - example_title: Librispeech sample 1 src: https://cdn-media.huggingface.co/speech_samples/sample1.flac - example_title: Librispeech sample 2 src: https://cdn-media.huggingface.co/speech_samples/sample2.flac license: apache-2.0 --- # Wav2Vec2-Large-XLSR-53 finetuned on multi-lingual Common Voice This checkpoint leverages the pretrained checkpoint [wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) and is fine-tuned on [CommonVoice](https://huggingface.co/datasets/common_voice) to recognize phonetic labels in multiple languages. When using the model make sure that your speech input is sampled at 16kHz. Note that the model outputs a string of phonetic labels. A dictionary mapping phonetic labels to words has to be used to map the phonetic output labels to output words. [Paper: Simple and Effective Zero-shot Cross-lingual Phoneme Recognition](https://arxiv.org/abs/2109.11680) Authors: Qiantong Xu, Alexei Baevski, Michael Auli **Abstract** Recent progress in self-training, self-supervised pretraining and unsupervised learning enabled well performing speech recognition systems without any labeled data. However, in many cases there is labeled data available for related languages which is not utilized by these methods. This paper extends previous work on zero-shot cross-lingual transfer learning by fine-tuning a multilingually pretrained wav2vec 2.0 model to transcribe unseen languages. This is done by mapping phonemes of the training languages to the target language using articulatory features. Experiments show that this simple method significantly outperforms prior work which introduced task-specific architectures and used only part of a monolingually pretrained model. The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20. # Usage To transcribe audio files the model can be used as a standalone acoustic model as follows: ```python from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC from datasets import load_dataset import torch # load model and processor processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-xlsr-53-espeak-cv-ft") model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-xlsr-53-espeak-cv-ft") # load dummy dataset and read soundfiles ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation") # tokenize input_values = processor(ds[0]["audio"]["array"], return_tensors="pt").input_values # retrieve logits with torch.no_grad(): logits = model(input_values).logits # take argmax and decode predicted_ids = torch.argmax(logits, dim=-1) transcription = processor.batch_decode(predicted_ids) # => should give ['m ɪ s t ɚ k w ɪ l t ɚ ɪ z ð ɪ ɐ p ɑː s əl l ʌ v ð ə m ɪ d əl k l æ s ɪ z æ n d w iː aʊ ɡ l æ d t ə w ɛ l k ə m h ɪ z ɡ ɑː s p ə'] ```
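To make the mapping step concrete, here is a deliberately tiny, hypothetical illustration; a real system would use a full pronunciation lexicon for the target language (the entries below are made up for this sketch):

```python
# Hypothetical phoneme-to-word lexicon; real lexica have many thousands of entries.
lexicon = {
    "m ɪ s t ɚ": "mister",
    "k w ɪ l t ɚ": "quilter",
}

phoneme_transcription = "m ɪ s t ɚ k w ɪ l t ɚ"

# Greedily match the longest lexicon entry at each position (toy decoder).
tokens = phoneme_transcription.split()
words, i = [], 0
while i < len(tokens):
    for j in range(len(tokens), i, -1):
        chunk = " ".join(tokens[i:j])
        if chunk in lexicon:
            words.append(lexicon[chunk])
            i = j
            break
    else:
        words.append("<unk>")  # no lexicon entry covers this phoneme
        i += 1

print(" ".join(words))  # -> "mister quilter"
```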
google-t5/t5-3b
google-t5
"2024-01-29T15:44:49Z"
301,441
42
transformers
[ "transformers", "pytorch", "tf", "safetensors", "t5", "text2text-generation", "summarization", "translation", "en", "fr", "ro", "de", "multilingual", "dataset:c4", "arxiv:1805.12471", "arxiv:1708.00055", "arxiv:1704.05426", "arxiv:1606.05250", "arxiv:1808.09121", "arxiv:1810.12885", "arxiv:1905.10044", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
translation
"2022-03-02T23:29:04Z"
---
language:
- en
- fr
- ro
- de
- multilingual
license: apache-2.0
tags:
- summarization
- translation
datasets:
- c4
---

# Model Card for T5-3B

![model image](https://camo.githubusercontent.com/623b4dea0b653f2ad3f36c71ebfe749a677ac0a1/68747470733a2f2f6d69726f2e6d656469756d2e636f6d2f6d61782f343030362f312a44304a31674e51663876727255704b657944387750412e706e67)

# Table of Contents

1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Training Details](#training-details)
5. [Evaluation](#evaluation)
6. [Environmental Impact](#environmental-impact)
7. [Citation](#citation)
8. [Model Card Authors](#model-card-authors)
9. [How To Get Started With the Model](#how-to-get-started-with-the-model)

# Model Details

## Model Description

The developers of the Text-To-Text Transfer Transformer (T5) [write](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html):

> With T5, we propose reframing all NLP tasks into a unified text-to-text-format where the input and output are always text strings, in contrast to BERT-style models that can only output either a class label or a span of the input. Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task.

T5-3B is the checkpoint with 3 billion parameters.

- **Developed by:** Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. See [associated paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) and [GitHub repo](https://github.com/google-research/text-to-text-transfer-transformer#released-model-checkpoints)
- **Model type:** Language model
- **Language(s) (NLP):** English, French, Romanian, German
- **License:** Apache 2.0
- **Related Models:** [All T5 Checkpoints](https://huggingface.co/models?search=t5)
- **Resources for more information:**
  - [Research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf)
  - [Google's T5 Blog Post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html)
  - [GitHub Repo](https://github.com/google-research/text-to-text-transfer-transformer)
  - [Hugging Face T5 Docs](https://huggingface.co/docs/transformers/model_doc/t5)

# Uses

## Direct Use and Downstream Use

In a [blog post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html), the developers write:

> Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task, including machine translation, document summarization, question answering, and classification tasks (e.g., sentiment analysis). We can even apply T5 to regression tasks by training it to predict the string representation of a number instead of the number itself.

See the [blog post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) and [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) for further details.

## Out-of-Scope Use

More information needed.

# Bias, Risks, and Limitations

More information needed.

## Recommendations

More information needed.

# Training Details

## Training Data

The model is pre-trained on the [Colossal Clean Crawled Corpus (C4)](https://www.tensorflow.org/datasets/catalog/c4), which was developed and released in the context of the same [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) as T5.

The model was pre-trained on a **multi-task mixture of unsupervised (1.) and supervised tasks (2.)**.
The following datasets were used for (1.) and (2.): 1. **Datasets used for Unsupervised denoising objective**: - [C4](https://huggingface.co/datasets/c4) - [Wiki-DPR](https://huggingface.co/datasets/wiki_dpr) 2. **Datasets used for Supervised text-to-text language modeling objective** - Sentence acceptability judgment - CoLA [Warstadt et al., 2018](https://arxiv.org/abs/1805.12471) - Sentiment analysis - SST-2 [Socher et al., 2013](https://nlp.stanford.edu/~socherr/EMNLP2013_RNTN.pdf) - Paraphrasing/sentence similarity - MRPC [Dolan and Brockett, 2005](https://aclanthology.org/I05-5002) - STS-B [Cer et al., 2017](https://arxiv.org/abs/1708.00055) - QQP [Iyer et al., 2017](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) - Natural language inference - MNLI [Williams et al., 2017](https://arxiv.org/abs/1704.05426) - QNLI [Rajpurkar et al., 2016](https://arxiv.org/abs/1606.05250) - RTE [Dagan et al., 2005](https://link.springer.com/chapter/10.1007/11736790_9) - CB [De Marneffe et al., 2019](https://semanticsarchive.net/Archive/Tg3ZGI2M/Marneffe.pdf) - Sentence completion - COPA [Roemmele et al., 2011](https://www.researchgate.net/publication/221251392_Choice_of_Plausible_Alternatives_An_Evaluation_of_Commonsense_Causal_Reasoning) - Word sense disambiguation - WIC [Pilehvar and Camacho-Collados, 2018](https://arxiv.org/abs/1808.09121) - Question answering - MultiRC [Khashabi et al., 2018](https://aclanthology.org/N18-1023) - ReCoRD [Zhang et al., 2018](https://arxiv.org/abs/1810.12885) - BoolQ [Clark et al., 2019](https://arxiv.org/abs/1905.10044) ## Training Procedure In their [abstract](https://jmlr.org/papers/volume21/20-074/20-074.pdf), the model developers write: > In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. The framework introduced, the T5 framework, involves a training procedure that brings together the approaches studied in the paper. See the [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) for further details. # Evaluation ## Testing Data, Factors & Metrics The developers evaluated the model on 24 tasks; see the [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) for full details. ## Results For full results for T5-3B, see the [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf), Table 14. # Environmental Impact Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** Google Cloud TPU Pods - **Hours used:** More information needed - **Cloud Provider:** GCP - **Compute Region:** More information needed - **Carbon Emitted:** More information needed # Citation **BibTeX:** ```bibtex @article{2020t5, author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. 
Liu}, title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer}, journal = {Journal of Machine Learning Research}, year = {2020}, volume = {21}, number = {140}, pages = {1-67}, url = {http://jmlr.org/papers/v21/20-074.html} } ``` **APA:** - Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., ... & Liu, P. J. (2020). Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140), 1-67. # Model Card Authors This model card was written by the team at Hugging Face. # How to Get Started with the Model See the [Hugging Face T5](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5Model) docs and a [Colab Notebook](https://colab.research.google.com/github/google-research/text-to-text-transfer-transformer/blob/main/notebooks/t5-trivia.ipynb) created by the model developers for more context on how to get started with this checkpoint.
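To make the text-to-text interface above concrete, here is a minimal, hedged sketch using `transformers`; the checkpoint id `t5-3b`, the task prefix, and the generation settings follow standard T5 conventions rather than anything stated in this card, and the 3B checkpoint requires substantial memory:

```python
# A hedged sketch of T5's text-to-text usage: a task prefix selects the task.
# Requires `pip install transformers sentencepiece`; the 3B weights are large.
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-3b")
model = T5ForConditionalGeneration.from_pretrained("t5-3b")

input_ids = tokenizer(
    "translate English to German: The house is wonderful.", return_tensors="pt"
).input_ids
outputs = model.generate(input_ids, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same pattern covers summarization (`"summarize: ..."`) and the other tasks listed above; only the prefix changes.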
vikp/texify
vikp
"2024-01-03T05:32:08Z"
300,641
10
transformers
[ "transformers", "safetensors", "vision-encoder-decoder", "image-text-to-text", "license:cc-by-sa-4.0", "endpoints_compatible", "region:us" ]
image-text-to-text
"2023-12-18T21:04:35Z"
--- license: cc-by-sa-4.0 --- OCR equation images and text to LaTeX. See [texify](https://github.com/VikParuchuri/texify).
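As a hedged sketch of typical usage (the function names below follow the linked repository's README at the time of writing and may have changed; the image path is illustrative):

```python
# Hedged sketch: OCR an equation image to LaTeX with the texify package.
# API names are taken from the linked repo's README and may have changed.
from PIL import Image
from texify.inference import batch_inference
from texify.model.model import load_model
from texify.model.processor import load_processor

model = load_model()            # loads this checkpoint by default
processor = load_processor()

image = Image.open("equation.png")  # illustrative input
results = batch_inference([image], model, processor)
print(results[0])                   # LaTeX string for the equation
```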
fxmarty/pix2struct-tiny-random
fxmarty
"2023-06-01T09:47:36Z"
300,506
2
transformers
[ "transformers", "pytorch", "pix2struct", "image-text-to-text", "image-to-text", "license:mit", "endpoints_compatible", "region:us" ]
image-to-text
"2023-06-01T09:33:18Z"
--- license: mit pipeline_tag: image-to-text ---
deepset/roberta-base-squad2-distilled
deepset
"2024-09-26T08:04:05Z"
298,722
14
transformers
[ "transformers", "pytorch", "safetensors", "roberta", "question-answering", "exbert", "en", "dataset:squad_v2", "license:mit", "model-index", "endpoints_compatible", "region:us" ]
question-answering
"2022-03-02T23:29:05Z"
--- language: en license: mit tags: - exbert datasets: - squad_v2 thumbnail: https://thumb.tildacdn.com/tild3433-3637-4830-a533-353833613061/-/resize/720x/-/format/webp/germanquad.jpg model-index: - name: deepset/roberta-base-squad2-distilled results: - task: type: question-answering name: Question Answering dataset: name: squad_v2 type: squad_v2 config: squad_v2 split: validation metrics: - type: exact_match value: 80.8593 name: Exact Match verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzVjNzkxNmNiNDkzNzdiYjJjZGM3ZTViMGJhOGM2ZjFmYjg1MjYxMDM2YzM5NWMwNDIyYzNlN2QwNGYyNDMzZSIsInZlcnNpb24iOjF9.Rgww8tf8D7nF2dh2U_DMrFzmp87k8s7RFibrDXSvQyA66PGWXwjlsd1552lzjHnNV5hvHUM1-h3PTuY_5p64BA - type: f1 value: 84.0104 name: F1 verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTAyZDViNWYzNjA4OWQ5MzgyYmQ2ZDlhNWRhMTIzYTYxYzViMmI4NWE4ZGU5MzVhZTAwNTRlZmRlNWUwMjI0ZSIsInZlcnNpb24iOjF9.Er21BNgJ3jJXLuZtpubTYq9wCwO1i_VLQFwS5ET0e4eAYVVj0aOA40I5FvP5pZac3LjkCnVacxzsFWGCYVmnDA - task: type: question-answering name: Question Answering dataset: name: squad type: squad config: plain_text split: validation metrics: - type: exact_match value: 86.225 name: Exact Match - type: f1 value: 92.483 name: F1 - task: type: question-answering name: Question Answering dataset: name: adversarial_qa type: adversarial_qa config: adversarialQA split: validation metrics: - type: exact_match value: 29.900 name: Exact Match - type: f1 value: 41.183 name: F1 - task: type: question-answering name: Question Answering dataset: name: squad_adversarial type: squad_adversarial config: AddOneSent split: validation metrics: - type: exact_match value: 79.071 name: Exact Match - type: f1 value: 84.472 name: F1 - task: type: question-answering name: Question Answering dataset: name: squadshifts amazon type: squadshifts config: amazon split: test metrics: - type: exact_match value: 70.733 name: Exact Match - type: f1 value: 83.958 name: F1 - task: type: question-answering name: Question Answering dataset: name: squadshifts new_wiki type: squadshifts config: new_wiki split: test metrics: - type: exact_match value: 82.011 name: Exact Match - type: f1 value: 91.092 name: F1 - task: type: question-answering name: Question Answering dataset: name: squadshifts nyt type: squadshifts config: nyt split: test metrics: - type: exact_match value: 84.203 name: Exact Match - type: f1 value: 91.521 name: F1 - task: type: question-answering name: Question Answering dataset: name: squadshifts reddit type: squadshifts config: reddit split: test metrics: - type: exact_match value: 72.029 name: Exact Match - type: f1 value: 83.454 name: F1 --- # roberta-base distilled for Extractive QA ## Overview **Language model:** deepset/roberta-base-squad2-distilled **Language:** English **Training data:** SQuAD 2.0 training set **Eval data:** SQuAD 2.0 dev set **Code:** See [an example extractive QA pipeline built with Haystack](https://haystack.deepset.ai/tutorials/34_extractive_qa_pipeline) **Infrastructure**: 4x V100 GPU **Published**: Dec 8th, 2021 ## Details - haystack's distillation feature was used for training. deepset/roberta-large-squad2 was used as the teacher model. ## Hyperparameters ``` batch_size = 80 n_epochs = 4 max_seq_len = 384 learning_rate = 3e-5 lr_schedule = LinearWarmup embeds_dropout_prob = 0.1 temperature = 1.5 distillation_loss_weight = 0.75 ``` ## Usage ### In Haystack Haystack is an AI orchestration framework to build customizable, production-ready LLM applications. 
You can use this model in Haystack to do extractive question answering on documents. To load and run the model with [Haystack](https://github.com/deepset-ai/haystack/): ```python # After running pip install haystack-ai "transformers[torch,sentencepiece]" from haystack import Document from haystack.components.readers import ExtractiveReader docs = [ Document(content="Python is a popular programming language"), Document(content="python ist eine beliebte Programmiersprache"), ] reader = ExtractiveReader(model="deepset/roberta-base-squad2-distilled") reader.warm_up() question = "What is a popular programming language?" result = reader.run(query=question, documents=docs) # {'answers': [ExtractedAnswer(query='What is a popular programming language?', score=0.5740374326705933, data='python', document=Document(id=..., content: '...'), context=None, document_offset=ExtractedAnswer.Span(start=0, end=6),...)]} ``` For a complete example with an extractive question answering pipeline that scales over many documents, check out the [corresponding Haystack tutorial](https://haystack.deepset.ai/tutorials/34_extractive_qa_pipeline). ### In Transformers ```python from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline model_name = "deepset/roberta-base-squad2-distilled" # a) Get predictions nlp = pipeline('question-answering', model=model_name, tokenizer=model_name) QA_input = { 'question': 'Why is model conversion important?', 'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.' } res = nlp(QA_input) # b) Load model & tokenizer model = AutoModelForQuestionAnswering.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) ``` ## Performance ``` "exact": 79.8366040596311 "f1": 83.916407079888 ``` ## Authors **Timo Möller:** timo.moeller@deepset.ai **Julian Risch:** julian.risch@deepset.ai **Malte Pietsch:** malte.pietsch@deepset.ai **Michel Bartels:** michel.bartels@deepset.ai ## About us <div class="grid lg:grid-cols-2 gap-x-4 gap-y-3"> <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center"> <img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/deepset-logo-colored.png" class="w-40"/> </div> <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center"> <img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/haystack-logo-colored.png" class="w-40"/> </div> </div> [deepset](http://deepset.ai/) is the company behind the production-ready open-source AI framework [Haystack](https://haystack.deepset.ai/). Some of our other work: - [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")](https://huggingface.co/deepset/tinyroberta-squad2) - [German BERT](https://deepset.ai/german-bert), [GermanQuAD and GermanDPR](https://deepset.ai/germanquad), [German embedding model](https://huggingface.co/mixedbread-ai/deepset-mxbai-embed-de-large-v1) - [deepset Cloud](https://www.deepset.ai/deepset-cloud-product), [deepset Studio](https://www.deepset.ai/deepset-studio) ## Get in touch and join the Haystack community <p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://docs.haystack.deepset.ai">Documentation</a></strong>. 
We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community">Discord community open to everyone!</a></strong></p> [Twitter](https://twitter.com/Haystack_AI) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://haystack.deepset.ai/) | [YouTube](https://www.youtube.com/@deepset_ai) By the way: [we're hiring!](http://www.deepset.ai/jobs)
EleutherAI/pythia-14m
EleutherAI
"2023-07-26T17:38:10Z"
297,769
19
transformers
[ "transformers", "pytorch", "safetensors", "gpt_neox", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2023-07-19T13:57:54Z"
Entry not found
finiteautomata/bertweet-base-sentiment-analysis
finiteautomata
"2023-02-17T02:17:31Z"
297,425
147
transformers
[ "transformers", "pytorch", "tf", "roberta", "text-classification", "sentiment-analysis", "en", "arxiv:2106.09462", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2022-03-02T23:29:05Z"
--- language: - en tags: - sentiment-analysis --- # Sentiment Analysis in English ## bertweet-sentiment-analysis Repository: [https://github.com/finiteautomata/pysentimiento/](https://github.com/finiteautomata/pysentimiento/) Model trained with the SemEval 2017 corpus (~40k tweets). The base model is [BERTweet](https://github.com/VinAIResearch/BERTweet), a RoBERTa model trained on English tweets. Uses `POS`, `NEG`, `NEU` labels. ## License `pysentimiento` is an open-source library for non-commercial use and scientific research purposes only. Please be aware that models are trained with third-party datasets and are subject to their respective licenses. 1. [TASS Dataset license](http://tass.sepln.org/tass_data/download.php) 2. [SemEval 2017 Dataset license]() ## Citation If you use `pysentimiento` in your work, please cite [this paper](https://arxiv.org/abs/2106.09462) ``` @misc{perez2021pysentimiento, title={pysentimiento: A Python Toolkit for Sentiment Analysis and SocialNLP tasks}, author={Juan Manuel Pérez and Juan Carlos Giudici and Franco Luque}, year={2021}, eprint={2106.09462}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` Enjoy! 🤗
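For quick experimentation, a hedged sketch using the `transformers` pipeline (the example text is illustrative; the labels are the `POS`/`NEG`/`NEU` set documented above):

```python
# Hedged sketch (not from the original card): inference via transformers.
from transformers import pipeline

sentiment = pipeline(
    "sentiment-analysis",
    model="finiteautomata/bertweet-base-sentiment-analysis",
)
print(sentiment("I love this song!"))
# Expected output shape: [{'label': 'POS', 'score': ...}]
```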
google/vit-huge-patch14-224-in21k
google
"2024-02-14T13:43:41Z"
296,149
18
transformers
[ "transformers", "pytorch", "tf", "jax", "safetensors", "vit", "image-feature-extraction", "vision", "dataset:imagenet-21k", "arxiv:2010.11929", "arxiv:2006.03677", "license:apache-2.0", "region:us" ]
image-feature-extraction
"2022-03-02T23:29:05Z"
--- license: apache-2.0 tags: - vision datasets: - imagenet-21k inference: false --- # Vision Transformer (huge-sized model) Vision Transformer (ViT) model pre-trained on ImageNet-21k (14 million images, 21,843 classes) at resolution 224x224. It was introduced in the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Dosovitskiy et al. and first released in [this repository](https://github.com/google-research/vision_transformer). However, the weights were converted from the [timm repository](https://github.com/rwightman/pytorch-image-models) by Ross Wightman, who already converted the weights from JAX to PyTorch. Credits go to him. Disclaimer: The team releasing ViT did not write a model card for this model, so this model card has been written by the Hugging Face team. ## Model description The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. Images are presented to the model as a sequence of fixed-size patches (resolution 14x14), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder. Note that this model does not provide any fine-tuned heads, as these were zeroed by Google researchers. However, the model does include the pre-trained pooler, which can be used for downstream tasks (such as image classification). By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=google/vit) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model: ```python from transformers import ViTFeatureExtractor, ViTModel from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) feature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-huge-patch14-224-in21k') model = ViTModel.from_pretrained('google/vit-huge-patch14-224-in21k') inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) last_hidden_states = outputs.last_hidden_state ``` Currently, both the feature extractor and model support PyTorch. Tensorflow and JAX/FLAX are coming soon, and the API of ViTFeatureExtractor might change. ## Training data The ViT model was pretrained on [ImageNet-21k](http://www.image-net.org/), a dataset consisting of 14 million images and 21k classes. ## Training procedure ### Preprocessing The exact details of preprocessing of images during training/validation can be found [here](https://github.com/google-research/vision_transformer/blob/master/vit_jax/input_pipeline.py). Images are resized/rescaled to the same resolution (224x224) and normalized across the RGB channels with mean (0.5, 0.5, 0.5) and standard deviation (0.5, 0.5, 0.5). 
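Before the pretraining details, here is a hedged sketch of the linear-probe setup mentioned in the model description (a linear layer on top of the [CLS] token); it continues from the `model` and `inputs` prepared in the snippet above, and the class count is illustrative:

```python
# Hedged sketch (not from the original card): linear probing on the frozen
# [CLS] embedding, continuing from `model` and `inputs` defined above.
import torch

with torch.no_grad():
    features = model(**inputs).last_hidden_state[:, 0]  # [CLS] token; hidden size is 1280 for ViT-huge
classifier = torch.nn.Linear(features.size(-1), 10)     # 10 target classes is illustrative
logits = classifier(features)
```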
### Pretraining The model was trained on TPUv3 hardware (8 cores). All model variants are trained with a batch size of 4096 and learning rate warmup of 10k steps. For ImageNet, the authors found it beneficial to additionally apply gradient clipping at global norm 1. Pre-training resolution is 224. ## Evaluation results For evaluation results on several image classification benchmarks, we refer to tables 2 and 5 of the original paper. Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance. ### BibTeX entry and citation info ```bibtex @misc{wu2020visual, title={Visual Transformers: Token-based Image Representation and Processing for Computer Vision}, author={Bichen Wu and Chenfeng Xu and Xiaoliang Dai and Alvin Wan and Peizhao Zhang and Zhicheng Yan and Masayoshi Tomizuka and Joseph Gonzalez and Kurt Keutzer and Peter Vajda}, year={2020}, eprint={2006.03677}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` ```bibtex @inproceedings{deng2009imagenet, title={Imagenet: A large-scale hierarchical image database}, author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li}, booktitle={2009 IEEE conference on computer vision and pattern recognition}, pages={248--255}, year={2009}, organization={Ieee} } ```
google/gemma-7b
google
"2024-06-27T14:09:40Z"
296,031
3,050
transformers
[ "transformers", "safetensors", "gguf", "gemma", "text-generation", "arxiv:2305.14314", "arxiv:2312.11805", "arxiv:2009.03300", "arxiv:1905.07830", "arxiv:1911.11641", "arxiv:1904.09728", "arxiv:1905.10044", "arxiv:1907.10641", "arxiv:1811.00937", "arxiv:1809.02789", "arxiv:1911.01547", "arxiv:1705.03551", "arxiv:2107.03374", "arxiv:2108.07732", "arxiv:2110.14168", "arxiv:2304.06364", "arxiv:2206.04615", "arxiv:1804.06876", "arxiv:2110.08193", "arxiv:2009.11462", "arxiv:2101.11718", "arxiv:1804.09301", "arxiv:2109.07958", "arxiv:2203.09509", "license:gemma", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-02-08T22:36:43Z"
--- library_name: transformers license: gemma extra_gated_heading: Access Gemma on Hugging Face extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging Face and click below. Requests are processed immediately. extra_gated_button_content: Acknowledge license --- # Gemma Model Card **Model Page**: [Gemma](https://ai.google.dev/gemma/docs) This model card corresponds to the 7B base version of the Gemma model. You can also visit the model card of the [2B base model](https://huggingface.co/google/gemma-2b), [7B instruct model](https://huggingface.co/google/gemma-7b-it), and [2B instruct model](https://huggingface.co/google/gemma-2b-it). **Resources and Technical Documentation**: * [Gemma Technical Report](https://storage.googleapis.com/deepmind-media/gemma/gemma-report.pdf) * [Responsible Generative AI Toolkit](https://ai.google.dev/responsible) * [Gemma on Kaggle](https://www.kaggle.com/models/google/gemma) * [Gemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335?version=gemma-7b-gg-hf) **Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent/verify/huggingface?returnModelRepoId=google/gemma-7b) **Authors**: Google ## Model Information Summary description and brief definition of inputs and outputs. ### Description Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. They are text-to-text, decoder-only large language models, available in English, with open weights, pre-trained variants, and instruction-tuned variants. Gemma models are well-suited for a variety of text generation tasks, including question answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in environments with limited resources such as a laptop, desktop or your own cloud infrastructure, democratizing access to state-of-the-art AI models and helping foster innovation for everyone. ### Context Length Models are trained on a context length of 8192 tokens. ### Usage Below we share some code snippets on how to quickly get started with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your use case. #### Fine-tuning examples You can find fine-tuning notebooks under the [`examples/` directory](https://huggingface.co/google/gemma-7b/tree/main/examples). We provide: * A script to perform Supervised Fine-Tuning (SFT) on the UltraChat dataset using [QLoRA](https://huggingface.co/papers/2305.14314) * A script to perform SFT using FSDP on TPU devices * A notebook that you can run on a free-tier Google Colab instance to perform SFT on an English quotes dataset. You can also find a copy of the notebook [here](https://github.com/huggingface/notebooks/blob/main/peft/gemma_7b_english_quotes.ipynb). #### Running the model on a CPU ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b") model = AutoModelForCausalLM.from_pretrained("google/gemma-7b") input_text = "Write me a poem about Machine Learning." 
input_ids = tokenizer(input_text, return_tensors="pt") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` #### Running the model on a single / multi GPU ```python # pip install accelerate from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b") model = AutoModelForCausalLM.from_pretrained("google/gemma-7b", device_map="auto") input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` #### Running the model on a GPU using different precisions * _Using `torch.float16`_ ```python # pip install accelerate from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b") model = AutoModelForCausalLM.from_pretrained("google/gemma-7b", device_map="auto", revision="float16") input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` * _Using `torch.bfloat16`_ ```python # pip install accelerate import torch from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b") model = AutoModelForCausalLM.from_pretrained("google/gemma-7b", device_map="auto", torch_dtype=torch.bfloat16) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` #### Quantized Versions through `bitsandbytes` * _Using 8-bit precision (int8)_ ```python # pip install bitsandbytes accelerate from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig quantization_config = BitsAndBytesConfig(load_in_8bit=True) tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b") model = AutoModelForCausalLM.from_pretrained("google/gemma-7b", quantization_config=quantization_config) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` * _Using 4-bit precision_ ```python # pip install bitsandbytes accelerate from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig quantization_config = BitsAndBytesConfig(load_in_4bit=True) tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b") model = AutoModelForCausalLM.from_pretrained("google/gemma-7b", quantization_config=quantization_config) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` #### Other optimizations * _Flash Attention 2_ First make sure to install `flash-attn` in your environment `pip install flash-attn` ```diff model = AutoModelForCausalLM.from_pretrained( model_id, torch_dtype=torch.float16, + attn_implementation="flash_attention_2" ).to(0) ``` ### Inputs and outputs * **Input:** Text string, such as a question, a prompt, or a document to be summarized. * **Output:** Generated English-language text in response to the input, such as an answer to a question, or a summary of a document. ## Model Data Data used for model training and how the data was processed. 
### Training Dataset These models were trained on a dataset of text data that includes a wide variety of sources, totaling 6 trillion tokens. Here are the key components: * Web Documents: A diverse collection of web text ensures the model is exposed to a broad range of linguistic styles, topics, and vocabulary. Primarily English-language content. * Code: Exposing the model to code helps it to learn the syntax and patterns of programming languages, which improves its ability to generate code or understand code-related questions. * Mathematics: Training on mathematical text helps the model learn logical reasoning, symbolic representation, and to address mathematical queries. The combination of these diverse data sources is crucial for training a powerful language model that can handle a wide variety of different tasks and text formats. ### Data Preprocessing Here are the key data cleaning and filtering methods applied to the training data: * CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was applied at multiple stages in the data preparation process to ensure the exclusion of harmful and illegal content. * Sensitive Data Filtering: As part of making Gemma pre-trained models safe and reliable, automated techniques were used to filter out certain personal information and other sensitive data from training sets. * Additional methods: Filtering based on content quality and safety in line with [our policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11). ## Implementation Information Details about the model internals. ### Hardware Gemma was trained using the latest generation of [Tensor Processing Unit (TPU)](https://cloud.google.com/tpu/docs/intro-to-tpu) hardware (TPUv5e). Training large language models requires significant computational power. TPUs, designed specifically for matrix operations common in machine learning, offer several advantages in this domain: * Performance: TPUs are specifically designed to handle the massive computations involved in training LLMs. They can speed up training considerably compared to CPUs. * Memory: TPUs often come with large amounts of high-bandwidth memory, allowing for the handling of large models and batch sizes during training. This can lead to better model quality. * Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for handling the growing complexity of large foundation models. You can distribute training across multiple TPU devices for faster and more efficient processing. * Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective solution for training large models compared to CPU-based infrastructure, especially when considering the time and resources saved due to faster training. * These advantages are aligned with [Google's commitments to operate sustainably](https://sustainability.google/operating-sustainably/). ### Software Training was done using [JAX](https://github.com/google/jax) and [ML Pathways](https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture). JAX allows researchers to take advantage of the latest generation of hardware, including TPUs, for faster and more efficient training of large models. ML Pathways is Google's latest effort to build artificially intelligent systems capable of generalizing across multiple tasks. This is especially suitable for [foundation models](https://ai.google/discover/foundation-models/), including large language models like these ones. 
Together, JAX and ML Pathways are used as described in the [paper about the Gemini family of models](https://arxiv.org/abs/2312.11805); "the 'single controller' programming model of Jax and Pathways allows a single Python process to orchestrate the entire training run, dramatically simplifying the development workflow." ## Evaluation Model evaluation metrics and results. ### Benchmark Results These models were evaluated against a large collection of different datasets and metrics to cover different aspects of text generation: | Benchmark | Metric | 2B Params | 7B Params | | ------------------------------ | ------------- | ----------- | --------- | | [MMLU](https://arxiv.org/abs/2009.03300) | 5-shot, top-1 | 42.3 | 64.3 | | [HellaSwag](https://arxiv.org/abs/1905.07830) | 0-shot | 71.4 | 81.2 | | [PIQA](https://arxiv.org/abs/1911.11641) | 0-shot | 77.3 | 81.2 | | [SocialIQA](https://arxiv.org/abs/1904.09728) | 0-shot | 49.7 | 51.8 | | [BoolQ](https://arxiv.org/abs/1905.10044) | 0-shot | 69.4 | 83.2 | | [WinoGrande](https://arxiv.org/abs/1907.10641) | partial score | 65.4 | 72.3 | | [CommonsenseQA](https://arxiv.org/abs/1811.00937) | 7-shot | 65.3 | 71.3 | | [OpenBookQA](https://arxiv.org/abs/1809.02789) | | 47.8 | 52.8 | | [ARC-e](https://arxiv.org/abs/1911.01547) | | 73.2 | 81.5 | | [ARC-c](https://arxiv.org/abs/1911.01547) | | 42.1 | 53.2 | | [TriviaQA](https://arxiv.org/abs/1705.03551) | 5-shot | 53.2 | 63.4 | | [Natural Questions](https://github.com/google-research-datasets/natural-questions) | 5-shot | 12.5 | 23 | | [HumanEval](https://arxiv.org/abs/2107.03374) | pass@1 | 22.0 | 32.3 | | [MBPP](https://arxiv.org/abs/2108.07732) | 3-shot | 29.2 | 44.4 | | [GSM8K](https://arxiv.org/abs/2110.14168) | maj@1 | 17.7 | 46.4 | | [MATH](https://arxiv.org/abs/2103.03874) | 4-shot | 11.8 | 24.3 | | [AGIEval](https://arxiv.org/abs/2304.06364) | | 24.2 | 41.7 | | [BIG-Bench](https://arxiv.org/abs/2206.04615) | | 35.2 | 55.1 | | ------------------------------ | ------------- | ----------- | --------- | | **Average** | | **45.0** | **56.9** | ## Ethics and Safety Ethics and safety evaluation approach and results. ### Evaluation Approach Our evaluation methods include structured evaluations and internal red-teaming testing of relevant content policies. Red-teaming was conducted by a number of different teams, each with different goals and human evaluation metrics. These models were evaluated against a number of different categories relevant to ethics and safety, including: * Text-to-Text Content Safety: Human evaluation on prompts covering safety policies including child sexual abuse and exploitation, harassment, violence and gore, and hate speech. * Text-to-Text Representational Harms: Benchmark against relevant academic datasets such as [WinoBias](https://arxiv.org/abs/1804.06876) and [BBQ Dataset](https://arxiv.org/abs/2110.08193v2). * Memorization: Automated evaluation of memorization of training data, including the risk of personally identifiable information exposure. * Large-scale harm: Tests for "dangerous capabilities," such as chemical, biological, radiological, and nuclear (CBRN) risks. ### Evaluation Results The results of ethics and safety evaluations are within acceptable thresholds for meeting [internal policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11) for categories such as child safety, content safety, representational harms, memorization, large-scale harms. 
On top of robust internal evaluations, the results of well known safety benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA are shown here. | Benchmark | Metric | 2B Params | 7B Params | | ------------------------------ | ------------- | ----------- | --------- | | [RealToxicity](https://arxiv.org/abs/2009.11462) | average | 6.86 | 7.90 | | [BOLD](https://arxiv.org/abs/2101.11718) | | 45.57 | 49.08 | | [CrowS-Pairs](https://aclanthology.org/2020.emnlp-main.154/) | top-1 | 45.82 | 51.33 | | [BBQ Ambig](https://arxiv.org/abs/2110.08193v2) | 1-shot, top-1 | 62.58 | 92.54 | | [BBQ Disambig](https://arxiv.org/abs/2110.08193v2) | top-1 | 54.62 | 71.99 | | [Winogender](https://arxiv.org/abs/1804.09301) | top-1 | 51.25 | 54.17 | | [TruthfulQA](https://arxiv.org/abs/2109.07958) | | 44.84 | 31.81 | | [Winobias 1_2](https://arxiv.org/abs/1804.06876) | | 56.12 | 59.09 | | [Winobias 2_2](https://arxiv.org/abs/1804.06876) | | 91.10 | 92.23 | | [Toxigen](https://arxiv.org/abs/2203.09509) | | 29.77 | 39.59 | | ------------------------------ | ------------- | ----------- | --------- | ## Usage and Limitations These models have certain limitations that users should be aware of. ### Intended Usage Open Large Language Models (LLMs) have a wide range of applications across various industries and domains. The following list of potential uses is not comprehensive. The purpose of this list is to provide contextual information about the possible use-cases that the model creators considered as part of model training and development. * Content Creation and Communication * Text Generation: These models can be used to generate creative text formats such as poems, scripts, code, marketing copy, and email drafts. * Chatbots and Conversational AI: Power conversational interfaces for customer service, virtual assistants, or interactive applications. * Text Summarization: Generate concise summaries of a text corpus, research papers, or reports. * Research and Education * Natural Language Processing (NLP) Research: These models can serve as a foundation for researchers to experiment with NLP techniques, develop algorithms, and contribute to the advancement of the field. * Language Learning Tools: Support interactive language learning experiences, aiding in grammar correction or providing writing practice. * Knowledge Exploration: Assist researchers in exploring large bodies of text by generating summaries or answering questions about specific topics. ### Limitations * Training Data * The quality and diversity of the training data significantly influence the model's capabilities. Biases or gaps in the training data can lead to limitations in the model's responses. * The scope of the training dataset determines the subject areas the model can handle effectively. * Context and Task Complexity * LLMs are better at tasks that can be framed with clear prompts and instructions. Open-ended or highly complex tasks might be challenging. * A model's performance can be influenced by the amount of context provided (longer context generally leads to better outputs, up to a certain point). * Language Ambiguity and Nuance * Natural language is inherently complex. LLMs might struggle to grasp subtle nuances, sarcasm, or figurative language. * Factual Accuracy * LLMs generate responses based on information they learned from their training datasets, but they are not knowledge bases. They may generate incorrect or outdated factual statements. * Common Sense * LLMs rely on statistical patterns in language. 
They might lack the ability to apply common sense reasoning in certain situations. ### Ethical Considerations and Risks The development of large language models (LLMs) raises several ethical concerns. In creating an open model, we have carefully considered the following: * Bias and Fairness * LLMs trained on large-scale, real-world text data can reflect socio-cultural biases embedded in the training material. These models underwent careful scrutiny; input data pre-processing is described and posterior evaluations are reported in this card. * Misinformation and Misuse * LLMs can be misused to generate text that is false, misleading, or harmful. * Guidelines for responsible use are provided with the model; see the [Responsible Generative AI Toolkit](http://ai.google.dev/gemma/responsible). * Transparency and Accountability: * This model card summarizes details on the models' architecture, capabilities, limitations, and evaluation processes. * A responsibly developed open model offers the opportunity to share innovation by making LLM technology accessible to developers and researchers across the AI ecosystem. Risks identified and mitigations: * Perpetuation of biases: It's encouraged to perform continuous monitoring (using evaluation metrics, human review) and the exploration of de-biasing techniques during model training, fine-tuning, and other use cases. * Generation of harmful content: Mechanisms and guidelines for content safety are essential. Developers are encouraged to exercise caution and implement appropriate content safety safeguards based on their specific product policies and application use cases. * Misuse for malicious purposes: Technical limitations and developer and end-user education can help mitigate against malicious applications of LLMs. Educational resources and reporting mechanisms for users to flag misuse are provided. Prohibited uses of Gemma models are outlined in the [Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy). * Privacy violations: Models were trained on data filtered for removal of PII (Personally Identifiable Information). Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques. ### Benefits At the time of release, this family of models provides high-performance open large language model implementations designed from the ground up for Responsible AI development compared to similarly sized models. Using the benchmark evaluation metrics described in this document, these models have been shown to provide superior performance to other, comparably-sized open model alternatives.
openart-custom/DucHaiten-AIart-SDXL_v3
openart-custom
"2024-09-30T16:59:14Z"
295,384
0
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
"2024-09-30T16:56:59Z"
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers pipeline that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
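Since the getting-started section above is empty, here is a hedged sketch based only on the pipeline class named in the repo tags (`StableDiffusionXLPipeline`); the prompt, dtype, and device are illustrative choices, not documented settings:

```python
# Hedged sketch (not from the card): loading this SDXL checkpoint with diffusers.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "openart-custom/DucHaiten-AIart-SDXL_v3", torch_dtype=torch.float16
).to("cuda")

image = pipe("a watercolor painting of a fox in a forest").images[0]
image.save("fox.png")
```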
EleutherAI/gpt-j-6b
EleutherAI
"2023-06-21T14:33:36Z"
295,284
1,441
transformers
[ "transformers", "pytorch", "tf", "jax", "gptj", "text-generation", "causal-lm", "en", "dataset:EleutherAI/pile", "arxiv:2104.09864", "arxiv:2101.00027", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2022-03-02T23:29:04Z"
--- language: - en tags: - pytorch - causal-lm license: apache-2.0 datasets: - EleutherAI/pile --- # GPT-J 6B ## Model Description GPT-J 6B is a transformer model trained using Ben Wang's [Mesh Transformer JAX](https://github.com/kingoflolz/mesh-transformer-jax/). "GPT-J" refers to the class of model, while "6B" represents the number of trainable parameters. <figure> | Hyperparameter | Value | |----------------------|------------| | \\(n_{parameters}\\) | 6053381344 | | \\(n_{layers}\\) | 28&ast; | | \\(d_{model}\\) | 4096 | | \\(d_{ff}\\) | 16384 | | \\(n_{heads}\\) | 16 | | \\(d_{head}\\) | 256 | | \\(n_{ctx}\\) | 2048 | | \\(n_{vocab}\\) | 50257/50400&dagger; (same tokenizer as GPT-2/3) | | Positional Encoding | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) | | RoPE Dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) | <figcaption><p><strong>&ast;</strong> Each layer consists of one feedforward block and one self attention block.</p> <p><strong>&dagger;</strong> Although the embedding matrix has a size of 50400, only 50257 entries are used by the GPT-2 tokenizer.</p></figcaption></figure> The model consists of 28 layers with a model dimension of 4096, and a feedforward dimension of 16384. The model dimension is split into 16 heads, each with a dimension of 256. Rotary Position Embedding (RoPE) is applied to 64 dimensions of each head. The model is trained with a tokenization vocabulary of 50257, using the same set of BPEs as GPT-2/GPT-3. ## Intended Use and Limitations GPT-J learns an inner representation of the English language that can be used to extract features useful for downstream tasks. However, the model is best at what it was pretrained for, which is generating text from a prompt. ### Out-of-scope use GPT-J-6B is **not** intended for deployment without fine-tuning, supervision, and/or moderation. It is not in itself a product and cannot be used for human-facing interactions. For example, the model may generate harmful or offensive text. Please evaluate the risks associated with your particular use case. GPT-J-6B was trained on an English-language-only dataset, and is thus **not** suitable for translation or generating text in other languages. GPT-J-6B has not been fine-tuned for downstream contexts in which language models are commonly deployed, such as writing genre prose, or commercial chatbots. This means GPT-J-6B will **not** respond to a given prompt the way a product like ChatGPT does. This is because, unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement Learning from Human Feedback (RLHF) to better “follow” human instructions. ### Limitations and Biases The core functionality of GPT-J is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting GPT-J it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon GPT-J to produce factually accurate output. GPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon use case GPT-J may produce socially unacceptable text. See [Sections 5 and 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a more detailed analysis of the biases in the Pile. 
As with all language models, it is hard to predict in advance how GPT-J will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results. ### How to use This model can be easily loaded using the `AutoModelForCausalLM` functionality: ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B") model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B") ``` ## Training data GPT-J 6B was trained on [the Pile](https://pile.eleuther.ai), a large-scale curated dataset created by [EleutherAI](https://www.eleuther.ai). ## Training procedure This model was trained for 402 billion tokens over 383,500 steps on TPU v3-256 pod. It was trained as an autoregressive language model, using cross-entropy loss to maximize the likelihood of predicting the next token correctly. ## Evaluation results <figure> | Model | Public | Training FLOPs | LAMBADA PPL ↓ | LAMBADA Acc ↑ | Winogrande ↑ | Hellaswag ↑ | PIQA ↑ | Dataset Size (GB) | |--------------------------|-------------|----------------|--- |--- |--- |--- |--- |-------------------| | Random Chance | &check; | 0 | ~a lot | ~0% | 50% | 25% | 25% | 0 | | GPT-3 Ada&ddagger; | &cross; | ----- | 9.95 | 51.6% | 52.9% | 43.4% | 70.5% | ----- | | GPT-2 1.5B | &check; | ----- | 10.63 | 51.21% | 59.4% | 50.9% | 70.8% | 40 | | GPT-Neo 1.3B&ddagger; | &check; | 3.0e21 | 7.50 | 57.2% | 55.0% | 48.9% | 71.1% | 825 | | Megatron-2.5B&ast; | &cross; | 2.4e21 | ----- | 61.7% | ----- | ----- | ----- | 174 | | GPT-Neo 2.7B&ddagger; | &check; | 6.8e21 | 5.63 | 62.2% | 56.5% | 55.8% | 73.0% | 825 | | GPT-3 1.3B&ast;&ddagger; | &cross; | 2.4e21 | 5.44 | 63.6% | 58.7% | 54.7% | 75.1% | ~800 | | GPT-3 Babbage&ddagger; | &cross; | ----- | 5.58 | 62.4% | 59.0% | 54.5% | 75.5% | ----- | | Megatron-8.3B&ast; | &cross; | 7.8e21 | ----- | 66.5% | ----- | ----- | ----- | 174 | | GPT-3 2.7B&ast;&ddagger; | &cross; | 4.8e21 | 4.60 | 67.1% | 62.3% | 62.8% | 75.6% | ~800 | | Megatron-11B&dagger; | &check; | 1.0e22 | ----- | ----- | ----- | ----- | ----- | 161 | | **GPT-J 6B&ddagger;** | **&check;** | **1.5e22** | **3.99** | **69.7%** | **65.3%** | **66.1%** | **76.5%** | **825** | | GPT-3 6.7B&ast;&ddagger; | &cross; | 1.2e22 | 4.00 | 70.3% | 64.5% | 67.4% | 78.0% | ~800 | | GPT-3 Curie&ddagger; | &cross; | ----- | 4.00 | 69.3% | 65.6% | 68.5% | 77.9% | ----- | | GPT-3 13B&ast;&ddagger; | &cross; | 2.3e22 | 3.56 | 72.5% | 67.9% | 70.9% | 78.5% | ~800 | | GPT-3 175B&ast;&ddagger; | &cross; | 3.1e23 | 3.00 | 76.2% | 70.2% | 78.9% | 81.0% | ~800 | | GPT-3 Davinci&ddagger; | &cross; | ----- | 3.0 | 75% | 72% | 78% | 80% | ----- | <figcaption><p>Models roughly sorted by performance, or by FLOPs if not available.</p> <p><strong>&ast;</strong> Evaluation numbers reported by their respective authors. All other numbers are provided by running <a href="https://github.com/EleutherAI/lm-evaluation-harness/"><code>lm-evaluation-harness</code></a> either with released weights or with API access. Due to subtle implementation differences as well as different zero shot task framing, these might not be directly comparable. 
See <a href="https://blog.eleuther.ai/gpt3-model-sizes/">this blog post</a> for more details.</p> <p><strong>†</strong> Megatron-11B provides no comparable metrics, and several implementations using the released weights do not reproduce the generation quality and evaluations. (see <a href="https://github.com/huggingface/transformers/pull/10301">1</a> <a href="https://github.com/pytorch/fairseq/issues/2358">2</a> <a href="https://github.com/pytorch/fairseq/issues/2719">3</a>) Thus, evaluation was not attempted.</p> <p><strong>‡</strong> These models have been trained with data which contains possible test set contamination. The OpenAI GPT-3 models failed to deduplicate training data for certain test sets, while the GPT-Neo models as well as this one are trained on the Pile, which has not been deduplicated against any test sets.</p></figcaption></figure> ## Citation and Related Information ### BibTeX entry To cite this model: ```bibtex @misc{gpt-j, author = {Wang, Ben and Komatsuzaki, Aran}, title = {{GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model}}, howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}}, year = 2021, month = May } ``` To cite the codebase that trained this model: ```bibtex @misc{mesh-transformer-jax, author = {Wang, Ben}, title = {{Mesh-Transformer-JAX: Model-Parallel Implementation of Transformer Language Model with JAX}}, howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}}, year = 2021, month = May } ``` If you use this model, we would love to hear about it! Reach out on [GitHub](https://github.com/kingoflolz/mesh-transformer-jax), Discord, or shoot Ben an email. ## Acknowledgements This project would not have been possible without compute generously provided by Google through the [TPU Research Cloud](https://sites.research.google/trc/), as well as the Cloud TPU team for providing early access to the [Cloud TPU VM](https://cloud.google.com/blog/products/compute/introducing-cloud-tpu-vms) Alpha. Thanks to everyone who has helped out one way or another (listed alphabetically): - [James Bradbury](https://twitter.com/jekbradbury) for valuable assistance with debugging JAX issues. - [Stella Biderman](https://www.stellabiderman.com), [Eric Hallahan](https://twitter.com/erichallahan), [Kurumuz](https://github.com/kurumuz/), and [Finetune](https://github.com/finetuneanon/) for converting the model to be compatible with the `transformers` package. - [Leo Gao](https://twitter.com/nabla_theta) for running zero shot evaluations for the baseline models for the table. - [Laurence Golding](https://github.com/researcher2/) for adding some features to the web demo. - [Aran Komatsuzaki](https://twitter.com/arankomatsuzaki) for advice with experiment design and writing the blog posts. - [Janko Prester](https://github.com/jprester/) for creating the web demo frontend.
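To complement the loading snippet in the "How to use" section above, a hedged generation sketch (the prompt and sampling settings are illustrative, not recommendations from this card):

```python
# Hedged sketch (not from the original card): sampling a continuation.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")

inputs = tokenizer("In a shocking finding,", return_tensors="pt")
outputs = model.generate(
    **inputs, do_sample=True, temperature=0.8, max_new_tokens=50
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

As the limitations above note, sampled continuations should be curated or filtered before use.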
mistralai/Mistral-7B-v0.3
mistralai
"2024-07-24T14:00:32Z"
295,122
387
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-05-22T09:56:38Z"
--- license: apache-2.0 extra_gated_description: If you want to learn more about how we process your personal data, please read our <a href="https://mistral.ai/fr/terms/">Privacy Policy</a>. --- # Model Card for Mistral-7B-v0.3 The Mistral-7B-v0.3 Large Language Model (LLM) is a Mistral-7B-v0.2 with extended vocabulary. Mistral-7B-v0.3 has the following changes compared to [Mistral-7B-v0.2](https://huggingface.co/mistralai/Mistral-7B-v0.2/edit/main/README.md) - Extended vocabulary to 32768 ## Installation It is recommended to use `mistralai/Mistral-7B-v0.3` with [mistral-inference](https://github.com/mistralai/mistral-inference). For HF transformers code snippets, please keep scrolling. ``` pip install mistral_inference ``` ## Download ```py from huggingface_hub import snapshot_download from pathlib import Path mistral_models_path = Path.home().joinpath('mistral_models', '7B-v0.3') mistral_models_path.mkdir(parents=True, exist_ok=True) snapshot_download(repo_id="mistralai/Mistral-7B-v0.3", allow_patterns=["params.json", "consolidated.safetensors", "tokenizer.model.v3"], local_dir=mistral_models_path) ``` ### Demo After installing `mistral_inference`, a `mistral-demo` CLI command should be available in your environment. ``` mistral-demo $HOME/mistral_models/7B-v0.3 ``` Should give something along the following lines: ``` This is a test of the emergency broadcast system. This is only a test. If this were a real emergency, you would be told what to do. This is a test ===================== This is another test of the new blogging software. I’m not sure if I’m going to keep it or not. I’m not sure if I’m going to keep ===================== This is a third test, mistral AI is very good at testing. 🙂 This is a third test, mistral AI is very good at testing. 🙂 This ===================== ``` ## Generate with `transformers` If you want to use Hugging Face `transformers` to generate text, you can do something like this. ```py from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "mistralai/Mistral-7B-v0.3" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id) inputs = tokenizer("Hello my name is", return_tensors="pt") outputs = model.generate(**inputs, max_new_tokens=20) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` ## Limitations The Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance. It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs. ## The Mistral AI Team Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Antoine Roux, Arthur Mensch, Audrey Herblin-Stoop, Baptiste Bout, Baudouin de Monicault, Blanche Savary, Bam4d, Caroline Feldman, Devendra Singh Chaplot, Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona, Jean-Malo Delignon, Jia Li, Justus Murke, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat, Marie Torelli, Marie-Anne Lachaux, Nicolas Schuhl, Patrick von Platen, Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao, Thibaut Lavril, Timothée Lacroix, Théophile Gervet, Thomas Wang, Valera Nemychnikova, William El Sayed, William Marshall
laion/CLIP-ViT-bigG-14-laion2B-39B-b160k
laion
"2024-01-16T21:26:28Z"
294,265
236
open_clip
[ "open_clip", "pytorch", "safetensors", "clip", "zero-shot-image-classification", "arxiv:1910.04867", "license:mit", "region:us" ]
zero-shot-image-classification
"2023-01-23T07:12:35Z"
---
license: mit
widget:
- src: >-
    https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-dog-music.png
  candidate_labels: playing music, playing sports
  example_title: Cat & Dog
library_name: open_clip
pipeline_tag: zero-shot-image-classification
---

# Model Card for CLIP ViT-bigG/14 - LAION-2B

# Table of Contents

1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Training Details](#training-details)
4. [Evaluation](#evaluation)
5. [Acknowledgements](#acknowledgements)
6. [Citation](#citation)
7. [How To Get Started With the Model](#how-to-get-started-with-the-model)

# Model Details

## Model Description

A CLIP ViT-bigG/14 model trained with the LAION-2B English subset of LAION-5B (https://laion.ai/blog/laion-5b/) using OpenCLIP (https://github.com/mlfoundations/open_clip).

Model training done by Mitchell Wortsman on the [stability.ai](https://stability.ai/) cluster.

The license for this model is MIT.

# Uses

As per the original [OpenAI CLIP model card](https://github.com/openai/CLIP/blob/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1/model-card.md), this model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such models.

The OpenAI CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis. Additionally, the LAION-5B blog (https://laion.ai/blog/laion-5b/) and upcoming paper include additional discussion as it relates specifically to the training dataset.

## Direct Use

Zero-shot image classification, image and text retrieval, among others.

## Downstream Use

Image classification and other image task fine-tuning, linear probe image classification, image generation guiding and conditioning, among others.

## Out-of-Scope Use

As per the OpenAI models, **any** deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases, such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task-specific testing, especially given the variability of CLIP's performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case currently potentially harmful.

Certain use cases which would fall under the domain of surveillance and facial recognition are always out of scope regardless of the performance of the model. This is because the use of artificial intelligence for such tasks is currently premature, given the lack of testing norms and checks to ensure its fair use.

Since the model has not been purposefully trained in or evaluated on any languages other than English, its use should be limited to English language use cases.

Beyond the above notice, the LAION-5B dataset used in training of these models has additional considerations; see below.

# Training Details

## Training Data

This model was trained with the 2 billion sample English subset of LAION-5B (https://laion.ai/blog/laion-5b/). Fine-tuning was also partially done on LAION-A, a 900M subset of LAION-2B filtered with aesthetic V2 4.5+ and phash deduplicated.

**IMPORTANT NOTE:** The motivation behind dataset creation is to democratize research and experimentation around large-scale multi-modal model training and the handling of uncurated, large-scale datasets crawled from the publicly available internet. Our recommendation is therefore to use the dataset for research purposes. Be aware that this large-scale dataset is uncurated. Keep in mind that the uncurated nature of the dataset means that collected links may lead to strongly discomforting and disturbing content for a human viewer. Therefore, please use the demo links with caution and at your own risk. It is possible to extract a "safe" subset by filtering out samples based on the safety tags (using a customized trained NSFW classifier that we built). While this strongly reduces the chance of encountering potentially harmful content when viewing, we cannot entirely exclude the possibility of harmful content still being present in safe mode, so the warning holds there as well. We think that providing the dataset openly to broad research and other interested communities will allow for transparent investigation of the benefits that come along with training large-scale models, as well as of the pitfalls and dangers that may stay unreported or unnoticed when working with closed large datasets that remain restricted to a small community. However, while we provide our dataset openly, we do not recommend using it for creating ready-to-go industrial products, as the basic research about general properties and safety of such large-scale models, which we would like to encourage with this release, is still in progress.

## Training Procedure

The training procedure will soon be discussed in a blog post on laion.ai.

# Evaluation

Evaluation done with code in the [LAION CLIP Benchmark suite](https://github.com/LAION-AI/CLIP_benchmark).

## Testing Data, Factors & Metrics

### Testing Data

The testing is performed with VTAB+ (a combination of VTAB (https://arxiv.org/abs/1910.04867) with additional robustness datasets) for classification, and COCO and Flickr for retrieval.

**TODO** - more detail

## Results

The model achieves an 80.1 zero-shot top-1 accuracy on ImageNet-1k.

An initial round of benchmarks has been performed on a wider range of datasets, and will soon be visible at https://github.com/LAION-AI/CLIP_benchmark/blob/main/benchmark/results.ipynb

**TODO** - create table for just this model's metrics.

# Acknowledgements

Acknowledging [stability.ai](https://stability.ai/) for the compute used to train this model.

# Citation

**BibTeX:**

LAION-5B
```bibtex
@inproceedings{schuhmann2022laionb,
  title={{LAION}-5B: An open large-scale dataset for training next generation image-text models},
  author={Christoph Schuhmann and Romain Beaumont and Richard Vencu and Cade W Gordon and Ross Wightman and Mehdi Cherti and Theo Coombes and Aarush Katta and Clayton Mullis and Mitchell Wortsman and Patrick Schramowski and Srivatsa R Kundurthy and Katherine Crowson and Ludwig Schmidt and Robert Kaczmarczyk and Jenia Jitsev},
  booktitle={Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
  year={2022},
  url={https://openreview.net/forum?id=M3Y74vmsMcY}
}
```

OpenAI CLIP paper
```
@inproceedings{Radford2021LearningTV,
  title={Learning Transferable Visual Models From Natural Language Supervision},
  author={Alec Radford and Jong Wook Kim and Chris Hallacy and A. Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever},
  booktitle={ICML},
  year={2021}
}
```

OpenCLIP software
```
@software{ilharco_gabriel_2021_5143773,
  author={Ilharco, Gabriel and Wortsman, Mitchell and Wightman, Ross and Gordon, Cade and Carlini, Nicholas and Taori, Rohan and Dave, Achal and Shankar, Vaishaal and Namkoong, Hongseok and Miller, John and Hajishirzi, Hannaneh and Farhadi, Ali and Schmidt, Ludwig},
  title={OpenCLIP},
  month=jul,
  year=2021,
  note={If you use this software, please cite it as below.},
  publisher={Zenodo},
  version={0.1},
  doi={10.5281/zenodo.5143773},
  url={https://doi.org/10.5281/zenodo.5143773}
}
```

Scaling OpenCLIP paper
```
@article{cherti2022reproducible,
  title={Reproducible scaling laws for contrastive language-image learning},
  author={Cherti, Mehdi and Beaumont, Romain and Wightman, Ross and Wortsman, Mitchell and Ilharco, Gabriel and Gordon, Cade and Schuhmann, Christoph and Schmidt, Ludwig and Jitsev, Jenia},
  journal={arXiv preprint arXiv:2212.07143},
  year={2022}
}
```

# How to Get Started with the Model

Use the code below to get started with the model.

** TODO ** - Hugging Face transformers, OpenCLIP, and timm getting started snippets
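In the meantime, a minimal zero-shot classification sketch with OpenCLIP (assuming the model is pulled from the Hugging Face Hub via the `hf-hub:` prefix; `"cat.png"` is a placeholder path):

```python
import torch
from PIL import Image
import open_clip

# Load model, preprocessing transform and tokenizer from the HF Hub.
model, _, preprocess = open_clip.create_model_and_transforms(
    "hf-hub:laion/CLIP-ViT-bigG-14-laion2B-39B-b160k"
)
tokenizer = open_clip.get_tokenizer("hf-hub:laion/CLIP-ViT-bigG-14-laion2B-39B-b160k")

image = preprocess(Image.open("cat.png")).unsqueeze(0)  # placeholder image path
text = tokenizer(["a cat", "a dog", "a diagram"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Normalize, then turn cosine similarities into label probabilities.
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    text_probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(text_probs)
```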
mpoyraz/wav2vec2-xls-r-300m-cv7-turkish
mpoyraz
"2022-03-23T18:28:32Z"
293,623
6
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "hf-asr-leaderboard", "mozilla-foundation/common_voice_7_0", "robust-speech-event", "tr", "dataset:mozilla-foundation/common_voice_7_0", "license:cc-by-4.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2022-03-02T23:29:05Z"
---
license: cc-by-4.0
language: tr
tags:
- automatic-speech-recognition
- hf-asr-leaderboard
- mozilla-foundation/common_voice_7_0
- robust-speech-event
- tr
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: mpoyraz/wav2vec2-xls-r-300m-cv7-turkish
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice 7
      type: mozilla-foundation/common_voice_7_0
      args: tr
    metrics:
    - name: Test WER
      type: wer
      value: 8.62
    - name: Test CER
      type: cer
      value: 2.26
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Robust Speech Event - Dev Data
      type: speech-recognition-community-v2/dev_data
      args: tr
    metrics:
    - name: Test WER
      type: wer
      value: 30.87
    - name: Test CER
      type: cer
      value: 10.69
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Robust Speech Event - Test Data
      type: speech-recognition-community-v2/eval_data
      args: tr
    metrics:
    - name: Test WER
      type: wer
      value: 32.09
---

# wav2vec2-xls-r-300m-cv7-turkish

## Model description

This ASR model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for the Turkish language.

## Training and evaluation data

The following datasets were used for finetuning:
- [Common Voice 7.0 TR](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0): the entire `validated` split except the `test` split was used for training.
- [MediaSpeech](https://www.openslr.org/108/)

## Training procedure

To support both of the datasets above, custom pre-processing and loading steps were performed, using the [wav2vec2-turkish](https://github.com/mpoyraz/wav2vec2-turkish) repo for that purpose.

### Training hyperparameters

The following hyperparameters were used for finetuning:
- learning_rate 2e-4
- num_train_epochs 10
- warmup_steps 500
- freeze_feature_extractor
- mask_time_prob 0.1
- mask_feature_prob 0.05
- feat_proj_dropout 0.05
- attention_dropout 0.05
- final_dropout 0.05
- activation_dropout 0.05
- per_device_train_batch_size 8
- per_device_eval_batch_size 8
- gradient_accumulation_steps 8

### Framework versions

- Transformers 4.16.0.dev0
- Pytorch 1.10.1
- Datasets 1.17.0
- Tokenizers 0.10.3

## Language Model

An n-gram language model was trained on Turkish Wikipedia articles using KenLM; the [ngram-lm-wiki](https://github.com/mpoyraz/ngram-lm-wiki) repo was used to generate the arpa LM and convert it into binary format.

## Evaluation Commands

Please install the [unicode_tr](https://pypi.org/project/unicode_tr/) package before running evaluation. It is used for Turkish text processing.

1. To evaluate on `mozilla-foundation/common_voice_7_0` with split `test`
```bash
python eval.py --model_id mpoyraz/wav2vec2-xls-r-300m-cv7-turkish --dataset mozilla-foundation/common_voice_7_0 --config tr --split test
```

2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id mpoyraz/wav2vec2-xls-r-300m-cv7-turkish --dataset speech-recognition-community-v2/dev_data --config tr --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```

## Evaluation results

| Dataset | WER | CER |
|---|---|---|
| Common Voice 7 TR test split | 8.62 | 2.26 |
| Speech Recognition Community dev data | 30.87 | 10.69 |
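For quick inference outside the evaluation scripts above, a minimal sketch using the 🤗 `pipeline` API (an illustrative addition; `audio.wav` is a placeholder for a 16 kHz Turkish speech file):

```python
from transformers import pipeline

# Load the fine-tuned Turkish ASR model; this uses the model's default CTC
# decoding (boosting with the n-gram LM described above requires a CTC
# decoder such as pyctcdecode).
asr = pipeline("automatic-speech-recognition", model="mpoyraz/wav2vec2-xls-r-300m-cv7-turkish")
print(asr("audio.wav")["text"])
```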
prompthero/openjourney-v4
prompthero
"2023-05-15T22:41:59Z"
292,699
1,224
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2022-12-11T17:37:55Z"
---
license: creativeml-openrail-m
tags:
- stable-diffusion
- text-to-image
pinned: true
---

# <u>Openjourney v4</u>

## Trained on +124k Midjourney v4 images, by [PromptHero](https://prompthero.com/?utm_source=huggingface&utm_medium=referral)

Trained on top of Stable Diffusion v1.5 using +124,000 images, 12,400 steps and 4 epochs (+32 training hours).

💡 [Openjourney-v4 prompts](https://prompthero.com/openjourney-prompts?version=4)

Pss... "mdjrny-v4 style" is not necessary anymore (yay!)

🎓 **Want to learn how to train Openjourney? 👉🏼 __[Join our course](https://prompthero.com/academy/dreambooth-stable-diffusion-train-fine-tune-course?utm_source=huggingface&utm_medium=referral)__ 🔥**

<img src="https://s3.us-east-1.amazonaws.com/prompthero-newsletter/Group-66.png" alt="openjourney-v4" width="50%">

# Openjourney Links

- [Lora version](https://huggingface.co/prompthero/openjourney-lora)
- [Openjourney Dreambooth](https://huggingface.co/prompthero/openjourney)
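A minimal generation sketch with 🤗 Diffusers (an illustrative addition; the prompt is a placeholder and a CUDA GPU is assumed):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load Openjourney v4; no "mdjrny-v4 style" trigger phrase is needed anymore.
pipe = StableDiffusionPipeline.from_pretrained(
    "prompthero/openjourney-v4", torch_dtype=torch.float16
).to("cuda")

image = pipe("portrait of a cosmonaut, dramatic lighting").images[0]
image.save("openjourney_v4_sample.png")
```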
ml6team/keyphrase-extraction-distilbert-inspec
ml6team
"2023-05-06T08:45:37Z"
291,912
26
transformers
[ "transformers", "pytorch", "distilbert", "token-classification", "keyphrase-extraction", "en", "dataset:midas/inspec", "arxiv:2112.08547", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2022-03-25T08:52:01Z"
---
language: en
license: mit
tags:
- keyphrase-extraction
datasets:
- midas/inspec
metrics:
- seqeval
widget:
- text: "Keyphrase extraction is a technique in text analysis where you extract the important keyphrases from a document. Thanks to these keyphrases humans can understand the content of a text very quickly and easily without reading it completely. Keyphrase extraction was first done primarily by human annotators, who read the text in detail and then wrote down the most important keyphrases. The disadvantage is that if you work with a lot of documents, this process can take a lot of time. Here is where Artificial Intelligence comes in. Currently, classical machine learning methods, that use statistical and linguistic features, are widely used for the extraction process. Now with deep learning, it is possible to capture the semantic meaning of a text even better than these classical methods. Classical methods look at the frequency, occurrence and order of words in the text, whereas these neural approaches can capture long-term semantic dependencies and context of words in a text."
  example_title: "Example 1"
- text: "In this work, we explore how to learn task specific language models aimed towards learning rich representation of keyphrases from text documents. We experiment with different masking strategies for pre-training transformer language models (LMs) in discriminative as well as generative settings. In the discriminative setting, we introduce a new pre-training objective - Keyphrase Boundary Infilling with Replacement (KBIR), showing large gains in performance (up to 9.26 points in F1) over SOTA, when LM pre-trained using KBIR is fine-tuned for the task of keyphrase extraction. In the generative setting, we introduce a new pre-training setup for BART - KeyBART, that reproduces the keyphrases related to the input text in the CatSeq format, instead of the denoised original input. This also led to gains in performance (up to 4.33 points in F1@M) over SOTA for keyphrase generation. Additionally, we also fine-tune the pre-trained language models on named entity recognition (NER), question answering (QA), relation extraction (RE), abstractive summarization and achieve comparable performance with that of the SOTA, showing that learning rich representation of keyphrases is indeed beneficial for many other fundamental NLP tasks."
  example_title: "Example 2"
model-index:
- name: DeDeckerThomas/keyphrase-extraction-distilbert-inspec
  results:
  - task:
      type: keyphrase-extraction
      name: Keyphrase Extraction
    dataset:
      type: midas/inspec
      name: inspec
    metrics:
    - type: F1 (Seqeval)
      value: 0.509
      name: F1 (Seqeval)
    - type: F1@M
      value: 0.490
      name: F1@M
---

# 🔑 Keyphrase Extraction Model: distilbert-inspec

Keyphrase extraction is a technique in text analysis where you extract the important keyphrases from a document. Thanks to these keyphrases humans can understand the content of a text very quickly and easily without reading it completely. Keyphrase extraction was first done primarily by human annotators, who read the text in detail and then wrote down the most important keyphrases. The disadvantage is that if you work with a lot of documents, this process can take a lot of time ⏳.

Here is where Artificial Intelligence 🤖 comes in. Currently, classical machine learning methods, that use statistical and linguistic features, are widely used for the extraction process. Now with deep learning, it is possible to capture the semantic meaning of a text even better than these classical methods. Classical methods look at the frequency, occurrence and order of words in the text, whereas these neural approaches can capture long-term semantic dependencies and context of words in a text.

## 📓 Model Description

This model uses [distilbert](https://huggingface.co/distilbert-base-uncased) as its base model and fine-tunes it on the [Inspec dataset](https://huggingface.co/datasets/midas/inspec). Keyphrase extraction models are transformer models fine-tuned as a token classification problem where each word in the document is classified as being part of a keyphrase or not.

| Label | Description |
| ----- | ------------------------------- |
| B-KEY | At the beginning of a keyphrase |
| I-KEY | Inside a keyphrase |
| O | Outside a keyphrase |

Kulkarni, Mayank, Debanjan Mahata, Ravneet Arora, and Rajarshi Bhowmik. "Learning Rich Representation of Keyphrases from Text." arXiv preprint arXiv:2112.08547 (2021).

Sahrawat, Dhruva, Debanjan Mahata, Haimin Zhang, Mayank Kulkarni, Agniv Sharma, Rakesh Gosangi, Amanda Stent, Yaman Kumar, Rajiv Ratn Shah, and Roger Zimmermann. "Keyphrase Extraction as Sequence Labeling Using Contextualized Embeddings." In European Conference on Information Retrieval, pp. 328-335. Springer, Cham, 2020.

## ✋ Intended Uses & Limitations

### 🛑 Limitations

* This keyphrase extraction model is very domain-specific and will perform very well on abstracts of scientific papers. It's not recommended to use this model for other domains, but you are free to test it out.
* Only works for English documents.

### ❓ How To Use

```python
from transformers import (
    TokenClassificationPipeline,
    AutoModelForTokenClassification,
    AutoTokenizer,
)
from transformers.pipelines import AggregationStrategy
import numpy as np

# Define keyphrase extraction pipeline
class KeyphraseExtractionPipeline(TokenClassificationPipeline):
    def __init__(self, model, *args, **kwargs):
        super().__init__(
            model=AutoModelForTokenClassification.from_pretrained(model),
            tokenizer=AutoTokenizer.from_pretrained(model),
            *args,
            **kwargs
        )

    def postprocess(self, all_outputs):
        results = super().postprocess(
            all_outputs=all_outputs,
            aggregation_strategy=AggregationStrategy.FIRST,
        )
        return np.unique([result.get("word").strip() for result in results])
```

```python
# Load pipeline
model_name = "ml6team/keyphrase-extraction-distilbert-inspec"
extractor = KeyphraseExtractionPipeline(model=model_name)
```

```python
# Inference
text = """
Keyphrase extraction is a technique in text analysis where you extract the
important keyphrases from a document. Thanks to these keyphrases humans can
understand the content of a text very quickly and easily without reading it
completely. Keyphrase extraction was first done primarily by human annotators,
who read the text in detail and then wrote down the most important keyphrases.
The disadvantage is that if you work with a lot of documents, this process
can take a lot of time. Here is where Artificial Intelligence comes in.
Currently, classical machine learning methods, that use statistical and
linguistic features, are widely used for the extraction process. Now with deep
learning, it is possible to capture the semantic meaning of a text even better
than these classical methods. Classical methods look at the frequency,
occurrence and order of words in the text, whereas these neural approaches can
capture long-term semantic dependencies and context of words in a text.
""".replace("\n", " ")

keyphrases = extractor(text)

print(keyphrases)
```

```
# Output
['artificial intelligence' 'classical machine learning' 'deep learning'
 'keyphrase extraction' 'linguistic features' 'statistical'
 'text analysis']
```

## 📚 Training Dataset

[Inspec](https://huggingface.co/datasets/midas/inspec) is a keyphrase extraction/generation dataset consisting of 2,000 English scientific papers from the scientific domains of Computers and Control and Information Technology, published between 1998 and 2002. The keyphrases are annotated by professional indexers or editors. You can find more information in the [paper](https://dl.acm.org/doi/10.3115/1119355.1119383).

## 👷‍♂️ Training Procedure

### Training Parameters

| Parameter | Value |
| --------- | ------|
| Learning Rate | 1e-4 |
| Epochs | 50 |
| Early Stopping Patience | 3 |

### Preprocessing

The documents in the dataset are already preprocessed into lists of words with the corresponding labels. The only thing that must be done is tokenization and the realignment of the labels so that they correspond with the right subword tokens.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Labels
label_list = ["B", "I", "O"]
lbl2idx = {"B": 0, "I": 1, "O": 2}
idx2label = {0: "B", 1: "I", 2: "O"}

# Tokenizer
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
max_length = 512

# Dataset parameters
dataset_full_name = "midas/inspec"
dataset_subset = "raw"
dataset_document_column = "document"
dataset_biotags_column = "doc_bio_tags"

def preprocess_function(all_samples_per_split):
    tokenized_samples = tokenizer.batch_encode_plus(
        all_samples_per_split[dataset_document_column],
        padding="max_length",
        truncation=True,
        is_split_into_words=True,
        max_length=max_length,
    )
    total_adjusted_labels = []
    for k in range(0, len(tokenized_samples["input_ids"])):
        prev_wid = -1
        word_ids_list = tokenized_samples.word_ids(batch_index=k)
        existing_label_ids = all_samples_per_split[dataset_biotags_column][k]
        i = -1
        adjusted_label_ids = []
        for wid in word_ids_list:
            if wid is None:
                adjusted_label_ids.append(lbl2idx["O"])
            elif wid != prev_wid:
                i = i + 1
                adjusted_label_ids.append(lbl2idx[existing_label_ids[i]])
                prev_wid = wid
            else:
                adjusted_label_ids.append(
                    lbl2idx[
                        f"{'I' if existing_label_ids[i] == 'B' else existing_label_ids[i]}"
                    ]
                )
        total_adjusted_labels.append(adjusted_label_ids)
    tokenized_samples["labels"] = total_adjusted_labels
    return tokenized_samples

# Load dataset
dataset = load_dataset(dataset_full_name, dataset_subset)

# Preprocess dataset
tokenized_dataset = dataset.map(preprocess_function, batched=True)
```

### Postprocessing (Without Pipeline Function)

If you do not use the pipeline function, you must filter out the B- and I-labeled tokens. Each B and I will then be merged into a keyphrase. Finally, you need to strip the keyphrases to make sure all unnecessary spaces have been removed. A usage sketch for these helpers follows at the end of this card.

```python
# Define post_process functions
def concat_tokens_by_tag(keyphrases):
    keyphrase_tokens = []
    for id, label in keyphrases:
        if label == "B":
            keyphrase_tokens.append([id])
        elif label == "I":
            if len(keyphrase_tokens) > 0:
                keyphrase_tokens[len(keyphrase_tokens) - 1].append(id)
    return keyphrase_tokens

def extract_keyphrases(example, predictions, tokenizer, index=0):
    keyphrases_list = [
        (id, idx2label[label])
        for id, label in zip(
            np.array(example["input_ids"]).squeeze().tolist(), predictions[index]
        )
        if idx2label[label] in ["B", "I"]
    ]

    processed_keyphrases = concat_tokens_by_tag(keyphrases_list)
    extracted_kps = tokenizer.batch_decode(
        processed_keyphrases,
        skip_special_tokens=True,
        clean_up_tokenization_spaces=True,
    )
    return np.unique([kp.strip() for kp in extracted_kps])
```

## 📝 Evaluation Results

Traditional evaluation methods are the precision, recall and F1-score @k,m, where k stands for the first k predicted keyphrases and m for the average number of predicted keyphrases. The model achieves the following results on the Inspec test set:

| Dataset | P@5 | R@5 | F1@5 | P@10 | R@10 | F1@10 | P@M | R@M | F1@M |
|:-----------------:|:----:|:----:|:----:|:----:|:----:|:-----:|:----:|:----:|:----:|
| Inspec Test Set | 0.45 | 0.40 | 0.39 | 0.33 | 0.53 | 0.38 | 0.47 | 0.57 | 0.49 |

## 🚨 Issues

Please feel free to start discussions in the Community Tab.
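As referenced in the postprocessing section above, a minimal usage sketch for those helpers (assuming `idx2label` from the preprocessing snippet, the helper functions above, and the `text` variable from the inference example are all in scope):

```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

model_name = "ml6team/keyphrase-extraction-distilbert-inspec"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)

# Tokenize a document and predict a B/I/O label for every subword token.
example = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    logits = model(**example).logits
predictions = logits.argmax(dim=-1).tolist()

# Merge B/I tokens into keyphrases with the helpers defined above.
print(extract_keyphrases(example, predictions, tokenizer))
```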
nvidia/NV-Embed-v1
nvidia
"2024-11-07T01:06:09Z"
291,648
408
sentence-transformers
[ "sentence-transformers", "safetensors", "nvembed", "mteb", "custom_code", "en", "arxiv:2210.07316", "arxiv:2405.17428", "license:cc-by-nc-4.0", "model-index", "region:us" ]
null
"2024-05-23T01:20:16Z"
--- tags: - mteb - sentence-transformers model-index: - name: NV-Embed-v1 results: - task: type: Classification dataset: type: mteb/amazon_counterfactual name: MTEB AmazonCounterfactualClassification (en) config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 95.11940298507461 - type: ap value: 79.21521293687752 - type: f1 value: 92.45575440759485 - task: type: Classification dataset: type: mteb/amazon_polarity name: MTEB AmazonPolarityClassification config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 97.143125 - type: ap value: 95.28635983806933 - type: f1 value: 97.1426073127198 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (en) config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 55.465999999999994 - type: f1 value: 52.70196166254287 - task: type: Retrieval dataset: type: mteb/arguana name: MTEB ArguAna config: default split: test revision: c22ab2a51041ffd869aaddef7af8d8215647e41a metrics: - type: map_at_1 value: 44.879000000000005 - type: map_at_10 value: 60.146 - type: map_at_100 value: 60.533 - type: map_at_1000 value: 60.533 - type: map_at_3 value: 55.725 - type: map_at_5 value: 58.477999999999994 - type: mrr_at_1 value: 0 - type: mrr_at_10 value: 0 - type: mrr_at_100 value: 0 - type: mrr_at_1000 value: 0 - type: mrr_at_3 value: 0 - type: mrr_at_5 value: 0 - type: ndcg_at_1 value: 44.879000000000005 - type: ndcg_at_10 value: 68.205 - type: ndcg_at_100 value: 69.646 - type: ndcg_at_1000 value: 69.65599999999999 - type: ndcg_at_3 value: 59.243 - type: ndcg_at_5 value: 64.214 - type: precision_at_1 value: 44.879000000000005 - type: precision_at_10 value: 9.374 - type: precision_at_100 value: 0.996 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 23.139000000000003 - type: precision_at_5 value: 16.302 - type: recall_at_1 value: 44.879000000000005 - type: recall_at_10 value: 93.741 - type: recall_at_100 value: 99.57300000000001 - type: recall_at_1000 value: 99.644 - type: recall_at_3 value: 69.417 - type: recall_at_5 value: 81.50800000000001 - task: type: Clustering dataset: type: mteb/arxiv-clustering-p2p name: MTEB ArxivClusteringP2P config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 53.76391569504432 - task: type: Clustering dataset: type: mteb/arxiv-clustering-s2s name: MTEB ArxivClusteringS2S config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 49.589284930659005 - task: type: Reranking dataset: type: mteb/askubuntudupquestions-reranking name: MTEB AskUbuntuDupQuestions config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 67.49860736554155 - type: mrr value: 80.77771182341819 - task: type: STS dataset: type: mteb/biosses-sts name: MTEB BIOSSES config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 87.87900681188576 - type: cos_sim_spearman value: 85.5905044545741 - type: euclidean_pearson value: 86.80150192033507 - type: euclidean_spearman value: 85.5905044545741 - type: manhattan_pearson value: 86.79080500635683 - type: manhattan_spearman value: 85.69351885001977 - task: type: Classification dataset: type: mteb/banking77 name: MTEB Banking77Classification config: default split: test revision: 
0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 90.33766233766235 - type: f1 value: 90.20736178753944 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-p2p name: MTEB BiorxivClusteringP2P config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 48.152262077598465 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-s2s name: MTEB BiorxivClusteringS2S config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 44.742970683037235 - task: type: Retrieval dataset: type: mteb/cqadupstack name: MTEB CQADupstackRetrieval config: default split: test revision: 46989137a86843e03a6195de44b09deda022eec7 metrics: - type: map_at_1 value: 31.825333333333326 - type: map_at_10 value: 44.019999999999996 - type: map_at_100 value: 45.37291666666667 - type: map_at_1000 value: 45.46991666666666 - type: map_at_3 value: 40.28783333333333 - type: map_at_5 value: 42.39458333333334 - type: mrr_at_1 value: 0 - type: mrr_at_10 value: 0 - type: mrr_at_100 value: 0 - type: mrr_at_1000 value: 0 - type: mrr_at_3 value: 0 - type: mrr_at_5 value: 0 - type: ndcg_at_1 value: 37.79733333333333 - type: ndcg_at_10 value: 50.50541666666667 - type: ndcg_at_100 value: 55.59125 - type: ndcg_at_1000 value: 57.06325 - type: ndcg_at_3 value: 44.595666666666666 - type: ndcg_at_5 value: 47.44875 - type: precision_at_1 value: 37.79733333333333 - type: precision_at_10 value: 9.044083333333333 - type: precision_at_100 value: 1.3728333333333336 - type: precision_at_1000 value: 0.16733333333333333 - type: precision_at_3 value: 20.842166666666667 - type: precision_at_5 value: 14.921916666666668 - type: recall_at_1 value: 31.825333333333326 - type: recall_at_10 value: 65.11916666666666 - type: recall_at_100 value: 86.72233333333335 - type: recall_at_1000 value: 96.44200000000001 - type: recall_at_3 value: 48.75691666666667 - type: recall_at_5 value: 56.07841666666666 - task: type: Retrieval dataset: type: mteb/climate-fever name: MTEB ClimateFEVER config: default split: test revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380 metrics: - type: map_at_1 value: 14.698 - type: map_at_10 value: 25.141999999999996 - type: map_at_100 value: 27.1 - type: map_at_1000 value: 27.277 - type: map_at_3 value: 21.162 - type: map_at_5 value: 23.154 - type: mrr_at_1 value: 0 - type: mrr_at_10 value: 0 - type: mrr_at_100 value: 0 - type: mrr_at_1000 value: 0 - type: mrr_at_3 value: 0 - type: mrr_at_5 value: 0 - type: ndcg_at_1 value: 32.704 - type: ndcg_at_10 value: 34.715 - type: ndcg_at_100 value: 41.839 - type: ndcg_at_1000 value: 44.82 - type: ndcg_at_3 value: 28.916999999999998 - type: ndcg_at_5 value: 30.738 - type: precision_at_1 value: 32.704 - type: precision_at_10 value: 10.795 - type: precision_at_100 value: 1.8530000000000002 - type: precision_at_1000 value: 0.241 - type: precision_at_3 value: 21.564 - type: precision_at_5 value: 16.261 - type: recall_at_1 value: 14.698 - type: recall_at_10 value: 41.260999999999996 - type: recall_at_100 value: 65.351 - type: recall_at_1000 value: 81.759 - type: recall_at_3 value: 26.545999999999996 - type: recall_at_5 value: 32.416 - task: type: Retrieval dataset: type: mteb/dbpedia name: MTEB DBPedia config: default split: test revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659 metrics: - type: map_at_1 value: 9.959 - type: map_at_10 value: 23.104 - type: map_at_100 value: 33.202 - type: map_at_1000 value: 35.061 - type: map_at_3 value: 15.911 - type: 
map_at_5 value: 18.796 - type: mrr_at_1 value: 0 - type: mrr_at_10 value: 0 - type: mrr_at_100 value: 0 - type: mrr_at_1000 value: 0 - type: mrr_at_3 value: 0 - type: mrr_at_5 value: 0 - type: ndcg_at_1 value: 63.5 - type: ndcg_at_10 value: 48.29 - type: ndcg_at_100 value: 52.949999999999996 - type: ndcg_at_1000 value: 60.20100000000001 - type: ndcg_at_3 value: 52.92 - type: ndcg_at_5 value: 50.375 - type: precision_at_1 value: 73.75 - type: precision_at_10 value: 38.65 - type: precision_at_100 value: 12.008000000000001 - type: precision_at_1000 value: 2.409 - type: precision_at_3 value: 56.083000000000006 - type: precision_at_5 value: 48.449999999999996 - type: recall_at_1 value: 9.959 - type: recall_at_10 value: 28.666999999999998 - type: recall_at_100 value: 59.319 - type: recall_at_1000 value: 81.973 - type: recall_at_3 value: 17.219 - type: recall_at_5 value: 21.343999999999998 - task: type: Classification dataset: type: mteb/emotion name: MTEB EmotionClassification config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 91.705 - type: f1 value: 87.98464515154814 - task: type: Retrieval dataset: type: mteb/fever name: MTEB FEVER config: default split: test revision: bea83ef9e8fb933d90a2f1d5515737465d613e12 metrics: - type: map_at_1 value: 74.297 - type: map_at_10 value: 83.931 - type: map_at_100 value: 84.152 - type: map_at_1000 value: 84.164 - type: map_at_3 value: 82.708 - type: map_at_5 value: 83.536 - type: mrr_at_1 value: 0 - type: mrr_at_10 value: 0 - type: mrr_at_100 value: 0 - type: mrr_at_1000 value: 0 - type: mrr_at_3 value: 0 - type: mrr_at_5 value: 0 - type: ndcg_at_1 value: 80.048 - type: ndcg_at_10 value: 87.77000000000001 - type: ndcg_at_100 value: 88.467 - type: ndcg_at_1000 value: 88.673 - type: ndcg_at_3 value: 86.003 - type: ndcg_at_5 value: 87.115 - type: precision_at_1 value: 80.048 - type: precision_at_10 value: 10.711 - type: precision_at_100 value: 1.1320000000000001 - type: precision_at_1000 value: 0.117 - type: precision_at_3 value: 33.248 - type: precision_at_5 value: 20.744 - type: recall_at_1 value: 74.297 - type: recall_at_10 value: 95.402 - type: recall_at_100 value: 97.97 - type: recall_at_1000 value: 99.235 - type: recall_at_3 value: 90.783 - type: recall_at_5 value: 93.55499999999999 - task: type: Retrieval dataset: type: mteb/fiqa name: MTEB FiQA2018 config: default split: test revision: 27a168819829fe9bcd655c2df245fb19452e8e06 metrics: - type: map_at_1 value: 32.986 - type: map_at_10 value: 55.173 - type: map_at_100 value: 57.077 - type: map_at_1000 value: 57.176 - type: map_at_3 value: 48.182 - type: map_at_5 value: 52.303999999999995 - type: mrr_at_1 value: 0 - type: mrr_at_10 value: 0 - type: mrr_at_100 value: 0 - type: mrr_at_1000 value: 0 - type: mrr_at_3 value: 0 - type: mrr_at_5 value: 0 - type: ndcg_at_1 value: 62.037 - type: ndcg_at_10 value: 63.096 - type: ndcg_at_100 value: 68.42200000000001 - type: ndcg_at_1000 value: 69.811 - type: ndcg_at_3 value: 58.702 - type: ndcg_at_5 value: 60.20100000000001 - type: precision_at_1 value: 62.037 - type: precision_at_10 value: 17.269000000000002 - type: precision_at_100 value: 2.309 - type: precision_at_1000 value: 0.256 - type: precision_at_3 value: 38.992 - type: precision_at_5 value: 28.610999999999997 - type: recall_at_1 value: 32.986 - type: recall_at_10 value: 70.61800000000001 - type: recall_at_100 value: 89.548 - type: recall_at_1000 value: 97.548 - type: recall_at_3 value: 53.400000000000006 - type: recall_at_5 value: 61.29599999999999 
- task: type: Retrieval dataset: type: mteb/hotpotqa name: MTEB HotpotQA config: default split: test revision: ab518f4d6fcca38d87c25209f94beba119d02014 metrics: - type: map_at_1 value: 41.357 - type: map_at_10 value: 72.91499999999999 - type: map_at_100 value: 73.64699999999999 - type: map_at_1000 value: 73.67899999999999 - type: map_at_3 value: 69.113 - type: map_at_5 value: 71.68299999999999 - type: mrr_at_1 value: 0 - type: mrr_at_10 value: 0 - type: mrr_at_100 value: 0 - type: mrr_at_1000 value: 0 - type: mrr_at_3 value: 0 - type: mrr_at_5 value: 0 - type: ndcg_at_1 value: 82.714 - type: ndcg_at_10 value: 79.92 - type: ndcg_at_100 value: 82.232 - type: ndcg_at_1000 value: 82.816 - type: ndcg_at_3 value: 74.875 - type: ndcg_at_5 value: 77.969 - type: precision_at_1 value: 82.714 - type: precision_at_10 value: 17.037 - type: precision_at_100 value: 1.879 - type: precision_at_1000 value: 0.196 - type: precision_at_3 value: 49.471 - type: precision_at_5 value: 32.124 - type: recall_at_1 value: 41.357 - type: recall_at_10 value: 85.18599999999999 - type: recall_at_100 value: 93.964 - type: recall_at_1000 value: 97.765 - type: recall_at_3 value: 74.207 - type: recall_at_5 value: 80.31099999999999 - task: type: Classification dataset: type: mteb/imdb name: MTEB ImdbClassification config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 97.05799999999998 - type: ap value: 95.51324940484382 - type: f1 value: 97.05788617110184 - task: type: Retrieval dataset: type: mteb/msmarco name: MTEB MSMARCO config: default split: test revision: c5a29a104738b98a9e76336939199e264163d4a0 metrics: - type: map_at_1 value: 25.608999999999998 - type: map_at_10 value: 39.098 - type: map_at_100 value: 0 - type: map_at_1000 value: 0 - type: map_at_3 value: 0 - type: map_at_5 value: 37.383 - type: mrr_at_1 value: 0 - type: mrr_at_10 value: 0 - type: mrr_at_100 value: 0 - type: mrr_at_1000 value: 0 - type: mrr_at_3 value: 0 - type: mrr_at_5 value: 0 - type: ndcg_at_1 value: 26.404 - type: ndcg_at_10 value: 46.493 - type: ndcg_at_100 value: 0 - type: ndcg_at_1000 value: 0 - type: ndcg_at_3 value: 0 - type: ndcg_at_5 value: 42.459 - type: precision_at_1 value: 26.404 - type: precision_at_10 value: 7.249 - type: precision_at_100 value: 0 - type: precision_at_1000 value: 0 - type: precision_at_3 value: 0 - type: precision_at_5 value: 11.874 - type: recall_at_1 value: 25.608999999999998 - type: recall_at_10 value: 69.16799999999999 - type: recall_at_100 value: 0 - type: recall_at_1000 value: 0 - type: recall_at_3 value: 0 - type: recall_at_5 value: 56.962 - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (en) config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 96.50706794345645 - type: f1 value: 96.3983656000426 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (en) config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 89.77428180574556 - type: f1 value: 70.47378359921777 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (en) config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 80.07061197041023 - type: f1 value: 77.8633288994029 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (en) config: en 
split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 81.74176193678547 - type: f1 value: 79.8943810025071 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-p2p name: MTEB MedrxivClusteringP2P config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 39.239199736486334 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-s2s name: MTEB MedrxivClusteringS2S config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 36.98167653792483 - task: type: Reranking dataset: type: mteb/mind_small name: MTEB MindSmallReranking config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 30.815595271130718 - type: mrr value: 31.892823243368795 - task: type: Retrieval dataset: type: mteb/nfcorpus name: MTEB NFCorpus config: default split: test revision: ec0fa4fe99da2ff19ca1214b7966684033a58814 metrics: - type: map_at_1 value: 6.214 - type: map_at_10 value: 14.393 - type: map_at_100 value: 18.163999999999998 - type: map_at_1000 value: 19.753999999999998 - type: map_at_3 value: 10.737 - type: map_at_5 value: 12.325 - type: mrr_at_1 value: 0 - type: mrr_at_10 value: 0 - type: mrr_at_100 value: 0 - type: mrr_at_1000 value: 0 - type: mrr_at_3 value: 0 - type: mrr_at_5 value: 0 - type: ndcg_at_1 value: 48.297000000000004 - type: ndcg_at_10 value: 38.035000000000004 - type: ndcg_at_100 value: 34.772 - type: ndcg_at_1000 value: 43.631 - type: ndcg_at_3 value: 44.252 - type: ndcg_at_5 value: 41.307 - type: precision_at_1 value: 50.15500000000001 - type: precision_at_10 value: 27.647 - type: precision_at_100 value: 8.824 - type: precision_at_1000 value: 2.169 - type: precision_at_3 value: 40.97 - type: precision_at_5 value: 35.17 - type: recall_at_1 value: 6.214 - type: recall_at_10 value: 18.566 - type: recall_at_100 value: 34.411 - type: recall_at_1000 value: 67.331 - type: recall_at_3 value: 12.277000000000001 - type: recall_at_5 value: 14.734 - task: type: Retrieval dataset: type: mteb/nq name: MTEB NQ config: default split: test revision: b774495ed302d8c44a3a7ea25c90dbce03968f31 metrics: - type: map_at_1 value: 47.11 - type: map_at_10 value: 64.404 - type: map_at_100 value: 65.005 - type: map_at_1000 value: 65.01400000000001 - type: map_at_3 value: 60.831 - type: map_at_5 value: 63.181 - type: mrr_at_1 value: 0 - type: mrr_at_10 value: 0 - type: mrr_at_100 value: 0 - type: mrr_at_1000 value: 0 - type: mrr_at_3 value: 0 - type: mrr_at_5 value: 0 - type: ndcg_at_1 value: 52.983999999999995 - type: ndcg_at_10 value: 71.219 - type: ndcg_at_100 value: 73.449 - type: ndcg_at_1000 value: 73.629 - type: ndcg_at_3 value: 65.07 - type: ndcg_at_5 value: 68.715 - type: precision_at_1 value: 52.983999999999995 - type: precision_at_10 value: 10.756 - type: precision_at_100 value: 1.198 - type: precision_at_1000 value: 0.121 - type: precision_at_3 value: 28.977999999999998 - type: precision_at_5 value: 19.583000000000002 - type: recall_at_1 value: 47.11 - type: recall_at_10 value: 89.216 - type: recall_at_100 value: 98.44500000000001 - type: recall_at_1000 value: 99.744 - type: recall_at_3 value: 73.851 - type: recall_at_5 value: 82.126 - task: type: Retrieval dataset: type: mteb/quora name: MTEB QuoraRetrieval config: default split: test revision: e4e08e0b7dbe3c8700f0daef558ff32256715259 metrics: - type: map_at_1 value: 71.641 - type: map_at_10 value: 85.687 - type: map_at_100 value: 86.304 - type: 
map_at_1000 value: 86.318 - type: map_at_3 value: 82.811 - type: map_at_5 value: 84.641 - type: mrr_at_1 value: 0 - type: mrr_at_10 value: 0 - type: mrr_at_100 value: 0 - type: mrr_at_1000 value: 0 - type: mrr_at_3 value: 0 - type: mrr_at_5 value: 0 - type: ndcg_at_1 value: 82.48 - type: ndcg_at_10 value: 89.212 - type: ndcg_at_100 value: 90.321 - type: ndcg_at_1000 value: 90.405 - type: ndcg_at_3 value: 86.573 - type: ndcg_at_5 value: 88.046 - type: precision_at_1 value: 82.48 - type: precision_at_10 value: 13.522 - type: precision_at_100 value: 1.536 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 37.95 - type: precision_at_5 value: 24.932000000000002 - type: recall_at_1 value: 71.641 - type: recall_at_10 value: 95.91499999999999 - type: recall_at_100 value: 99.63300000000001 - type: recall_at_1000 value: 99.994 - type: recall_at_3 value: 88.248 - type: recall_at_5 value: 92.428 - task: type: Clustering dataset: type: mteb/reddit-clustering name: MTEB RedditClustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 63.19631707795757 - task: type: Clustering dataset: type: mteb/reddit-clustering-p2p name: MTEB RedditClusteringP2P config: default split: test revision: 385e3cb46b4cfa89021f56c4380204149d0efe33 metrics: - type: v_measure value: 68.01353074322002 - task: type: Retrieval dataset: type: mteb/scidocs name: MTEB SCIDOCS config: default split: test revision: f8c2fcf00f625baaa80f62ec5bd9e1fff3b8ae88 metrics: - type: map_at_1 value: 4.67 - type: map_at_10 value: 11.991999999999999 - type: map_at_100 value: 14.263 - type: map_at_1000 value: 14.59 - type: map_at_3 value: 8.468 - type: map_at_5 value: 10.346 - type: mrr_at_1 value: 0 - type: mrr_at_10 value: 0 - type: mrr_at_100 value: 0 - type: mrr_at_1000 value: 0 - type: mrr_at_3 value: 0 - type: mrr_at_5 value: 0 - type: ndcg_at_1 value: 23.1 - type: ndcg_at_10 value: 20.19 - type: ndcg_at_100 value: 28.792 - type: ndcg_at_1000 value: 34.406 - type: ndcg_at_3 value: 19.139 - type: ndcg_at_5 value: 16.916 - type: precision_at_1 value: 23.1 - type: precision_at_10 value: 10.47 - type: precision_at_100 value: 2.2849999999999997 - type: precision_at_1000 value: 0.363 - type: precision_at_3 value: 17.9 - type: precision_at_5 value: 14.979999999999999 - type: recall_at_1 value: 4.67 - type: recall_at_10 value: 21.21 - type: recall_at_100 value: 46.36 - type: recall_at_1000 value: 73.72999999999999 - type: recall_at_3 value: 10.865 - type: recall_at_5 value: 15.185 - task: type: STS dataset: type: mteb/sickr-sts name: MTEB SICK-R config: default split: test revision: 20a6d6f312dd54037fe07a32d58e5e168867909d metrics: - type: cos_sim_pearson value: 84.31392081916142 - type: cos_sim_spearman value: 82.80375234068289 - type: euclidean_pearson value: 81.4159066418654 - type: euclidean_spearman value: 82.80377112831907 - type: manhattan_pearson value: 81.48376861134983 - type: manhattan_spearman value: 82.86696725667119 - task: type: STS dataset: type: mteb/sts12-sts name: MTEB STS12 config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 84.1940844467158 - type: cos_sim_spearman value: 76.22474792649982 - type: euclidean_pearson value: 79.87714243582901 - type: euclidean_spearman value: 76.22462054296349 - type: manhattan_pearson value: 80.19242023327877 - type: manhattan_spearman value: 76.53202564089719 - task: type: STS dataset: type: mteb/sts13-sts name: MTEB STS13 config: default split: test 
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 85.58028303401805 - type: cos_sim_spearman value: 86.30355131725051 - type: euclidean_pearson value: 85.9027489087145 - type: euclidean_spearman value: 86.30352515906158 - type: manhattan_pearson value: 85.74953930990678 - type: manhattan_spearman value: 86.21878393891001 - task: type: STS dataset: type: mteb/sts14-sts name: MTEB STS14 config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 82.92370135244734 - type: cos_sim_spearman value: 82.09196894621044 - type: euclidean_pearson value: 81.83198023906334 - type: euclidean_spearman value: 82.09196482328333 - type: manhattan_pearson value: 81.8951479497964 - type: manhattan_spearman value: 82.2392819738236 - task: type: STS dataset: type: mteb/sts15-sts name: MTEB STS15 config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 87.05662816919057 - type: cos_sim_spearman value: 87.24083005603993 - type: euclidean_pearson value: 86.54673655650183 - type: euclidean_spearman value: 87.24083428218053 - type: manhattan_pearson value: 86.51248710513431 - type: manhattan_spearman value: 87.24796986335883 - task: type: STS dataset: type: mteb/sts16-sts name: MTEB STS16 config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 84.06330254316376 - type: cos_sim_spearman value: 84.76788840323285 - type: euclidean_pearson value: 84.15438606134029 - type: euclidean_spearman value: 84.76788840323285 - type: manhattan_pearson value: 83.97986968570088 - type: manhattan_spearman value: 84.52468572953663 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (en-en) config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 88.08627867173213 - type: cos_sim_spearman value: 87.41531216247836 - type: euclidean_pearson value: 87.92912483282956 - type: euclidean_spearman value: 87.41531216247836 - type: manhattan_pearson value: 87.85418528366228 - type: manhattan_spearman value: 87.32655499883539 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (en) config: en split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 70.74143864859911 - type: cos_sim_spearman value: 69.84863549051433 - type: euclidean_pearson value: 71.07346533903932 - type: euclidean_spearman value: 69.84863549051433 - type: manhattan_pearson value: 71.32285810342451 - type: manhattan_spearman value: 70.13063960824287 - task: type: STS dataset: type: mteb/stsbenchmark-sts name: MTEB STSBenchmark config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 86.05702492574339 - type: cos_sim_spearman value: 86.13895001731495 - type: euclidean_pearson value: 85.86694514265486 - type: euclidean_spearman value: 86.13895001731495 - type: manhattan_pearson value: 85.96382530570494 - type: manhattan_spearman value: 86.30950247235928 - task: type: Reranking dataset: type: mteb/scidocs-reranking name: MTEB SciDocsRR config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 87.26225076335467 - type: mrr value: 96.60696329813977 - task: type: Retrieval dataset: type: mteb/scifact name: MTEB SciFact config: default split: test revision: 0228b52cf27578f30900b9e5271d331663a030d7 
    metrics:
    - type: map_at_1
      value: 64.494
    - type: map_at_10
      value: 74.102
    - type: map_at_100
      value: 74.571
    - type: map_at_1000
      value: 74.58
    - type: map_at_3
      value: 71.111
    - type: map_at_5
      value: 73.184
    - type: mrr_at_1
      value: 0
    - type: mrr_at_10
      value: 0
    - type: mrr_at_100
      value: 0
    - type: mrr_at_1000
      value: 0
    - type: mrr_at_3
      value: 0
    - type: mrr_at_5
      value: 0
    - type: ndcg_at_1
      value: 67.667
    - type: ndcg_at_10
      value: 78.427
    - type: ndcg_at_100
      value: 80.167
    - type: ndcg_at_1000
      value: 80.41
    - type: ndcg_at_3
      value: 73.804
    - type: ndcg_at_5
      value: 76.486
    - type: precision_at_1
      value: 67.667
    - type: precision_at_10
      value: 10.167
    - type: precision_at_100
      value: 1.107
    - type: precision_at_1000
      value: 0.11299999999999999
    - type: precision_at_3
      value: 28.222
    - type: precision_at_5
      value: 18.867
    - type: recall_at_1
      value: 64.494
    - type: recall_at_10
      value: 90.422
    - type: recall_at_100
      value: 97.667
    - type: recall_at_1000
      value: 99.667
    - type: recall_at_3
      value: 78.278
    - type: recall_at_5
      value: 84.828
  - task:
      type: PairClassification
    dataset:
      type: mteb/sprintduplicatequestions-pairclassification
      name: MTEB SprintDuplicateQuestions
      config: default
      split: test
      revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
    metrics:
    - type: cos_sim_accuracy
      value: 99.82772277227723
    - type: cos_sim_ap
      value: 95.93881941923254
    - type: cos_sim_f1
      value: 91.12244897959184
    - type: cos_sim_precision
      value: 93.02083333333333
    - type: cos_sim_recall
      value: 89.3
    - type: dot_accuracy
      value: 99.82772277227723
    - type: dot_ap
      value: 95.93886287716076
    - type: dot_f1
      value: 91.12244897959184
    - type: dot_precision
      value: 93.02083333333333
    - type: dot_recall
      value: 89.3
    - type: euclidean_accuracy
      value: 99.82772277227723
    - type: euclidean_ap
      value: 95.93881941923253
    - type: euclidean_f1
      value: 91.12244897959184
    - type: euclidean_precision
      value: 93.02083333333333
    - type: euclidean_recall
      value: 89.3
    - type: manhattan_accuracy
      value: 99.83366336633664
    - type: manhattan_ap
      value: 96.07286531485964
    - type: manhattan_f1
      value: 91.34912461380021
    - type: manhattan_precision
      value: 94.16135881104034
    - type: manhattan_recall
      value: 88.7
    - type: max_accuracy
      value: 99.83366336633664
    - type: max_ap
      value: 96.07286531485964
    - type: max_f1
      value: 91.34912461380021
  - task:
      type: Clustering
    dataset:
      type: mteb/stackexchange-clustering
      name: MTEB StackExchangeClustering
      config: default
      split: test
      revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
    metrics:
    - type: v_measure
      value: 74.98877944689897
  - task:
      type: Clustering
    dataset:
      type: mteb/stackexchange-clustering-p2p
      name: MTEB StackExchangeClusteringP2P
      config: default
      split: test
      revision: 815ca46b2622cec33ccafc3735d572c266efdb44
    metrics:
    - type: v_measure
      value: 42.0365286267706
  - task:
      type: Reranking
    dataset:
      type: mteb/stackoverflowdupquestions-reranking
      name: MTEB StackOverflowDupQuestions
      config: default
      split: test
      revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
    metrics:
    - type: map
      value: 56.5797777961647
    - type: mrr
      value: 57.57701754944402
  - task:
      type: Summarization
    dataset:
      type: mteb/summeval
      name: MTEB SummEval
      config: default
      split: test
      revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
    metrics:
    - type: cos_sim_pearson
      value: 30.673216240991756
    - type: cos_sim_spearman
      value: 31.198648165051225
    - type: dot_pearson
      value: 30.67321511262982
    - type: dot_spearman
      value: 31.198648165051225
  - task:
      type: Retrieval
    dataset:
      type: mteb/trec-covid
      name: MTEB TRECCOVID
      config: default
      split: test
      revision: bb9466bac8153a0349341eb1b22e06409e78ef4e
    metrics:
    - type: map_at_1
      value: 0.23500000000000001
    - type: map_at_10
      value: 2.274
    - type: map_at_100
      value: 14.002
    - type: map_at_1000
      value: 34.443
    - type: map_at_3
      value: 0.705
    - type: map_at_5
      value: 1.162
    - type: mrr_at_1
      value: 0
    - type: mrr_at_10
      value: 0
    - type: mrr_at_100
      value: 0
    - type: mrr_at_1000
      value: 0
    - type: mrr_at_3
      value: 0
    - type: mrr_at_5
      value: 0
    - type: ndcg_at_1
      value: 88
    - type: ndcg_at_10
      value: 85.883
    - type: ndcg_at_100
      value: 67.343
    - type: ndcg_at_1000
      value: 59.999
    - type: ndcg_at_3
      value: 87.70400000000001
    - type: ndcg_at_5
      value: 85.437
    - type: precision_at_1
      value: 92
    - type: precision_at_10
      value: 91.2
    - type: precision_at_100
      value: 69.19999999999999
    - type: precision_at_1000
      value: 26.6
    - type: precision_at_3
      value: 92.667
    - type: precision_at_5
      value: 90.8
    - type: recall_at_1
      value: 0.23500000000000001
    - type: recall_at_10
      value: 2.409
    - type: recall_at_100
      value: 16.706
    - type: recall_at_1000
      value: 56.396
    - type: recall_at_3
      value: 0.734
    - type: recall_at_5
      value: 1.213
  - task:
      type: Retrieval
    dataset:
      type: mteb/touche2020
      name: MTEB Touche2020
      config: default
      split: test
      revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
    metrics:
    - type: map_at_1
      value: 2.4819999999999998
    - type: map_at_10
      value: 10.985
    - type: map_at_100
      value: 17.943
    - type: map_at_1000
      value: 19.591
    - type: map_at_3
      value: 5.86
    - type: map_at_5
      value: 8.397
    - type: mrr_at_1
      value: 0
    - type: mrr_at_10
      value: 0
    - type: mrr_at_100
      value: 0
    - type: mrr_at_1000
      value: 0
    - type: mrr_at_3
      value: 0
    - type: mrr_at_5
      value: 0
    - type: ndcg_at_1
      value: 37.755
    - type: ndcg_at_10
      value: 28.383000000000003
    - type: ndcg_at_100
      value: 40.603
    - type: ndcg_at_1000
      value: 51.469
    - type: ndcg_at_3
      value: 32.562000000000005
    - type: ndcg_at_5
      value: 31.532
    - type: precision_at_1
      value: 38.775999999999996
    - type: precision_at_10
      value: 24.898
    - type: precision_at_100
      value: 8.429
    - type: precision_at_1000
      value: 1.582
    - type: precision_at_3
      value: 31.973000000000003
    - type: precision_at_5
      value: 31.019999999999996
    - type: recall_at_1
      value: 2.4819999999999998
    - type: recall_at_10
      value: 17.079
    - type: recall_at_100
      value: 51.406
    - type: recall_at_1000
      value: 84.456
    - type: recall_at_3
      value: 6.802
    - type: recall_at_5
      value: 10.856
  - task:
      type: Classification
    dataset:
      type: mteb/toxic_conversations_50k
      name: MTEB ToxicConversationsClassification
      config: default
      split: test
      revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de
    metrics:
    - type: accuracy
      value: 92.5984
    - type: ap
      value: 41.969971606260906
    - type: f1
      value: 78.95995145145926
  - task:
      type: Classification
    dataset:
      type: mteb/tweet_sentiment_extraction
      name: MTEB TweetSentimentExtractionClassification
      config: default
      split: test
      revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
    metrics:
    - type: accuracy
      value: 80.63950198075835
    - type: f1
      value: 80.93345710055597
  - task:
      type: Clustering
    dataset:
      type: mteb/twentynewsgroups-clustering
      name: MTEB TwentyNewsgroupsClustering
      config: default
      split: test
      revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
    metrics:
    - type: v_measure
      value: 60.13491858535076
  - task:
      type: PairClassification
    dataset:
      type: mteb/twittersemeval2015-pairclassification
      name: MTEB TwitterSemEval2015
      config: default
      split: test
      revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
    metrics:
    - type: cos_sim_accuracy
      value: 87.42325803182929
    - type: cos_sim_ap
      value: 78.72789856051176
    - type: cos_sim_f1
      value: 71.83879093198993
    - type: cos_sim_precision
      value: 68.72289156626506
    - type: cos_sim_recall
      value: 75.25065963060686
    - type: dot_accuracy
      value: 87.42325803182929
    - type: dot_ap
      value: 78.72789755269454
    - type: dot_f1
      value: 71.83879093198993
    - type: dot_precision
      value: 68.72289156626506
    - type: dot_recall
      value: 75.25065963060686
    - type: euclidean_accuracy
      value: 87.42325803182929
    - type: euclidean_ap
      value: 78.7278973892869
    - type: euclidean_f1
      value: 71.83879093198993
    - type: euclidean_precision
      value: 68.72289156626506
    - type: euclidean_recall
      value: 75.25065963060686
    - type: manhattan_accuracy
      value: 87.59015318590929
    - type: manhattan_ap
      value: 78.99631410090865
    - type: manhattan_f1
      value: 72.11323565929972
    - type: manhattan_precision
      value: 68.10506566604127
    - type: manhattan_recall
      value: 76.62269129287598
    - type: max_accuracy
      value: 87.59015318590929
    - type: max_ap
      value: 78.99631410090865
    - type: max_f1
      value: 72.11323565929972
  - task:
      type: PairClassification
    dataset:
      type: mteb/twitterurlcorpus-pairclassification
      name: MTEB TwitterURLCorpus
      config: default
      split: test
      revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
    metrics:
    - type: cos_sim_accuracy
      value: 89.15473279776458
    - type: cos_sim_ap
      value: 86.05463278065247
    - type: cos_sim_f1
      value: 78.63797449855686
    - type: cos_sim_precision
      value: 74.82444552596816
    - type: cos_sim_recall
      value: 82.86110255620572
    - type: dot_accuracy
      value: 89.15473279776458
    - type: dot_ap
      value: 86.05463366261054
    - type: dot_f1
      value: 78.63797449855686
    - type: dot_precision
      value: 74.82444552596816
    - type: dot_recall
      value: 82.86110255620572
    - type: euclidean_accuracy
      value: 89.15473279776458
    - type: euclidean_ap
      value: 86.05463195314907
    - type: euclidean_f1
      value: 78.63797449855686
    - type: euclidean_precision
      value: 74.82444552596816
    - type: euclidean_recall
      value: 82.86110255620572
    - type: manhattan_accuracy
      value: 89.15861373074087
    - type: manhattan_ap
      value: 86.08743411620402
    - type: manhattan_f1
      value: 78.70125023325248
    - type: manhattan_precision
      value: 76.36706018686174
    - type: manhattan_recall
      value: 81.18263012011087
    - type: max_accuracy
      value: 89.15861373074087
    - type: max_ap
      value: 86.08743411620402
    - type: max_f1
      value: 78.70125023325248
language:
- en
license: cc-by-nc-4.0
---

## Introduction

We introduce NV-Embed, a generalist embedding model that ranks No. 1 on the Massive Text Embedding Benchmark ([MTEB benchmark](https://arxiv.org/abs/2210.07316)) (as of May 24, 2024), with 56 tasks encompassing retrieval, reranking, classification, clustering, and semantic textual similarity. Notably, our model also achieves the highest score of 59.36 on the 15 retrieval tasks within this benchmark.

NV-Embed presents several new designs, including having the LLM attend to latent vectors for better pooled embedding output, and a two-stage instruction tuning method to enhance the accuracy of both retrieval and non-retrieval tasks.

For more technical details, refer to our paper: [NV-Embed: Improved Techniques for Training LLMs as Generalist Embedding Models](https://arxiv.org/pdf/2405.17428). For more benchmark results (other than MTEB), please see the [AIR-Bench](https://huggingface.co/spaces/AIR-Bench/leaderboard) for QA (English only) and Long-Doc.

## Model Details
- Base Decoder-only LLM: [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
- Pooling Type: Latent-Attention
- Embedding Dimension: 4096

## How to use

Here is an example of how to encode queries and passages using HuggingFace Transformers and Sentence-Transformers. Please find the required package versions [here](https://huggingface.co/nvidia/NV-Embed-v1#2-required-packages).
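Before the end-to-end examples, the latent-attention pooling listed in the model details can be pictured with a short sketch. This is a simplified illustration of the design described in the paper (hidden states cross-attend to a trainable latent array, followed by an MLP and mean pooling); the latent count, head count, and layer layout are our assumptions, not the released implementation:

```python
import torch
import torch.nn as nn

class LatentAttentionPooling(nn.Module):
    """Pool per-token hidden states by cross-attending to trainable latents.

    Simplified sketch: shapes follow the paper's description at a high level;
    num_latents and num_heads are illustrative choices.
    """
    def __init__(self, hidden_dim=4096, num_latents=512, num_heads=8):
        super().__init__()
        self.latents = nn.Parameter(torch.randn(num_latents, hidden_dim))
        self.cross_attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.GELU(),
                                 nn.Linear(hidden_dim, hidden_dim))

    def forward(self, token_states):  # token_states: (batch, seq_len, hidden_dim)
        batch = token_states.size(0)
        latents = self.latents.unsqueeze(0).expand(batch, -1, -1)
        # Token states act as queries attending to the latent array
        attended, _ = self.cross_attn(token_states, latents, latents)
        # MLP, then mean-pool over the sequence to get one embedding per input
        return self.mlp(attended).mean(dim=1)  # (batch, hidden_dim)
```

The point of the sketch is that the pooled embedding is produced by learned attention rather than plain mean pooling or last-token pooling over the decoder's hidden states.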
### Usage (HuggingFace Transformers)

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

# Each query needs to be accompanied by a corresponding instruction describing the task.
task_name_to_instruct = {"example": "Given a question, retrieve passages that answer the question",}

query_prefix = "Instruct: "+task_name_to_instruct["example"]+"\nQuery: "
queries = [
    'are judo throws allowed in wrestling?',
    'how to become a radiology technician in michigan?'
    ]

# No instruction needed for retrieval passages
passage_prefix = ""
passages = [
    "Since you're reading this, you are probably someone from a judo background or someone who is just wondering how judo techniques can be applied under wrestling rules. So without further ado, let's get to the question. Are Judo throws allowed in wrestling? Yes, judo throws are allowed in freestyle and folkstyle wrestling. You only need to be careful to follow the slam rules when executing judo throws. In wrestling, a slam is lifting and returning an opponent to the mat with unnecessary force.",
    "Below are the basic steps to becoming a radiologic technologist in Michigan:Earn a high school diploma. As with most careers in health care, a high school education is the first step to finding entry-level employment. Taking classes in math and science, such as anatomy, biology, chemistry, physiology, and physics, can help prepare students for their college studies and future careers.Earn an associate degree. Entry-level radiologic positions typically require at least an Associate of Applied Science. Before enrolling in one of these degree programs, students should make sure it has been properly accredited by the Joint Review Committee on Education in Radiologic Technology (JRCERT).Get licensed or certified in the state of Michigan."
]

# load model with tokenizer
model = AutoModel.from_pretrained('nvidia/NV-Embed-v1', trust_remote_code=True)

# get the embeddings
max_length = 4096
query_embeddings = model.encode(queries, instruction=query_prefix, max_length=max_length)
passage_embeddings = model.encode(passages, instruction=passage_prefix, max_length=max_length)

# normalize embeddings
query_embeddings = F.normalize(query_embeddings, p=2, dim=1)
passage_embeddings = F.normalize(passage_embeddings, p=2, dim=1)

# get the embeddings with DataLoader (splitting the datasets into multiple mini-batches)
# batch_size = 2
# query_embeddings = model._do_encode(queries, batch_size=batch_size, instruction=query_prefix, max_length=max_length, num_workers=32, return_numpy=True)
# passage_embeddings = model._do_encode(passages, batch_size=batch_size, instruction=passage_prefix, max_length=max_length, num_workers=32, return_numpy=True)

scores = (query_embeddings @ passage_embeddings.T) * 100
print(scores.tolist())
# [[77.9402084350586, 0.4248958230018616], [3.757718086242676, 79.60113525390625]]
```

### Usage (Sentence-Transformers)

```python
import torch
from sentence_transformers import SentenceTransformer

# Each query needs to be accompanied by a corresponding instruction describing the task.
task_name_to_instruct = {"example": "Given a question, retrieve passages that answer the question",}

query_prefix = "Instruct: "+task_name_to_instruct["example"]+"\nQuery: "
queries = [
    'are judo throws allowed in wrestling?',
    'how to become a radiology technician in michigan?'
    ]

# No instruction needed for retrieval passages
passages = [
    "Since you're reading this, you are probably someone from a judo background or someone who is just wondering how judo techniques can be applied under wrestling rules. So without further ado, let's get to the question. Are Judo throws allowed in wrestling? Yes, judo throws are allowed in freestyle and folkstyle wrestling. You only need to be careful to follow the slam rules when executing judo throws. In wrestling, a slam is lifting and returning an opponent to the mat with unnecessary force.",
    "Below are the basic steps to becoming a radiologic technologist in Michigan:Earn a high school diploma. As with most careers in health care, a high school education is the first step to finding entry-level employment. Taking classes in math and science, such as anatomy, biology, chemistry, physiology, and physics, can help prepare students for their college studies and future careers.Earn an associate degree. Entry-level radiologic positions typically require at least an Associate of Applied Science. Before enrolling in one of these degree programs, students should make sure it has been properly accredited by the Joint Review Committee on Education in Radiologic Technology (JRCERT).Get licensed or certified in the state of Michigan."
]

# load model with tokenizer
model = SentenceTransformer('nvidia/NV-Embed-v1', trust_remote_code=True)
model.max_seq_length = 4096
model.tokenizer.padding_side = "right"

def add_eos(input_examples):
    input_examples = [input_example + model.tokenizer.eos_token for input_example in input_examples]
    return input_examples

# get the embeddings
batch_size = 2
query_embeddings = model.encode(add_eos(queries), batch_size=batch_size, prompt=query_prefix, normalize_embeddings=True)
passage_embeddings = model.encode(add_eos(passages), batch_size=batch_size, normalize_embeddings=True)

scores = (query_embeddings @ passage_embeddings.T) * 100
print(scores.tolist())
```

## Correspondence to
Chankyu Lee (chankyul@nvidia.com), Rajarshi Roy (rajarshir@nvidia.com), Wei Ping (wping@nvidia.com)

## Citation
If you find this code useful in your research, please consider citing:

```bibtex
@misc{lee2024nvembed,
      title={NV-Embed: Improved Techniques for Training LLMs as Generalist Embedding Models},
      author={Chankyu Lee and Rajarshi Roy and Mengyao Xu and Jonathan Raiman and Mohammad Shoeybi and Bryan Catanzaro and Wei Ping},
      year={2024},
      eprint={2405.17428},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

## License
This model should not be used for any commercial purpose. Refer to the [license](https://spdx.org/licenses/CC-BY-NC-4.0) for the detailed terms. For commercial purposes, we recommend the models of [NeMo Retriever Microservices (NIMs)](https://build.nvidia.com/explore/retrieval).

## Troubleshooting

#### 1. How to enable Multi-GPU (note: this applies to the HuggingFace Transformers path)

```python
from transformers import AutoModel
from torch.nn import DataParallel

embedding_model = AutoModel.from_pretrained("nvidia/NV-Embed-v1")
for module_key, module in embedding_model._modules.items():
    embedding_model._modules[module_key] = DataParallel(module)
```

#### 2. Required packages

If you have trouble, try installing the Python packages as below:

```bash
pip uninstall -y transformer-engine
pip install torch==2.2.0
pip install transformers==4.42.4
pip install flash-attn==2.2.0
pip install sentence-transformers==2.7.0
```

#### 3. Fixing "nvidia/NV-Embed-v1 is not the path to a directory containing a file named config.json"

Switch to your local model path, open `config.json`, and replace the value of **"_name_or_path"** with your local model path.

#### 4. Access to model nvidia/NV-Embed-v1 is restricted. You must be authenticated to access it

Use your Hugging Face access [token](https://huggingface.co/settings/tokens) to run `huggingface-cli login`.

#### 5. How to resolve a slight mismatch in Sentence-Transformers results

A slight mismatch in the Sentence-Transformers implementation is caused by a discrepancy in the calculation of the instruction prefix length within the Sentence-Transformers package. To fix this issue, build the Sentence-Transformers package from source, making the necessary modification to this [line](https://github.com/UKPLab/sentence-transformers/blob/v2.7-release/sentence_transformers/SentenceTransformer.py#L353) as below:

```bash
git clone https://github.com/UKPLab/sentence-transformers.git
cd sentence-transformers
git checkout v2.7-release
# Modify L353 in SentenceTransformer.py to 'extra_features["prompt_length"] = tokenized_prompt["input_ids"].shape[-1]'.
pip install -e .
```
microsoft/trocr-large-printed
microsoft
"2024-05-27T20:09:18Z"
289,245
134
transformers
[ "transformers", "pytorch", "safetensors", "vision-encoder-decoder", "image-text-to-text", "trocr", "image-to-text", "arxiv:2109.10282", "endpoints_compatible", "region:us" ]
image-to-text
"2022-03-02T23:29:05Z"
---
tags:
- trocr
- image-to-text
widget:
- src: https://layoutlm.blob.core.windows.net/trocr/dataset/SROIE2019Task2Crop/train/X00016469612_1.jpg
  example_title: Printed 1
- src: https://layoutlm.blob.core.windows.net/trocr/dataset/SROIE2019Task2Crop/train/X51005255805_7.jpg
  example_title: Printed 2
- src: https://layoutlm.blob.core.windows.net/trocr/dataset/SROIE2019Task2Crop/train/X51005745214_6.jpg
  example_title: Printed 3
---

# TrOCR (large-sized model, fine-tuned on SROIE)

TrOCR model fine-tuned on the [SROIE dataset](https://rrc.cvc.uab.es/?ch=13). It was introduced in the paper [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Li et al. and first released in [this repository](https://github.com/microsoft/unilm/tree/master/trocr).

Disclaimer: The team releasing TrOCR did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

The TrOCR model is an encoder-decoder model, consisting of an image Transformer as encoder and a text Transformer as decoder. The image encoder was initialized from the weights of BEiT, while the text decoder was initialized from the weights of RoBERTa.

Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. Absolute position embeddings are added before feeding the sequence to the layers of the Transformer encoder. The Transformer text decoder then autoregressively generates tokens.

## Intended uses & limitations

You can use the raw model for optical character recognition (OCR) on single text-line images. See the [model hub](https://huggingface.co/models?search=microsoft/trocr) to look for fine-tuned versions on a task that interests you.

### How to use

Here is how to use this model in PyTorch:

```python
from transformers import TrOCRProcessor, VisionEncoderDecoderModel
from PIL import Image
import requests

# load image from the IAM database (actually this model is meant to be used on printed text)
url = 'https://fki.tic.heia-fr.ch/static/img/a01-122-02-00.jpg'
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

processor = TrOCRProcessor.from_pretrained('microsoft/trocr-large-printed')
model = VisionEncoderDecoderModel.from_pretrained('microsoft/trocr-large-printed')
pixel_values = processor(images=image, return_tensors="pt").pixel_values

generated_ids = model.generate(pixel_values)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(generated_text)
```

### BibTeX entry and citation info

```bibtex
@misc{li2021trocr,
      title={TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models},
      author={Minghao Li and Tengchao Lv and Lei Cui and Yijuan Lu and Dinei Florencio and Cha Zhang and Zhoujun Li and Furu Wei},
      year={2021},
      eprint={2109.10282},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
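Since the model operates on single text-line crops, several lines from the same page can be batched through the processor in one call. A minimal sketch follows, under the assumption that `line_images` is a hypothetical list of PIL crops you have already extracted with an upstream text-detection step:

```python
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

processor = TrOCRProcessor.from_pretrained('microsoft/trocr-large-printed')
model = VisionEncoderDecoderModel.from_pretrained('microsoft/trocr-large-printed')

# `line_images` is assumed to be a list of PIL.Image crops, one text line each
pixel_values = processor(images=line_images, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values)
texts = processor.batch_decode(generated_ids, skip_special_tokens=True)
for text in texts:
    print(text)
```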
jonatasgrosman/wav2vec2-xls-r-1b-portuguese
jonatasgrosman
"2022-12-14T02:02:02Z"
289,213
10
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "pt", "robust-speech-event", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2022-03-02T23:29:05Z"
---
language:
- pt
license: apache-2.0
tags:
- automatic-speech-recognition
- hf-asr-leaderboard
- mozilla-foundation/common_voice_8_0
- pt
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: XLS-R Wav2Vec2 Portuguese by Jonatas Grosman
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice 8
      type: mozilla-foundation/common_voice_8_0
      args: pt
    metrics:
    - name: Test WER
      type: wer
      value: 8.7
    - name: Test CER
      type: cer
      value: 2.55
    - name: Test WER (+LM)
      type: wer
      value: 6.04
    - name: Test CER (+LM)
      type: cer
      value: 1.98
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Robust Speech Event - Dev Data
      type: speech-recognition-community-v2/dev_data
      args: pt
    metrics:
    - name: Dev WER
      type: wer
      value: 24.23
    - name: Dev CER
      type: cer
      value: 11.3
    - name: Dev WER (+LM)
      type: wer
      value: 19.41
    - name: Dev CER (+LM)
      type: cer
      value: 10.19
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Robust Speech Event - Test Data
      type: speech-recognition-community-v2/eval_data
      args: pt
    metrics:
    - name: Test WER
      type: wer
      value: 18.8
---

# Fine-tuned XLS-R 1B model for speech recognition in Portuguese

Fine-tuned [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on Portuguese using the train and validation splits of [Common Voice 8.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0), [CORAA](https://github.com/nilc-nlp/CORAA), [Multilingual TEDx](http://www.openslr.org/100), and [Multilingual LibriSpeech](https://www.openslr.org/94/). When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned with the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool, thanks to the GPU credits generously given by the [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :)

## Usage

Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library:

```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-xls-r-1b-portuguese")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
```

Writing your own inference script:

```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

LANG_ID = "pt"
MODEL_ID = "jonatasgrosman/wav2vec2-xls-r-1b-portuguese"
SAMPLES = 10

test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")

processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
    batch["speech"] = speech_array
    batch["sentence"] = batch["sentence"].upper()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)
```

## Evaluation Commands

1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`:

```bash
python eval.py --model_id jonatasgrosman/wav2vec2-xls-r-1b-portuguese --dataset mozilla-foundation/common_voice_8_0 --config pt --split test
```

2. To evaluate on `speech-recognition-community-v2/dev_data`:

```bash
python eval.py --model_id jonatasgrosman/wav2vec2-xls-r-1b-portuguese --dataset speech-recognition-community-v2/dev_data --config pt --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```

## Citation

If you want to cite this model, you can use this:

```bibtex
@misc{grosman2021xlsr-1b-portuguese,
  title={Fine-tuned {XLS-R} 1{B} model for speech recognition in {P}ortuguese},
  author={Grosman, Jonatas},
  howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-xls-r-1b-portuguese}},
  year={2022}
}
```
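For a quick local sanity check of the transcriptions produced by the inference script above, without running the full `eval.py` pipeline, the error rates can be computed with the `jiwer` package. A minimal sketch, reusing `test_dataset` and `predicted_sentences` from that script; note the numbers will not exactly match the reported scores, since the official evaluation applies additional text normalization:

```python
# pip install jiwer
from jiwer import cer, wer

# `test_dataset` and `predicted_sentences` come from the inference script above;
# the reference sentences were already upper-cased there.
references = test_dataset["sentence"]
print("WER:", wer(references, predicted_sentences))
print("CER:", cer(references, predicted_sentences))
```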