Dataset schema (one row per model):

| Column | Type | Range / distinct values |
|---|---|---|
| `modelId` | string | length 4–122 |
| `author` | string | length 2–42 |
| `last_modified` | unknown | — |
| `downloads` | int64 | 0–392M |
| `likes` | int64 | 0–6.56k |
| `library_name` | string (categorical) | 368 values |
| `tags` | sequence | length 1–4.05k |
| `pipeline_tag` | string (categorical) | 51 values |
| `createdAt` | unknown | — |
| `card` | string | length 1–1M |

The rows below are sample records, ordered by download count.
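A table with this schema can be consumed directly with the `datasets` library; a minimal sketch, where `"your-org/model-cards"` is a hypothetical placeholder for the actual dataset ID (not given here):

```python
# Minimal sketch: load the model-card dataset and inspect the most-downloaded rows.
# NOTE: "your-org/model-cards" is a hypothetical placeholder dataset ID.
from datasets import load_dataset

ds = load_dataset("your-org/model-cards", split="train")

# Keep rows with at least one million downloads, most-downloaded first.
popular = ds.filter(lambda row: row["downloads"] >= 1_000_000)
popular = popular.sort("downloads", reverse=True)

for row in popular.select(range(min(5, len(popular)))):
    print(row["modelId"], row["downloads"], row["pipeline_tag"])
```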
brunopio/Llama3-8B-1.58-100B-tokens-GGUF
brunopio
"2024-09-19T16:53:01Z"
1,252,742
9
transformers
[ "transformers", "safetensors", "gguf", "llama", "text-generation", "conversational", "base_model:HF1BitLLM/Llama3-8B-1.58-100B-tokens", "base_model:quantized:HF1BitLLM/Llama3-8B-1.58-100B-tokens", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "bitnet", "region:us" ]
text-generation
"2024-09-19T15:40:43Z"
---
library_name: transformers
base_model:
- meta-llama/Meta-Llama-3-8B-Instruct
- HF1BitLLM/Llama3-8B-1.58-100B-tokens
---

# Llama3-8B-1.58-100B-tokens-GGUF

### Llama3-8B-1.58 Models

This model was converted to GGUF format from [HF1BitLLM/Llama3-8B-1.58-100B-tokens](https://huggingface.co/HF1BitLLM/Llama3-8B-1.58-100B-tokens) using llama.cpp.

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo brunopio/Llama3-8B-1.58-100B-tokens-GGUF --hf-file Llama3-8B-1.58-100B-tokens-GGUF -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo brunopio/Llama3-8B-1.58-100B-tokens-GGUF --hf-file Llama3-8B-1.58-100B-tokens-GGUF -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo brunopio/Llama3-8B-1.58-100B-tokens-GGUF --hf-file Llama3-8B-1.58-100B-tokens-GGUF -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo brunopio/Llama3-8B-1.58-100B-tokens-GGUF --hf-file Llama3-8B-1.58-100B-tokens-GGUF -c 2048
```
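The same checkpoint can also be driven from Python through llama-cpp-python; a minimal sketch, noting that the exact `.gguf` filename inside the repo is an assumption (the `filename` argument accepts a glob pattern, so list the repo's files and substitute the one you want):

```python
# Minimal sketch: load this GGUF repo with llama-cpp-python instead of the CLI.
# NOTE: "*.gguf" is a glob-pattern assumption; pin it to the exact quant file.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="brunopio/Llama3-8B-1.58-100B-tokens-GGUF",
    filename="*.gguf",   # matched against the repo's file list
    n_ctx=2048,          # mirrors the -c 2048 used in the server example above
)

out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```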
MaziyarPanahi/gemma-2-2b-it-GGUF
MaziyarPanahi
"2024-08-01T08:01:55Z"
1,252,456
4
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "text-generation", "base_model:google/gemma-2-2b-it", "base_model:quantized:google/gemma-2-2b-it", "region:us", "imatrix", "conversational" ]
text-generation
"2024-08-01T07:46:41Z"
---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
model_name: gemma-2-2b-it-GGUF
base_model: google/gemma-2-2b-it
inference: false
model_creator: google
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---

# [MaziyarPanahi/gemma-2-2b-it-GGUF](https://huggingface.co/MaziyarPanahi/gemma-2-2b-it-GGUF)
- Model creator: [google](https://huggingface.co/google)
- Original model: [google/gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it)

## Description

[MaziyarPanahi/gemma-2-2b-it-GGUF](https://huggingface.co/MaziyarPanahi/gemma-2-2b-it-GGUF) contains GGUF format model files for [google/gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it). A sketch for fetching a single quantization level follows at the end of this card.

### About GGUF

GGUF is a format introduced by the llama.cpp team on August 21st, 2023, as a replacement for GGML, which is no longer supported by llama.cpp.

Here is an incomplete list of clients and libraries that are known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. A Linux build is available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU acceleration across all platforms and architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open-source local GUI, supporting Windows, Linux, and macOS with full GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server. Note: as of the time of writing (November 27th, 2023), ctransformers has not been updated in a long time and does not support many recent models.

## Special thanks

🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
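Since the repo ships multiple quantization levels (2-bit through 8-bit), it is usually cheaper to fetch just one; a minimal sketch with `huggingface_hub`, where the `"*Q4_K_M*"` pattern is an assumption about the file naming (check the repo's file list for the quant names actually published):

```python
# Minimal sketch: download a single quantization level instead of the whole repo.
# NOTE: "*Q4_K_M*" is an assumed naming pattern; verify against the repo files.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="MaziyarPanahi/gemma-2-2b-it-GGUF",
    allow_patterns=["*Q4_K_M*"],
    local_dir="gemma-2-2b-it-GGUF",
)
```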
MaziyarPanahi/Llama-3-8B-Instruct-v0.10-GGUF
MaziyarPanahi
"2024-06-04T21:06:02Z"
1,249,532
2
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "text-generation", "llama-3", "llama", "base_model:MaziyarPanahi/Llama-3-8B-Instruct-v0.10", "base_model:quantized:MaziyarPanahi/Llama-3-8B-Instruct-v0.10", "region:us", "imatrix", "conversational" ]
text-generation
"2024-06-04T19:52:45Z"
---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
- llama-3
- llama
model_name: Llama-3-8B-Instruct-v0.10-GGUF
base_model: MaziyarPanahi/Llama-3-8B-Instruct-v0.10
inference: false
model_creator: MaziyarPanahi
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---

# [MaziyarPanahi/Llama-3-8B-Instruct-v0.10-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-v0.10-GGUF)
- Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi)
- Original model: [MaziyarPanahi/Llama-3-8B-Instruct-v0.10](https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-v0.10)

## Description

[MaziyarPanahi/Llama-3-8B-Instruct-v0.10-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-v0.10-GGUF) contains GGUF format model files for [MaziyarPanahi/Llama-3-8B-Instruct-v0.10](https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-v0.10).

### About GGUF

GGUF is a format introduced by the llama.cpp team on August 21st, 2023, as a replacement for GGML, which is no longer supported by llama.cpp.

Here is an incomplete list of clients and libraries that are known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. A Linux build is available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU acceleration across all platforms and architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open-source local GUI, supporting Windows, Linux, and macOS with full GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server. Note: as of the time of writing (November 27th, 2023), ctransformers has not been updated in a long time and does not support many recent models.

## Special thanks

🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
MaziyarPanahi/Phi-3.5-mini-instruct-GGUF
MaziyarPanahi
"2024-08-20T20:24:18Z"
1,247,683
4
null
[ "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "text-generation", "base_model:microsoft/Phi-3.5-mini-instruct", "base_model:quantized:microsoft/Phi-3.5-mini-instruct", "region:us", "imatrix", "conversational" ]
text-generation
"2024-08-20T20:07:57Z"
---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
model_name: Phi-3.5-mini-instruct-GGUF
base_model: microsoft/Phi-3.5-mini-instruct
inference: false
model_creator: microsoft
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---

# [MaziyarPanahi/Phi-3.5-mini-instruct-GGUF](https://huggingface.co/MaziyarPanahi/Phi-3.5-mini-instruct-GGUF)
- Model creator: [microsoft](https://huggingface.co/microsoft)
- Original model: [microsoft/Phi-3.5-mini-instruct](https://huggingface.co/microsoft/Phi-3.5-mini-instruct)

## Description

[MaziyarPanahi/Phi-3.5-mini-instruct-GGUF](https://huggingface.co/MaziyarPanahi/Phi-3.5-mini-instruct-GGUF) contains GGUF format model files for [microsoft/Phi-3.5-mini-instruct](https://huggingface.co/microsoft/Phi-3.5-mini-instruct).

### About GGUF

GGUF is a format introduced by the llama.cpp team on August 21st, 2023, as a replacement for GGML, which is no longer supported by llama.cpp.

Here is an incomplete list of clients and libraries that are known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. A Linux build is available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU acceleration across all platforms and architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open-source local GUI, supporting Windows, Linux, and macOS with full GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server. Note: as of the time of writing (November 27th, 2023), ctransformers has not been updated in a long time and does not support many recent models.

## Special thanks

🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
MaziyarPanahi/Yi-1.5-6B-Chat-GGUF
MaziyarPanahi
"2024-05-12T20:34:51Z"
1,247,577
9
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "llama", "text-generation", "conversational", "arxiv:2403.04652", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us", "base_model:01-ai/Yi-1.5-6B-Chat", "base_model:quantized:01-ai/Yi-1.5-6B-Chat" ]
text-generation
"2024-05-12T20:19:22Z"
---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- llama
- text-generation
- conversational
- arxiv:2403.04652
- license:apache-2.0
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
model_name: Yi-1.5-6B-Chat-GGUF
base_model: 01-ai/Yi-1.5-6B-Chat
inference: false
model_creator: 01-ai
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---

# [MaziyarPanahi/Yi-1.5-6B-Chat-GGUF](https://huggingface.co/MaziyarPanahi/Yi-1.5-6B-Chat-GGUF)
- Model creator: [01-ai](https://huggingface.co/01-ai)
- Original model: [01-ai/Yi-1.5-6B-Chat](https://huggingface.co/01-ai/Yi-1.5-6B-Chat)

## Description

[MaziyarPanahi/Yi-1.5-6B-Chat-GGUF](https://huggingface.co/MaziyarPanahi/Yi-1.5-6B-Chat-GGUF) contains GGUF format model files for [01-ai/Yi-1.5-6B-Chat](https://huggingface.co/01-ai/Yi-1.5-6B-Chat).

### About GGUF

GGUF is a format introduced by the llama.cpp team on August 21st, 2023, as a replacement for GGML, which is no longer supported by llama.cpp.

Here is an incomplete list of clients and libraries that are known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. A Linux build is available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU acceleration across all platforms and architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open-source local GUI, supporting Windows, Linux, and macOS with full GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server. Note: as of the time of writing (November 27th, 2023), ctransformers has not been updated in a long time and does not support many recent models.

## Special thanks

🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
jondurbin/unstuffer-v0.2
jondurbin
"2024-09-19T20:09:36Z"
1,245,915
0
null
[ "safetensors", "roberta", "license:mit", "region:us" ]
null
"2024-09-19T20:05:59Z"
---
license: mit
---
MaziyarPanahi/Yi-Coder-9B-Chat-GGUF
MaziyarPanahi
"2024-09-04T16:22:07Z"
1,245,737
2
null
[ "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "text-generation", "base_model:01-ai/Yi-Coder-9B-Chat", "base_model:quantized:01-ai/Yi-Coder-9B-Chat", "region:us", "imatrix", "conversational" ]
text-generation
"2024-09-04T14:25:06Z"
---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
model_name: Yi-Coder-9B-Chat-GGUF
base_model: 01-ai/Yi-Coder-9B-Chat
inference: false
model_creator: 01-ai
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---

# [MaziyarPanahi/Yi-Coder-9B-Chat-GGUF](https://huggingface.co/MaziyarPanahi/Yi-Coder-9B-Chat-GGUF)
- Model creator: [01-ai](https://huggingface.co/01-ai)
- Original model: [01-ai/Yi-Coder-9B-Chat](https://huggingface.co/01-ai/Yi-Coder-9B-Chat)

## Description

[MaziyarPanahi/Yi-Coder-9B-Chat-GGUF](https://huggingface.co/MaziyarPanahi/Yi-Coder-9B-Chat-GGUF) contains GGUF format model files for [01-ai/Yi-Coder-9B-Chat](https://huggingface.co/01-ai/Yi-Coder-9B-Chat).

### About GGUF

GGUF is a format introduced by the llama.cpp team on August 21st, 2023, as a replacement for GGML, which is no longer supported by llama.cpp.

Here is an incomplete list of clients and libraries that are known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. A Linux build is available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU acceleration across all platforms and architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open-source local GUI, supporting Windows, Linux, and macOS with full GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server. Note: as of the time of writing (November 27th, 2023), ctransformers has not been updated in a long time and does not support many recent models.

## Special thanks

🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
MaziyarPanahi/Llama-3-Groq-8B-Tool-Use-GGUF
MaziyarPanahi
"2024-07-17T09:25:57Z"
1,245,631
5
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "text-generation", "base_model:Groq/Llama-3-Groq-8B-Tool-Use", "base_model:quantized:Groq/Llama-3-Groq-8B-Tool-Use", "region:us", "imatrix", "conversational" ]
text-generation
"2024-07-17T08:47:20Z"
---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
model_name: Llama-3-Groq-8B-Tool-Use-GGUF
base_model: Groq/Llama-3-Groq-8B-Tool-Use
inference: false
model_creator: Groq
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---

# [MaziyarPanahi/Llama-3-Groq-8B-Tool-Use-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3-Groq-8B-Tool-Use-GGUF)
- Model creator: [Groq](https://huggingface.co/Groq)
- Original model: [Groq/Llama-3-Groq-8B-Tool-Use](https://huggingface.co/Groq/Llama-3-Groq-8B-Tool-Use)

## Description

[MaziyarPanahi/Llama-3-Groq-8B-Tool-Use-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3-Groq-8B-Tool-Use-GGUF) contains GGUF format model files for [Groq/Llama-3-Groq-8B-Tool-Use](https://huggingface.co/Groq/Llama-3-Groq-8B-Tool-Use).

### About GGUF

GGUF is a format introduced by the llama.cpp team on August 21st, 2023, as a replacement for GGML, which is no longer supported by llama.cpp.

Here is an incomplete list of clients and libraries that are known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. A Linux build is available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU acceleration across all platforms and architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open-source local GUI, supporting Windows, Linux, and macOS with full GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server. Note: as of the time of writing (November 27th, 2023), ctransformers has not been updated in a long time and does not support many recent models.

## Special thanks

🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
MaziyarPanahi/calme-2.3-legalkit-8b-GGUF
MaziyarPanahi
"2024-08-07T10:57:23Z"
1,245,061
8
null
[ "gguf", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "text-generation", "base_model:MaziyarPanahi/calme-2.3-legalkit-8b", "base_model:quantized:MaziyarPanahi/calme-2.3-legalkit-8b", "region:us", "imatrix", "conversational" ]
text-generation
"2024-08-07T09:47:45Z"
---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
model_name: calme-2.3-legalkit-8b-GGUF
base_model: MaziyarPanahi/calme-2.3-legalkit-8b
inference: false
model_creator: MaziyarPanahi
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---

# [MaziyarPanahi/calme-2.3-legalkit-8b-GGUF](https://huggingface.co/MaziyarPanahi/calme-2.3-legalkit-8b-GGUF)
- Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi)
- Original model: [MaziyarPanahi/calme-2.3-legalkit-8b](https://huggingface.co/MaziyarPanahi/calme-2.3-legalkit-8b)

## Description

[MaziyarPanahi/calme-2.3-legalkit-8b-GGUF](https://huggingface.co/MaziyarPanahi/calme-2.3-legalkit-8b-GGUF) contains GGUF format model files for [MaziyarPanahi/calme-2.3-legalkit-8b](https://huggingface.co/MaziyarPanahi/calme-2.3-legalkit-8b).

### About GGUF

GGUF is a format introduced by the llama.cpp team on August 21st, 2023, as a replacement for GGML, which is no longer supported by llama.cpp.

Here is an incomplete list of clients and libraries that are known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. A Linux build is available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU acceleration across all platforms and architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open-source local GUI, supporting Windows, Linux, and macOS with full GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server. Note: as of the time of writing (November 27th, 2023), ctransformers has not been updated in a long time and does not support many recent models.

## Special thanks

🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
MaziyarPanahi/Meta-Llama-3.1-8B-Instruct-GGUF
MaziyarPanahi
"2024-07-23T17:40:37Z"
1,244,696
14
null
[ "gguf", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "text-generation", "en", "de", "fr", "it", "pt", "hi", "es", "th", "arxiv:2204.05149", "base_model:meta-llama/Llama-3.1-8B-Instruct", "base_model:quantized:meta-llama/Llama-3.1-8B-Instruct", "license:llama3.1", "region:us", "imatrix", "conversational" ]
text-generation
"2024-07-23T16:17:10Z"
---
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
model_name: Meta-Llama-3.1-8B-Instruct-GGUF
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
inference: false
model_creator: meta-llama
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
license: llama3.1
---

# [MaziyarPanahi/Meta-Llama-3.1-8B-Instruct-GGUF](https://huggingface.co/MaziyarPanahi/Meta-Llama-3.1-8B-Instruct-GGUF)
- Model creator: [meta-llama](https://huggingface.co/meta-llama)
- Original model: [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct)

## Description

[MaziyarPanahi/Meta-Llama-3.1-8B-Instruct-GGUF](https://huggingface.co/MaziyarPanahi/Meta-Llama-3.1-8B-Instruct-GGUF) contains GGUF format model files for [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct).

### About GGUF

GGUF is a format introduced by the llama.cpp team on August 21st, 2023, as a replacement for GGML, which is no longer supported by llama.cpp.

Here is an incomplete list of clients and libraries that are known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. A Linux build is available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU acceleration across all platforms and architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open-source local GUI, supporting Windows, Linux, and macOS with full GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server. Note: as of the time of writing (November 27th, 2023), ctransformers has not been updated in a long time and does not support many recent models.

## Special thanks

🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.

---

**Original README:**

## Model Information

The Meta Llama 3.1 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction tuned generative models in 8B, 70B and 405B sizes (text in/text out).
The Llama 3.1 instruction tuned text only models (8B, 70B, 405B) are optimized for multilingual dialogue use cases and outperform many of the available open source and closed chat models on common industry benchmarks.

**Model developer:** Meta

**Model Architecture:** Llama 3.1 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.

| | Training Data | Params | Input modalities | Output modalities | Context length | GQA | Token count | Knowledge cutoff |
|---|---|---|---|---|---|---|---|---|
| Llama 3.1 (text only) | A new mix of publicly available online data. | 8B | Multilingual Text | Multilingual Text and code | 128k | Yes | 15T+ | December 2023 |
| Llama 3.1 (text only) | A new mix of publicly available online data. | 70B | Multilingual Text | Multilingual Text and code | 128k | Yes | 15T+ | December 2023 |
| Llama 3.1 (text only) | A new mix of publicly available online data. | 405B | Multilingual Text | Multilingual Text and code | 128k | Yes | 15T+ | December 2023 |

**Supported languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai.

**Llama 3.1 family of models.** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.

**Model Release Date:** July 23, 2024.

**Status:** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.

**License:** A custom commercial license, the Llama 3.1 Community License, is available at: [https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/LICENSE](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/LICENSE)

**Where to send questions or comments about the model:** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3.1 in applications, please go [here](https://github.com/meta-llama/llama-recipes).

## Intended Use

**Intended Use Cases** Llama 3.1 is intended for commercial and research use in multiple languages. Instruction tuned text only models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. The Llama 3.1 model collection also supports the ability to leverage the outputs of its models to improve other models, including synthetic data generation and distillation. The Llama 3.1 Community License allows for these use cases.

**Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.1 Community License. Use in languages beyond those explicitly referenced as supported in this model card.

**<span style="text-decoration:underline;">Note</span>:** Llama 3.1 has been trained on a broader collection of languages than the 8 supported languages.
Developers may fine-tune Llama 3.1 models for languages beyond the 8 supported languages provided they comply with the Llama 3.1 Community License and the Acceptable Use Policy, and in such cases are responsible for ensuring that any use of Llama 3.1 in additional languages is done in a safe and responsible manner.

## How to use

This repository contains two versions of Meta-Llama-3.1-8B-Instruct, for use with transformers and with the original `llama` codebase.

### Use with transformers

Starting with `transformers >= 4.43.0` onward, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function. Make sure to update your transformers installation via `pip install --upgrade transformers`.

```python
import transformers
import torch

model_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

outputs = pipeline(
    messages,
    max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```

Note: You can also find detailed recipes on how to use the model locally, with `torch.compile()`, assisted generation, quantization, and more at [`huggingface-llama-recipes`](https://github.com/huggingface/huggingface-llama-recipes).
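The card mentions the Auto classes with `generate()` as an alternative to the `pipeline` call above; a minimal sketch of that path, assuming `transformers >= 4.43.0` (the generation parameters here are illustrative):

```python
# Minimal sketch: conversational inference via the Auto classes + generate(),
# as an alternative to the pipeline example above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

# Apply the model's chat template, then generate and decode only the new tokens.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```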
### Use with `llama`

Please follow the instructions in the [repository](https://github.com/meta-llama/llama).

To download Original checkpoints, see the example command below leveraging `huggingface-cli`:

```
huggingface-cli download meta-llama/Meta-Llama-3.1-8B-Instruct --include "original/*" --local-dir Meta-Llama-3.1-8B-Instruct
```

## Hardware and Software

**Training Factors** We used custom training libraries, Meta's custom built GPU cluster, and production infrastructure for pretraining. Fine-tuning, annotation, and evaluation were also performed on production infrastructure.

**Training utilized a cumulative of** 39.3M GPU hours of computation on H100-80GB (TDP of 700W) type hardware, per the table below. Training time is the total GPU time required for training each model, and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency.

**Training Greenhouse Gas Emissions** Estimated total location-based greenhouse gas emissions were **11,390** tons CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with renewable energy; therefore, the total market-based greenhouse gas emissions for training were 0 tons CO2eq.

| | Training Time (GPU hours) | Training Power Consumption (W) | Training Location-Based Greenhouse Gas Emissions (tons CO2eq) | Training Market-Based Greenhouse Gas Emissions (tons CO2eq) |
|---|---|---|---|---|
| Llama 3.1 8B | 1.46M | 700 | 420 | 0 |
| Llama 3.1 70B | 7.0M | 700 | 2,040 | 0 |
| Llama 3.1 405B | 30.84M | 700 | 8,930 | 0 |
| Total | 39.3M | | 11,390 | 0 |

The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others.

## Training Data

**Overview:** Llama 3.1 was pretrained on ~15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 25M synthetically generated examples.

**Data Freshness:** The pretraining data has a cutoff of December 2023.

## Benchmark scores

In this section, we report the results for Llama 3.1 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library.

### Base pretrained models

| Category | Benchmark | # Shots | Metric | Llama 3 8B | Llama 3.1 8B | Llama 3 70B | Llama 3.1 70B | Llama 3.1 405B |
|---|---|---|---|---|---|---|---|---|
| General | MMLU | 5 | macro_avg/acc_char | 66.7 | 66.7 | 79.5 | 79.3 | 85.2 |
| General | MMLU-Pro (CoT) | 5 | macro_avg/acc_char | 36.2 | 37.1 | 55.0 | 53.8 | 61.6 |
| General | AGIEval English | 3-5 | average/acc_char | 47.1 | 47.8 | 63.0 | 64.6 | 71.6 |
| General | CommonSenseQA | 7 | acc_char | 72.6 | 75.0 | 83.8 | 84.1 | 85.8 |
| General | Winogrande | 5 | acc_char | - | 60.5 | - | 83.3 | 86.7 |
| General | BIG-Bench Hard (CoT) | 3 | average/em | 61.1 | 64.2 | 81.3 | 81.6 | 85.9 |
| General | ARC-Challenge | 25 | acc_char | 79.4 | 79.7 | 93.1 | 92.9 | 96.1 |
| Knowledge reasoning | TriviaQA-Wiki | 5 | em | 78.5 | 77.6 | 89.7 | 89.8 | 91.8 |
| Reading comprehension | SQuAD | 1 | em | 76.4 | 77.0 | 85.6 | 81.8 | 89.3 |
| Reading comprehension | QuAC (F1) | 1 | f1 | 44.4 | 44.9 | 51.1 | 51.1 | 53.6 |
| Reading comprehension | BoolQ | 0 | acc_char | 75.7 | 75.0 | 79.0 | 79.4 | 80.0 |
| Reading comprehension | DROP (F1) | 3 | f1 | 58.4 | 59.5 | 79.7 | 79.6 | 84.8 |

### Instruction tuned models

| Category | Benchmark | # Shots | Metric | Llama 3 8B Instruct | Llama 3.1 8B Instruct | Llama 3 70B Instruct | Llama 3.1 70B Instruct | Llama 3.1 405B Instruct |
|---|---|---|---|---|---|---|---|---|
| General | MMLU | 5 | macro_avg/acc | 68.5 | 69.4 | 82.0 | 83.6 | 87.3 |
| General | MMLU (CoT) | 0 | macro_avg/acc | 65.3 | 73.0 | 80.9 | 86.0 | 88.6 |
| General | MMLU-Pro (CoT) | 5 | micro_avg/acc_char | 45.5 | 48.3 | 63.4 | 66.4 | 73.3 |
| General | IFEval | | | 76.8 | 80.4 | 82.9 | 87.5 | 88.6 |
| Reasoning | ARC-C | 0 | acc | 82.4 | 83.4 | 94.4 | 94.8 | 96.9 |
| Reasoning | GPQA | 0 | em | 34.6 | 30.4 | 39.5 | 41.7 | 50.7 |
| Code | HumanEval | 0 | pass@1 | 60.4 | 72.6 | 81.7 | 80.5 | 89.0 |
| Code | MBPP ++ base version | 0 | pass@1 | 70.6 | 72.8 | 82.5 | 86.0 | 88.6 |
| Code | Multipl-E HumanEval | 0 | pass@1 | - | 50.8 | - | 65.5 | 75.2 |
| Code | Multipl-E MBPP | 0 | pass@1 | - | 52.4 | - | 62.0 | 65.7 |
| Math | GSM-8K (CoT) | 8 | em_maj1@1 | 80.6 | 84.5 | 93.0 | 95.1 | 96.8 |
| Math | MATH (CoT) | 0 | final_em | 29.1 | 51.9 | 51.0 | 68.0 | 73.8 |
| Tool Use | API-Bank | 0 | acc | 48.3 | 82.6 | 85.1 | 90.0 | 92.0 |
| Tool Use | BFCL | 0 | acc | 60.3 | 76.1 | 83.0 | 84.8 | 88.5 |
| Tool Use | Gorilla Benchmark API Bench | 0 | acc | 1.7 | 8.2 | 14.7 | 29.7 | 35.3 |
| Tool Use | Nexus (0-shot) | 0 | macro_avg/acc | 18.1 | 38.5 | 47.8 | 56.7 | 58.7 |
| Multilingual | Multilingual MGSM (CoT) | 0 | em | - | 68.9 | - | 86.9 | 91.6 |

#### Multilingual benchmarks

| Category | Benchmark | Language | Llama 3.1 8B | Llama 3.1 70B | Llama 3.1 405B |
|---|---|---|---|---|---|
| General | MMLU (5-shot, macro_avg/acc) | Portuguese | 62.12 | 80.13 | 84.95 |
| General | MMLU (5-shot, macro_avg/acc) | Spanish | 62.45 | 80.05 | 85.08 |
| General | MMLU (5-shot, macro_avg/acc) | Italian | 61.63 | 80.4 | 85.04 |
| General | MMLU (5-shot, macro_avg/acc) | German | 60.59 | 79.27 | 84.36 |
| General | MMLU (5-shot, macro_avg/acc) | French | 62.34 | 79.82 | 84.66 |
| General | MMLU (5-shot, macro_avg/acc) | Hindi | 50.88 | 74.52 | 80.31 |
| General | MMLU (5-shot, macro_avg/acc) | Thai | 50.32 | 72.95 | 78.21 |

## Responsibility & Safety

As part of our responsible release approach, we followed a three-pronged strategy to managing trust & safety risks:

* Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama.
* Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm.
* Provide protections for the community to help prevent the misuse of our models.

### Responsible deployment

Llama is a foundational technology designed to be used in a variety of use cases. Examples of how Meta's Llama models have been responsibly deployed can be found in our [Community Stories webpage](https://llama.meta.com/community-stories/). Our approach is to build the most helpful models, enabling the world to benefit from the technology's power, by aligning our model safety for the generic use cases and addressing a standard set of harms. Developers are then in the driver's seat to tailor safety for their use case, defining their own policy and deploying the models with the necessary safeguards in their Llama systems. Llama 3.1 was developed following the best practices outlined in our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/).

#### Llama 3.1 instruct

Our main objectives for conducting safety fine-tuning are to provide the research community with a valuable resource for studying the robustness of safety fine-tuning, as well as to offer developers a readily available, safe, and powerful model for various applications, reducing the workload required to deploy safe AI systems. For more details on the safety mitigations implemented, please read the Llama 3 paper.

**Fine-tuning data** We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We've developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control.

**Refusals and Tone** Building on the work we started with Llama 3, we put a great emphasis on model refusals to benign prompts as well as refusal tone. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines.

#### Llama 3.1 systems

**Large language models, including Llama 3.1, are not designed to be deployed in isolation but instead should be deployed as part of an overall AI system with additional safety guardrails as required.** Developers are expected to deploy system safeguards when building agentic systems. Safeguards are key to achieving the right helpfulness-safety alignment, as well as to mitigating safety and security risks inherent to the system and to any integration of the model or system with external tools.
As part of our responsible release approach, we provide the community with [safeguards](https://llama.meta.com/trust-and-safety/) that developers should deploy with Llama models or other LLMs, including Llama Guard 3, Prompt Guard and Code Shield. All our [reference implementation](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default so developers can benefit from system-level safety out of the box.

#### New capabilities

Note that this release introduces new capabilities, including a longer context window, multilingual inputs and outputs, and possible integrations by developers with third-party tools. Building with these new capabilities requires specific considerations in addition to the best practices that generally apply across all Generative AI use cases.

**Tool-use:** Just like in standard software development, developers are responsible for the integration of the LLM with the tools and services of their choice. They should define a clear policy for their use case and assess the integrity of the third-party services they use, so as to be aware of the safety and security limitations when using this capability. Refer to the Responsible Use Guide for best practices on the safe deployment of third-party safeguards.

**Multilinguality:** Llama 3.1 supports 7 languages in addition to English: French, German, Hindi, Italian, Portuguese, Spanish, and Thai. Llama may be able to output text in languages other than those that meet performance thresholds for safety and helpfulness. We strongly discourage developers from using this model to converse in non-supported languages without implementing fine-tuning and system controls in alignment with their policies and the best practices shared in the Responsible Use Guide.

### Evaluations

We evaluated Llama models for common use cases as well as specific capabilities. Common use case evaluations measure safety risks of systems for the most commonly built applications, including chatbots, coding assistants, and tool calls. We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Llama Guard 3 to filter input prompts and output responses. It is important to evaluate applications in context, and we recommend building a dedicated evaluation dataset for your use case. Prompt Guard and Code Shield are also available if relevant to the application.

Capability evaluations measure vulnerabilities of Llama models inherent to specific capabilities, for which we crafted dedicated benchmarks, including long context, multilingual, tool calls, coding, and memorization.

**Red teaming** For both scenarios, we conducted recurring red teaming exercises with the goal of discovering risks via adversarial prompting, and we used the learnings to improve our benchmarks and safety tuning datasets. We partnered early with subject-matter experts in critical risk areas to understand the nature of these real-world harms and how such models may lead to unintended harm for society. Based on these conversations, we derived a set of adversarial goals for the red team to attempt to achieve, such as extracting harmful information or reprogramming the model to act in a potentially harmful capacity. The red team consisted of experts in cybersecurity, adversarial machine learning, responsible AI, and integrity, in addition to multilingual content specialists with backgrounds in integrity issues in specific geographic markets.
### Critical and other risks

We specifically focused our efforts on mitigating the following critical risk areas:

**1. CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive materials) helpfulness**

To assess risks related to proliferation of chemical and biological weapons, we performed uplift testing designed to assess whether use of Llama 3.1 models could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons.

**2. Child Safety**

Child Safety risk assessments were conducted using a team of experts to assess the model's capability to produce outputs that could result in Child Safety risks, and to inform any necessary and appropriate risk mitigations via fine-tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective-based methodologies to assess the model risks along multiple attack vectors, including the additional languages Llama 3 is trained on. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content, while taking account of market-specific nuances and experiences.

**3. Cyber attack enablement**

Our cyber attack uplift study investigated whether LLMs can enhance human capabilities in hacking tasks, both in terms of skill level and speed.

Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks. This evaluation was distinct from previous studies that considered LLMs as interactive assistants. The primary objective was to assess whether these models could effectively function as independent agents in executing complex cyber-attacks without human intervention.

Our study of Llama-3.1-405B's social engineering uplift for cyber attackers was conducted to assess the effectiveness of AI models in aiding cyber threat actors in spear phishing campaigns. Please read our Llama 3.1 Cyber security whitepaper to learn more.

### Community

Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI, and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use, and are widely distributed across ecosystem partners, including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama).

We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta's Llama model for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists).
Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.

## Ethical Considerations and Limitations

The core values of Llama 3.1 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3.1 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.

But Llama 3.1 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3.1's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.1 models, developers should perform safety testing and tuning tailored to their specific applications of the model. Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development.
MaziyarPanahi/mathstral-7B-v0.1-GGUF
MaziyarPanahi
"2024-07-16T16:54:49Z"
1,244,382
6
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "text-generation", "base_model:mistralai/Mathstral-7B-v0.1", "base_model:quantized:mistralai/Mathstral-7B-v0.1", "region:us", "imatrix" ]
text-generation
"2024-07-16T15:06:23Z"
---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
model_name: mathstral-7B-v0.1-GGUF
base_model: mistralai/mathstral-7B-v0.1
inference: false
model_creator: mistralai
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---

# [MaziyarPanahi/mathstral-7B-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/mathstral-7B-v0.1-GGUF)
- Model creator: [mistralai](https://huggingface.co/mistralai)
- Original model: [mistralai/mathstral-7B-v0.1](https://huggingface.co/mistralai/mathstral-7B-v0.1)

## Description

[MaziyarPanahi/mathstral-7B-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/mathstral-7B-v0.1-GGUF) contains GGUF format model files for [mistralai/mathstral-7B-v0.1](https://huggingface.co/mistralai/mathstral-7B-v0.1).

### About GGUF

GGUF is a format introduced by the llama.cpp team on August 21st, 2023, as a replacement for GGML, which is no longer supported by llama.cpp.

Here is an incomplete list of clients and libraries that are known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. A Linux build is available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU acceleration across all platforms and architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open-source local GUI, supporting Windows, Linux, and macOS with full GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server. Note: as of the time of writing (November 27th, 2023), ctransformers has not been updated in a long time and does not support many recent models.

## Special thanks

🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.

---

**Original README**

# Model Card for Mathstral-7B-v0.1

Mathstral 7B is a model specializing in mathematical and scientific tasks, based on Mistral 7B. You can read more in the [official blog post](https://mistral.ai/news/mathstral/).
## Installation

It is recommended to use `mistralai/mathstral-7B-v0.1` with [mistral-inference](https://github.com/mistralai/mistral-inference).

```
pip install 'mistral_inference>=1.2.0'
```

## Download

```py
from huggingface_hub import snapshot_download
from pathlib import Path

mistral_models_path = Path.home().joinpath('mistral_models', 'mathstral-7B-v0.1')
mistral_models_path.mkdir(parents=True, exist_ok=True)

snapshot_download(repo_id="mistralai/mathstral-7B-v0.1", allow_patterns=["params.json", "consolidated.safetensors", "tokenizer.model.v3"], local_dir=mistral_models_path)
```

### Chat

After installing `mistral_inference`, a `mistral-chat` CLI command should be available in your environment.

```
mistral-chat $HOME/mistral_models/mathstral-7B-v0.1 --instruct --max_tokens 256
```

You can then start chatting with the model, *e.g.* prompt it with something like:

*"Albert likes to surf every week. Each surfing session lasts for 4 hours and costs $20 per hour. How much would Albert spend in 5 weeks?"*

## Evaluation

We evaluate Mathstral 7B and open-weight models of similar size on industry-standard benchmarks.

| Benchmarks | MATH | GSM8K (8-shot) | Odyssey Math maj@16 | GRE Math maj@16 | AMC 2023 maj@16 | AIME 2024 maj@16 |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: |
| Mathstral 7B | **56.6** | 77.1 | **37.2** | 56.9 | **42.4** | **2/30** |
| DeepSeek Math 7B | 44.4 | **80.6** | 27.6 | 44.6 | 28.0 | 0/30 |
| Llama3 8B | 28.4 | 75.4 | 24.0 | 26.2 | 34.4 | 0/30 |
| GLM4 9B | 50.2 | 48.8 | 18.9 | 46.2 | 36.0 | 1/30 |
| QWen2 7B | **56.8** | 32.7 | 24.8 | **58.5** | 35.2 | **2/30** |
| Gemma2 9B | 48.3 | 69.5 | 18.6 | 52.3 | 31.2 | 1/30 |

## The Mistral AI Team

Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Alok Kothari, Antoine Roux, Arthur Mensch, Audrey Herblin-Stoop, Augustin Garreau, Austin Birky, Bam4d, Baptiste Bout, Baudouin de Monicault, Blanche Savary, Carole Rambaud, Caroline Feldman, Devendra Singh Chaplot, Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger, Gaspard Blanchet, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona, Henri Roussez, Hichem Sattouf, Ian Mack, Jean-Malo Delignon, Jessica Chudnovsky, Justus Murke, Kartik Khandelwal, Lawrence Stewart, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat, Marie Torelli, Marie-Anne Lachaux, Marjorie Janiewicz, Mickaël Seznec, Nicolas Schuhl, Niklas Muhs, Olivier de Garrigues, Patrick von Platen, Paul Jacob, Pauline Buche, Pavan Kumar Reddy, Perry Savas, Pierre Stock, Romain Sauvestre, Sagar Vaze, Sandeep Subramanian, Saurabh Garg, Sophia Yang, Szymon Antoniak, Teven Le Scao, Thibault Schueller, Thibaut Lavril, Thomas Wang, Théophile Gervet, Timothée Lacroix, Valera Nemychnikova, Wendy Shang, William El Sayed, William Marshall
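The card above recommends `mistral-inference`; for users on plain `transformers`, here is a minimal, hedged sketch. It assumes the Hugging Face-format checkpoint ships a chat template (verify locally before relying on it); the repo id is taken from this card's metadata.

```python
# Hedged sketch: load Mathstral with plain transformers instead of the
# card's recommended mistral-inference path. Adjust dtype/device_map for
# your hardware; assumes the tokenizer provides a chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mathstral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "What is the derivative of x * exp(x)?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```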
MaziyarPanahi/Llama-3-8B-Instruct-v0.9-GGUF
MaziyarPanahi
"2024-05-30T18:07:10Z"
1,243,626
2
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "text-generation", "llama-3", "llama", "base_model:MaziyarPanahi/Llama-3-8B-Instruct-v0.9", "base_model:quantized:MaziyarPanahi/Llama-3-8B-Instruct-v0.9", "region:us", "imatrix", "conversational" ]
text-generation
"2024-05-30T14:33:03Z"
--- tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - text-generation - llama-3 - llama - text-generation model_name: Llama-3-8B-Instruct-v0.9-GGUF base_model: MaziyarPanahi/Llama-3-8B-Instruct-v0.9 inference: false model_creator: MaziyarPanahi pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/Llama-3-8B-Instruct-v0.9-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-v0.9-GGUF) - Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi) - Original model: [MaziyarPanahi/Llama-3-8B-Instruct-v0.9](https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-v0.9) ## Description [MaziyarPanahi/Llama-3-8B-Instruct-v0.9-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-v0.9-GGUF) contains GGUF format model files for [MaziyarPanahi/Llama-3-8B-Instruct-v0.9](https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-v0.9). ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. ## Special thanks 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
MaziyarPanahi/firefunction-v2-GGUF
MaziyarPanahi
"2024-06-20T08:48:24Z"
1,243,593
14
transformers
[ "transformers", "gguf", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "conversational", "function-calling", "text-generation-inference", "region:us", "base_model:fireworks-ai/llama-3-firefunction-v2", "base_model:quantized:fireworks-ai/llama-3-firefunction-v2", "license:llama3", "imatrix" ]
text-generation
"2024-06-19T12:47:26Z"
--- tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - transformers - safetensors - text-generation - conversational - function-calling - text-generation-inference - region:us - text-generation model_name: MaziyarPanahi/firefunction-v2-GGUF base_model: fireworks-ai/firefunction-v2 inference: false model_creator: fireworks-ai pipeline_tag: text-generation quantized_by: MaziyarPanahi license: llama3 --- # [MaziyarPanahi/firefunction-v2-GGUF](https://huggingface.co/MaziyarPanahi/firefunction-v2-GGUF) - Model creator: [fireworks-ai](https://huggingface.co/fireworks-ai) - Original model: [fireworks-ai/firefunction-v2](https://huggingface.co/fireworks-ai/firefunction-v2) ## Description [MaziyarPanahi/firefunction-v2-GGUF](https://huggingface.co/MaziyarPanahi/firefunction-v2-GGUF) contains GGUF format model files for [fireworks-ai/firefunction-v2](https://huggingface.co/fireworks-ai/firefunction-v2). ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. ## Special thanks 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. 
**Original README**

---

# FireFunction V2: Fireworks Function Calling Model

[**Try on Fireworks**](https://fireworks.ai/models/fireworks/firefunction-v2) | [**API Docs**](https://readme.fireworks.ai/docs/function-calling) | [**Demo App**](https://functional-chat.vercel.app/) | [**Discord**](https://discord.gg/mMqQxvFD9A)

<img src="https://cdn-uploads.huggingface.co/production/uploads/64b6f3a72f5a966b9722de88/nJNtxLzWswBDKK1iOZblb.png" alt="firefunction" width="400"/>

FireFunction is a state-of-the-art function-calling model with a commercially viable license. View detailed info in our [announcement blog](https://fireworks.ai/blog/firefunction-v2-launch-post).

Key info and highlights:

**Comparison with other models:**
- Competitive with GPT-4o at function-calling, scoring 0.81 vs 0.80 on a medley of public evaluations
- Trained on Llama 3 and retains Llama 3’s conversation and instruction-following capabilities, scoring 0.84 vs Llama 3’s 0.89 on MT bench
- Significant quality improvements over FireFunction v1 across the broad range of metrics

**General info:**

🐾 Successor of the [FireFunction](https://fireworks.ai/models/fireworks/firefunction-v1) model

🔆 Support of parallel function calling (unlike FireFunction v1) and good instruction following

💡 Hosted on the [Fireworks](https://fireworks.ai/models/fireworks/firefunction-v2) platform at < 10% of the cost of GPT-4o and 2x the speed
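The README above points to the hosted Fireworks endpoint, which is OpenAI-compatible. A minimal, hedged sketch with the `openai` Python client follows; the model slug and the `get_weather` tool are illustrative assumptions, so consult the API docs linked above for the authoritative names.

```python
# Hedged sketch: calling FireFunction V2 through Fireworks' OpenAI-compatible
# endpoint. The model slug and tool schema below are assumptions for
# illustration, not taken from this card.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",
    api_key=os.environ["FIREWORKS_API_KEY"],
)

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool for illustration
        "description": "Get the current weather in a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="accounts/fireworks/models/firefunction-v2",  # assumed slug
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)
print(response.choices[0].message.tool_calls)
```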
MaziyarPanahi/Yi-Coder-1.5B-Chat-GGUF
MaziyarPanahi
"2024-09-04T14:38:02Z"
1,243,521
4
null
[ "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "text-generation", "base_model:01-ai/Yi-Coder-1.5B-Chat", "base_model:quantized:01-ai/Yi-Coder-1.5B-Chat", "region:us", "imatrix", "conversational" ]
text-generation
"2024-09-04T14:24:50Z"
--- tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - text-generation - text-generation model_name: Yi-Coder-1.5B-Chat-GGUF base_model: 01-ai/Yi-Coder-1.5B-Chat inference: false model_creator: 01-ai pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/Yi-Coder-1.5B-Chat-GGUF](https://huggingface.co/MaziyarPanahi/Yi-Coder-1.5B-Chat-GGUF) - Model creator: [01-ai](https://huggingface.co/01-ai) - Original model: [01-ai/Yi-Coder-1.5B-Chat](https://huggingface.co/01-ai/Yi-Coder-1.5B-Chat) ## Description [MaziyarPanahi/Yi-Coder-1.5B-Chat-GGUF](https://huggingface.co/MaziyarPanahi/Yi-Coder-1.5B-Chat-GGUF) contains GGUF format model files for [01-ai/Yi-Coder-1.5B-Chat](https://huggingface.co/01-ai/Yi-Coder-1.5B-Chat). ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. ## Special thanks 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
MaziyarPanahi/Llama-3-8B-Instruct-64k-GGUF
MaziyarPanahi
"2024-04-25T19:58:11Z"
1,243,118
12
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "text-generation", "llama", "llama-3", "base_model:MaziyarPanahi/Llama-3-8B-Instruct-64k", "base_model:quantized:MaziyarPanahi/Llama-3-8B-Instruct-64k", "region:us", "conversational" ]
text-generation
"2024-04-25T19:22:27Z"
--- tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - text-generation - llama - llama-3 - text-generation model_name: Llama-3-8B-Instruct-64k-GGUF base_model: MaziyarPanahi/Llama-3-8B-Instruct-64k inference: false model_creator: MaziyarPanahi pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/Llama-3-8B-Instruct-64k-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-64k-GGUF) - Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi) - Original model: [MaziyarPanahi/Llama-3-8B-Instruct-64k](https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-64k) ## Description [MaziyarPanahi/Llama-3-8B-Instruct-64k-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-64k-GGUF) contains GGUF format model files for [MaziyarPanahi/Llama-3-8B-Instruct-64k](https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-64k). ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. ## Special thanks 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
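For a programmatic route, one of the clients listed above, llama-cpp-python, can load these quants directly. A minimal sketch follows; the quant filename below is an assumption, so check the repository's file list first.

```python
# Hedged sketch: loading one of the GGUF quants with llama-cpp-python.
# The filename is an assumption -- list the repo files and pick a quant
# that actually exists.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="MaziyarPanahi/Llama-3-8B-Instruct-64k-GGUF",
    filename="Llama-3-8B-Instruct-64k.Q4_K_M.gguf",  # assumed quant name
)
llm = Llama(model_path=path, n_ctx=8192)  # the card advertises 64k; raise n_ctx if RAM allows
out = llm("Q: What is the capital of France? A:", max_tokens=32, stop=["\n"])
print(out["choices"][0]["text"])
```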
google/flan-t5-large
google
"2023-07-17T12:49:05Z"
1,242,146
604
transformers
[ "transformers", "pytorch", "tf", "jax", "safetensors", "t5", "text2text-generation", "en", "fr", "ro", "de", "multilingual", "dataset:svakulenk0/qrecc", "dataset:taskmaster2", "dataset:djaym7/wiki_dialog", "dataset:deepmind/code_contests", "dataset:lambada", "dataset:gsm8k", "dataset:aqua_rat", "dataset:esnli", "dataset:quasc", "dataset:qed", "arxiv:2210.11416", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
"2022-10-21T10:07:08Z"
---
language:
- en
- fr
- ro
- de
- multilingual

widget:
- text: "Translate to German: My name is Arthur"
  example_title: "Translation"
- text: "Please answer to the following question. Who is going to be the next Ballon d'or?"
  example_title: "Question Answering"
- text: "Q: Can Geoffrey Hinton have a conversation with George Washington? Give the rationale before answering."
  example_title: "Logical reasoning"
- text: "Please answer the following question. What is the boiling point of Nitrogen?"
  example_title: "Scientific knowledge"
- text: "Answer the following yes/no question. Can you write a whole Haiku in a single tweet?"
  example_title: "Yes/no question"
- text: "Answer the following yes/no question by reasoning step-by-step. Can you write a whole Haiku in a single tweet?"
  example_title: "Reasoning task"
- text: "Q: ( False or not False or False ) is? A: Let's think step by step"
  example_title: "Boolean Expressions"
- text: "The square root of x is the cube root of y. What is y to the power of 2, if x = 4?"
  example_title: "Math reasoning"
- text: "Premise: At my age you will probably have learnt one lesson. Hypothesis: It's not certain how many lessons you'll learn by your thirties. Does the premise entail the hypothesis?"
  example_title: "Premise and hypothesis"

tags:
- text2text-generation

datasets:
- svakulenk0/qrecc
- taskmaster2
- djaym7/wiki_dialog
- deepmind/code_contests
- lambada
- gsm8k
- aqua_rat
- esnli
- quasc
- qed

license: apache-2.0
---

# Model Card for FLAN-T5 large

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/flan2_architecture.jpg" alt="drawing" width="600"/>

# Table of Contents

0. [TL;DR](#TL;DR)
1. [Model Details](#model-details)
2. [Usage](#usage)
3. [Uses](#uses)
4. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
5. [Training Details](#training-details)
6. [Evaluation](#evaluation)
7. [Environmental Impact](#environmental-impact)
8. [Citation](#citation)
9. [Model Card Authors](#model-card-authors)

# TL;DR

If you already know T5, FLAN-T5 is just better at everything. For the same number of parameters, these models have been fine-tuned on more than 1000 additional tasks, covering more languages as well. As mentioned in the first few lines of the abstract:

> Flan-PaLM 540B achieves state-of-the-art performance on several benchmarks, such as 75.2% on five-shot MMLU. We also publicly release Flan-T5 checkpoints, which achieve strong few-shot performance even compared to much larger models, such as PaLM 62B. Overall, instruction finetuning is a general method for improving the performance and usability of pretrained language models.

**Disclaimer**: Content from **this** model card has been written by the Hugging Face team, and parts of it were copy-pasted from the [T5 model card](https://huggingface.co/t5-large).
# Model Details

## Model Description

- **Model type:** Language model
- **Language(s) (NLP):** English, Spanish, Japanese, Persian, Hindi, French, Chinese, Bengali, Gujarati, German, Telugu, Italian, Arabic, Polish, Tamil, Marathi, Malayalam, Oriya, Panjabi, Portuguese, Urdu, Galician, Hebrew, Korean, Catalan, Thai, Dutch, Indonesian, Vietnamese, Bulgarian, Filipino, Central Khmer, Lao, Turkish, Russian, Croatian, Swedish, Yoruba, Kurdish, Burmese, Malay, Czech, Finnish, Somali, Tagalog, Swahili, Sinhala, Kannada, Zhuang, Igbo, Xhosa, Romanian, Haitian, Estonian, Slovak, Lithuanian, Greek, Nepali, Assamese, Norwegian
- **License:** Apache 2.0
- **Related Models:** [All FLAN-T5 Checkpoints](https://huggingface.co/models?search=flan-t5)
- **Original Checkpoints:** [All Original FLAN-T5 Checkpoints](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-t5-checkpoints)
- **Resources for more information:**
  - [Research paper](https://arxiv.org/pdf/2210.11416.pdf)
  - [GitHub Repo](https://github.com/google-research/t5x)
  - [Hugging Face FLAN-T5 Docs (Similar to T5)](https://huggingface.co/docs/transformers/model_doc/t5)

# Usage

Find below some example scripts on how to use the model in `transformers`:

## Using the Pytorch model

### Running the model on a CPU

<details>
<summary> Click to expand </summary>

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-large")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-large")

input_text = "translate English to German: How old are you?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids

outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```

</details>

### Running the model on a GPU

<details>
<summary> Click to expand </summary>

```python
# pip install accelerate
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-large")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-large", device_map="auto")

input_text = "translate English to German: How old are you?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")

outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```

</details>

### Running the model on a GPU using different precisions

#### FP16

<details>
<summary> Click to expand </summary>

```python
# pip install accelerate
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-large")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-large", device_map="auto", torch_dtype=torch.float16)

input_text = "translate English to German: How old are you?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")

outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```

</details>

#### INT8

<details>
<summary> Click to expand </summary>

```python
# pip install bitsandbytes accelerate
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-large")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-large", device_map="auto", load_in_8bit=True)

input_text = "translate English to German: How old are you?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")

outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```

</details>

# Uses

## Direct Use and Downstream Use

The authors write in [the original paper's model card](https://arxiv.org/pdf/2210.11416.pdf) that:

> The primary use is research on language models, including: research on zero-shot NLP tasks and in-context few-shot learning NLP tasks, such as reasoning, and question answering; advancing fairness and safety research, and understanding limitations of current large language models

See the [research paper](https://arxiv.org/pdf/2210.11416.pdf) for further details.

## Out-of-Scope Use

More information needed.

# Bias, Risks, and Limitations

The information in this section is copied from the model's [official model card](https://arxiv.org/pdf/2210.11416.pdf):

> Language models, including Flan-T5, can potentially be used for language generation in a harmful way, according to Rae et al. (2021). Flan-T5 should not be used directly in any application, without a prior assessment of safety and fairness concerns specific to the application.

## Ethical considerations and risks

> Flan-T5 is fine-tuned on a large corpus of text data that was not filtered for explicit content or assessed for existing biases. As a result the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data.

## Known Limitations

> Flan-T5 has not been tested in real world applications.

## Sensitive Use:

> Flan-T5 should not be applied for any unacceptable use cases, e.g., generation of abusive speech.

# Training Details

## Training Data

The model was trained on a mixture of tasks that includes the tasks described in the table below (from the original paper, figure 2):

![table.png](https://s3.amazonaws.com/moonup/production/uploads/1666363265279-62441d1d9fdefb55a0b7d12c.png)

## Training Procedure

According to the model card from the [original paper](https://arxiv.org/pdf/2210.11416.pdf):

> These models are based on pretrained T5 (Raffel et al., 2020) and fine-tuned with instructions for better zero-shot and few-shot performance. There is one fine-tuned Flan model per T5 model size.

The model has been trained on TPU v3 or TPU v4 pods, using the [`t5x`](https://github.com/google-research/t5x) codebase together with [`jax`](https://github.com/google/jax).

# Evaluation

## Testing Data, Factors & Metrics

The authors evaluated the model on various tasks covering several languages (1836 in total). See the table below for some quantitative evaluation:

![image.png](https://s3.amazonaws.com/moonup/production/uploads/1668072995230-62441d1d9fdefb55a0b7d12c.png)

For full details, please check the [research paper](https://arxiv.org/pdf/2210.11416.pdf).

## Results

For full results for FLAN-T5-Large, see the [research paper](https://arxiv.org/pdf/2210.11416.pdf), Table 3.

# Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** Google Cloud TPU Pods - TPU v3 or TPU v4 | Number of chips ≥ 4
- **Hours used:** More information needed
- **Cloud Provider:** GCP
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed

# Citation

**BibTeX:**

```bibtex
@misc{https://doi.org/10.48550/arxiv.2210.11416,
  doi = {10.48550/ARXIV.2210.11416},
  url = {https://arxiv.org/abs/2210.11416},
  author = {Chung, Hyung Won and Hou, Le and Longpre, Shayne and Zoph, Barret and Tay, Yi and Fedus, William and Li, Eric and Wang, Xuezhi and Dehghani, Mostafa and Brahma, Siddhartha and Webson, Albert and Gu, Shixiang Shane and Dai, Zhuyun and Suzgun, Mirac and Chen, Xinyun and Chowdhery, Aakanksha and Narang, Sharan and Mishra, Gaurav and Yu, Adams and Zhao, Vincent and Huang, Yanping and Dai, Andrew and Yu, Hongkun and Petrov, Slav and Chi, Ed H. and Dean, Jeff and Devlin, Jacob and Roberts, Adam and Zhou, Denny and Le, Quoc V. and Wei, Jason},
  keywords = {Machine Learning (cs.LG), Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
  title = {Scaling Instruction-Finetuned Language Models},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}
```
sentence-transformers/all-roberta-large-v1
sentence-transformers
"2024-11-05T15:36:29Z"
1,220,546
55
sentence-transformers
[ "sentence-transformers", "pytorch", "onnx", "safetensors", "openvino", "roberta", "fill-mask", "feature-extraction", "sentence-similarity", "transformers", "en", "arxiv:1904.06472", "arxiv:2102.07033", "arxiv:2104.08727", "arxiv:1704.05179", "arxiv:1810.09305", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
"2022-03-02T23:29:05Z"
---
language: en
license: apache-2.0
library_name: sentence-transformers
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
pipeline_tag: sentence-similarity
---

# all-roberta-large-v1

This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for tasks like clustering or semantic search.

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('sentence-transformers/all-roberta-large-v1')
embeddings = model.encode(sentences)
print(embeddings)
```

## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F

# Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-roberta-large-v1')
model = AutoModel.from_pretrained('sentence-transformers/all-roberta-large-v1')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

# Normalize embeddings
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)

print("Sentence embeddings:")
print(sentence_embeddings)
```

## Evaluation Results

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/all-roberta-large-v1)

------

## Background

The project aims to train sentence embedding models on very large sentence-level datasets using a self-supervised contrastive learning objective. We used the pretrained [`roberta-large`](https://huggingface.co/roberta-large) model and fine-tuned it on a dataset of 1B sentence pairs. We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences was actually paired with it in our dataset.

We developed this model during the [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by Hugging Face.
We developed this model as part of the project: [Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPU v3-8s, as well as guidance from Google's Flax, JAX, and Cloud team members about efficient deep learning frameworks.

## Intended uses

Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks. By default, input text longer than 128 word pieces is truncated.

## Training procedure

### Pre-training

We use the pretrained [`roberta-large`](https://huggingface.co/roberta-large) model. Please refer to its model card for more detailed information about the pre-training procedure.

### Fine-tuning

We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity between each possible sentence pair in the batch, then apply a cross-entropy loss against the true pairings (a minimal sketch of this objective follows the training data table below).

#### Hyperparameters

We trained our model on a TPU v3-8. We trained the model for 400k steps using a batch size of 256 (32 per TPU core). We used a learning rate warm-up of 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with a 2e-5 learning rate. The full training script is accessible in this current repository: `train_script.py`.

#### Training data

We use the concatenation of multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion. We sampled each dataset with a weighted probability, the configuration of which is detailed in the `data_config.json` file.
| Dataset | Paper | Number of training tuples |
|--------------------------------------------------------|:----------------------------------------:|:--------------------------:|
| [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 |
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 |
| [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 |
| [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 |
| [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 |
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 |
| [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 |
| [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395 |
| [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 |
| [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 |
| [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 |
| [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 |
| AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/)) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 |
| [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 |
| [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 |
| [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 |
| [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 |
| [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 |
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 |
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 |
| [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 |
| **Total** | | **1,124,818,467** |
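The in-batch contrastive objective described above is compact enough to write down. Here is a minimal sketch (not the repository's `train_script.py`; the similarity scale factor is an assumption):

```python
# Hedged sketch of the in-batch contrastive objective: cosine similarity
# between every pair in the batch, cross-entropy against the true
# (diagonal) pairings. The scale factor is an assumption; see
# train_script.py in this repository for the authoritative setup.
import torch
import torch.nn.functional as F

def contrastive_loss(anchor_emb, positive_emb, scale=20.0):
    # anchor_emb, positive_emb: (batch, dim) embeddings of paired sentences
    a = F.normalize(anchor_emb, p=2, dim=1)
    b = F.normalize(positive_emb, p=2, dim=1)
    scores = a @ b.T * scale  # (batch, batch) scaled cosine similarities
    labels = torch.arange(len(scores), device=scores.device)  # true pair = diagonal
    return F.cross_entropy(scores, labels)

loss = contrastive_loss(torch.randn(8, 1024), torch.randn(8, 1024))
```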
Qwen/Qwen2-VL-7B-Instruct
Qwen
"2024-09-21T08:38:21Z"
1,220,349
813
transformers
[ "transformers", "safetensors", "qwen2_vl", "image-text-to-text", "multimodal", "conversational", "en", "arxiv:2409.12191", "arxiv:2308.12966", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-text-to-text
"2024-08-28T09:03:13Z"
--- license: apache-2.0 language: - en pipeline_tag: image-text-to-text tags: - multimodal library_name: transformers --- # Qwen2-VL-7B-Instruct ## Introduction We're excited to unveil **Qwen2-VL**, the latest iteration of our Qwen-VL model, representing nearly a year of innovation. ### What’s New in Qwen2-VL? #### Key Enhancements: * **SoTA understanding of images of various resolution & ratio**: Qwen2-VL achieves state-of-the-art performance on visual understanding benchmarks, including MathVista, DocVQA, RealWorldQA, MTVQA, etc. * **Understanding videos of 20min+**: Qwen2-VL can understand videos over 20 minutes for high-quality video-based question answering, dialog, content creation, etc. * **Agent that can operate your mobiles, robots, etc.**: with the abilities of complex reasoning and decision making, Qwen2-VL can be integrated with devices like mobile phones, robots, etc., for automatic operation based on visual environment and text instructions. * **Multilingual Support**: to serve global users, besides English and Chinese, Qwen2-VL now supports the understanding of texts in different languages inside images, including most European languages, Japanese, Korean, Arabic, Vietnamese, etc. #### Model Architecture Updates: * **Naive Dynamic Resolution**: Unlike before, Qwen2-VL can handle arbitrary image resolutions, mapping them into a dynamic number of visual tokens, offering a more human-like visual processing experience. <p align="center"> <img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2-VL/qwen2_vl.jpg" width="80%"/> <p> * **Multimodal Rotary Position Embedding (M-ROPE)**: Decomposes positional embedding into parts to capture 1D textual, 2D visual, and 3D video positional information, enhancing its multimodal processing capabilities. <p align="center"> <img src="http://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2-VL/mrope.png" width="80%"/> <p> We have three models with 2, 7 and 72 billion parameters. This repo contains the instruction-tuned 7B Qwen2-VL model. For more information, visit our [Blog](https://qwenlm.github.io/blog/qwen2-vl/) and [GitHub](https://github.com/QwenLM/Qwen2-VL). 
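The "naive dynamic resolution" bullet above maps each image to a dynamic number of visual tokens. As a rough, hedged sketch of the arithmetic, assuming one visual token per 28x28-pixel area (consistent with the `min_pixels = 256 * 28 * 28` style arguments shown in the Quickstart below):

```python
# Hedged sketch (not from the original card): approximate visual token
# count under dynamic resolution, assuming one token per 28x28-pixel area.
# The card states the default range is 4-16384 tokens per image.
def approx_visual_tokens(height: int, width: int) -> int:
    tokens = (height // 28) * (width // 28)
    return max(4, min(tokens, 16384))  # clamp to the documented default range

print(approx_visual_tokens(560, 840))  # -> 600
```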
## Evaluation ### Image Benchmarks | Benchmark | InternVL2-8B | MiniCPM-V 2.6 | GPT-4o-mini | **Qwen2-VL-7B** | | :--- | :---: | :---: | :---: | :---: | | MMMU<sub>val</sub> | 51.8 | 49.8 | **60**| 54.1 | | DocVQA<sub>test</sub> | 91.6 | 90.8 | - | **94.5** | | InfoVQA<sub>test</sub> | 74.8 | - | - |**76.5** | | ChartQA<sub>test</sub> | **83.3** | - |- | 83.0 | | TextVQA<sub>val</sub> | 77.4 | 80.1 | -| **84.3** | | OCRBench | 794 | **852** | 785 | 845 | | MTVQA | - | - | -| **26.3** | | VCR<sub>en easy</sub> | - | 73.88 | 83.60 | **89.70** | | VCR<sub>zh easy</sub> | - | 10.18| 1.10 | **59.94** | | RealWorldQA | 64.4 | - | - | **70.1** | | MME<sub>sum</sub> | 2210.3 | **2348.4** | 2003.4| 2326.8 | | MMBench-EN<sub>test</sub> | 81.7 | - | - | **83.0** | | MMBench-CN<sub>test</sub> | **81.2** | - | - | 80.5 | | MMBench-V1.1<sub>test</sub> | 79.4 | 78.0 | 76.0| **80.7** | | MMT-Bench<sub>test</sub> | - | - | - |**63.7** | | MMStar | **61.5** | 57.5 | 54.8 | 60.7 | | MMVet<sub>GPT-4-Turbo</sub> | 54.2 | 60.0 | **66.9** | 62.0 | | HallBench<sub>avg</sub> | 45.2 | 48.1 | 46.1| **50.6** | | MathVista<sub>testmini</sub> | 58.3 | **60.6** | 52.4 | 58.2 | | MathVision | - | - | - | **16.3** | ### Video Benchmarks | Benchmark | Internvl2-8B | LLaVA-OneVision-7B | MiniCPM-V 2.6 | **Qwen2-VL-7B** | | :--- | :---: | :---: | :---: | :---: | | MVBench | 66.4 | 56.7 | - | **67.0** | | PerceptionTest<sub>test</sub> | - | 57.1 | - | **62.3** | | EgoSchema<sub>test</sub> | - | 60.1 | - | **66.7** | | Video-MME<sub>wo/w subs</sub> | 54.0/56.9 | 58.2/- | 60.9/63.6 | **63.3**/**69.0** | ## Requirements The code of Qwen2-VL has been in the latest Hugging face transformers and we advise you to build from source with command `pip install git+https://github.com/huggingface/transformers`, or you might encounter the following error: ``` KeyError: 'qwen2_vl' ``` ## Quickstart We offer a toolkit to help you handle various types of visual input more conveniently. This includes base64, URLs, and interleaved images and videos. You can install it using the following command: ```bash pip install qwen-vl-utils ``` Here we show a code snippet to show you how to use the chat model with `transformers` and `qwen_vl_utils`: ```python from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor from qwen_vl_utils import process_vision_info # default: Load the model on the available device(s) model = Qwen2VLForConditionalGeneration.from_pretrained( "Qwen/Qwen2-VL-7B-Instruct", torch_dtype="auto", device_map="auto" ) # We recommend enabling flash_attention_2 for better acceleration and memory saving, especially in multi-image and video scenarios. # model = Qwen2VLForConditionalGeneration.from_pretrained( # "Qwen/Qwen2-VL-7B-Instruct", # torch_dtype=torch.bfloat16, # attn_implementation="flash_attention_2", # device_map="auto", # ) # default processer processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct") # The default range for the number of visual tokens per image in the model is 4-16384. You can set min_pixels and max_pixels according to your needs, such as a token count range of 256-1280, to balance speed and memory usage. 
# min_pixels = 256*28*28 # max_pixels = 1280*28*28 # processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels) messages = [ { "role": "user", "content": [ { "type": "image", "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg", }, {"type": "text", "text": "Describe this image."}, ], } ] # Preparation for inference text = processor.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) image_inputs, video_inputs = process_vision_info(messages) inputs = processor( text=[text], images=image_inputs, videos=video_inputs, padding=True, return_tensors="pt", ) inputs = inputs.to("cuda") # Inference: Generation of the output generated_ids = model.generate(**inputs, max_new_tokens=128) generated_ids_trimmed = [ out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids) ] output_text = processor.batch_decode( generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False ) print(output_text) ``` <details> <summary>Without qwen_vl_utils</summary> ```python from PIL import Image import requests import torch from torchvision import io from typing import Dict from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor # Load the model in half-precision on the available device(s) model = Qwen2VLForConditionalGeneration.from_pretrained( "Qwen/Qwen2-VL-7B-Instruct", torch_dtype="auto", device_map="auto" ) processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct") # Image url = "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg" image = Image.open(requests.get(url, stream=True).raw) conversation = [ { "role": "user", "content": [ { "type": "image", }, {"type": "text", "text": "Describe this image."}, ], } ] # Preprocess the inputs text_prompt = processor.apply_chat_template(conversation, add_generation_prompt=True) # Excepted output: '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|>Describe this image.<|im_end|>\n<|im_start|>assistant\n' inputs = processor( text=[text_prompt], images=[image], padding=True, return_tensors="pt" ) inputs = inputs.to("cuda") # Inference: Generation of the output output_ids = model.generate(**inputs, max_new_tokens=128) generated_ids = [ output_ids[len(input_ids) :] for input_ids, output_ids in zip(inputs.input_ids, output_ids) ] output_text = processor.batch_decode( generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True ) print(output_text) ``` </details> <details> <summary>Multi image inference</summary> ```python # Messages containing multiple images and a text query messages = [ { "role": "user", "content": [ {"type": "image", "image": "file:///path/to/image1.jpg"}, {"type": "image", "image": "file:///path/to/image2.jpg"}, {"type": "text", "text": "Identify the similarities between these images."}, ], } ] # Preparation for inference text = processor.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) image_inputs, video_inputs = process_vision_info(messages) inputs = processor( text=[text], images=image_inputs, videos=video_inputs, padding=True, return_tensors="pt", ) inputs = inputs.to("cuda") # Inference generated_ids = model.generate(**inputs, max_new_tokens=128) generated_ids_trimmed = [ out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids) ] output_text = processor.batch_decode( generated_ids_trimmed, 
    skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```

</details>

<details>
<summary>Video inference</summary>

```python
# Messages containing a list of images as a video and a text query
messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "video",
                "video": [
                    "file:///path/to/frame1.jpg",
                    "file:///path/to/frame2.jpg",
                    "file:///path/to/frame3.jpg",
                    "file:///path/to/frame4.jpg",
                ],
                "fps": 1.0,
            },
            {"type": "text", "text": "Describe this video."},
        ],
    }
]
# Messages containing a video and a text query
messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "video",
                "video": "file:///path/to/video1.mp4",
                "max_pixels": 360 * 420,
                "fps": 1.0,
            },
            {"type": "text", "text": "Describe this video."},
        ],
    }
]

# Preparation for inference
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to("cuda")

# Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```

</details>

<details>
<summary>Batch inference</summary>

```python
# Sample messages for batch inference
messages1 = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "file:///path/to/image1.jpg"},
            {"type": "image", "image": "file:///path/to/image2.jpg"},
            {"type": "text", "text": "What are the common elements in these pictures?"},
        ],
    }
]
messages2 = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who are you?"},
]
# Combine messages for batch processing
messages = [messages1, messages2]

# Preparation for batch inference
texts = [
    processor.apply_chat_template(msg, tokenize=False, add_generation_prompt=True)
    for msg in messages
]
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=texts,
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to("cuda")

# Batch Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_texts = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_texts)
```

</details>

### More Usage Tips

For input images, we support local files, base64, and URLs. For videos, we currently only support local files.

```python
# You can directly insert a local file path, a URL, or a base64-encoded image into the position where you want in the text.
## Local file path
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "file:///path/to/your/image.jpg"},
            {"type": "text", "text": "Describe this image."},
        ],
    }
]
## Image URL
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "http://path/to/your/image.jpg"},
            {"type": "text", "text": "Describe this image."},
        ],
    }
]
## Base64 encoded image
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "data:image;base64,/9j/..."},
            {"type": "text", "text": "Describe this image."},
        ],
    }
]
```

#### Image Resolution for performance boost

The model supports a wide range of resolution inputs.
By default, it uses the native resolution for input, but higher resolutions can enhance performance at the cost of more computation. Users can set the minimum and maximum number of pixels to achieve an optimal configuration for their needs, such as a token count range of 256-1280, to balance speed and memory usage.

```python
min_pixels = 256 * 28 * 28
max_pixels = 1280 * 28 * 28
processor = AutoProcessor.from_pretrained(
    "Qwen/Qwen2-VL-7B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels
)
```

Besides, we provide two methods for fine-grained control over the image size input to the model:

1. Define min_pixels and max_pixels: Images will be resized to maintain their aspect ratio within the range of min_pixels and max_pixels.
2. Specify exact dimensions: Directly set `resized_height` and `resized_width`. These values will be rounded to the nearest multiple of 28.

```python
# resized_height and resized_width
messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "file:///path/to/your/image.jpg",
                "resized_height": 280,
                "resized_width": 420,
            },
            {"type": "text", "text": "Describe this image."},
        ],
    }
]
# min_pixels and max_pixels
messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "file:///path/to/your/image.jpg",
                "min_pixels": 50176,
                "max_pixels": 50176,
            },
            {"type": "text", "text": "Describe this image."},
        ],
    }
]
```

## Limitations

While Qwen2-VL is applicable to a wide range of visual tasks, it is equally important to understand its limitations. Here are some known restrictions:

1. Lack of Audio Support: The current model does **not comprehend audio information** within videos.
2. Data timeliness: Our image dataset is **updated until June 2023**, and information subsequent to this date may not be covered.
3. Constraints in Individuals and Intellectual Property (IP): The model's capacity to recognize specific individuals or IPs is limited, potentially failing to comprehensively cover all well-known personalities or brands.
4. Limited Capacity for Complex Instruction: When faced with intricate multi-step instructions, the model's understanding and execution capabilities require enhancement.
5. Insufficient Counting Accuracy: Particularly in complex scenes, the accuracy of object counting is not high, necessitating further improvements.
6. Weak Spatial Reasoning Skills: Especially in 3D spaces, the model's inference of object positional relationships is inadequate, making it difficult to precisely judge the relative positions of objects.

These limitations serve as ongoing directions for model optimization and improvement, and we are committed to continually enhancing the model's performance and scope of application.

## Citation

If you find our work helpful, feel free to give us a cite.
```
@article{Qwen2VL,
  title={Qwen2-VL: Enhancing Vision-Language Model's Perception of the World at Any Resolution},
  author={Wang, Peng and Bai, Shuai and Tan, Sinan and Wang, Shijie and Fan, Zhihao and Bai, Jinze and Chen, Keqin and Liu, Xuejing and Wang, Jialin and Ge, Wenbin and Fan, Yang and Dang, Kai and Du, Mengfei and Ren, Xuancheng and Men, Rui and Liu, Dayiheng and Zhou, Chang and Zhou, Jingren and Lin, Junyang},
  journal={arXiv preprint arXiv:2409.12191},
  year={2024}
}

@article{Qwen-VL,
  title={Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond},
  author={Bai, Jinze and Bai, Shuai and Yang, Shusheng and Wang, Shijie and Tan, Sinan and Wang, Peng and Lin, Junyang and Zhou, Chang and Zhou, Jingren},
  journal={arXiv preprint arXiv:2308.12966},
  year={2023}
}
```
apple/OpenELM-1_1B-Instruct
apple
"2024-07-18T20:48:10Z"
1,218,643
57
transformers
[ "transformers", "safetensors", "openelm", "text-generation", "custom_code", "arxiv:2404.14619", "license:other", "autotrain_compatible", "region:us" ]
text-generation
"2024-04-12T21:52:12Z"
---
license: other
license_name: apple-sample-code-license
license_link: LICENSE
---

# OpenELM

*Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari*

We introduce **OpenELM**, a family of **Open** **E**fficient **L**anguage **M**odels. OpenELM uses a layer-wise scaling strategy to efficiently allocate parameters within each layer of the transformer model, leading to enhanced accuracy. We pretrained OpenELM models using the [CoreNet](https://github.com/apple/corenet) library. We release both pretrained and instruction-tuned models with 270M, 450M, 1.1B and 3B parameters.

We release the complete framework, encompassing data preparation, training, fine-tuning, and evaluation procedures, alongside multiple pre-trained checkpoints and training logs, to facilitate open research.

Our pre-training dataset contains RefinedWeb, deduplicated PILE, a subset of RedPajama, and a subset of Dolma v1.6, totaling approximately 1.8 trillion tokens. Please check the license agreements and terms of these datasets before using them.

## Usage

We have provided an example function to generate output from OpenELM models loaded via the [HuggingFace Hub](https://huggingface.co/docs/hub/) in `generate_openelm.py` (a minimal `transformers` loading sketch also appears just before the citation below).

You can try the model by running the following command:

```
python generate_openelm.py --model apple/OpenELM-1_1B-Instruct --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2
```

Please refer to [this link](https://huggingface.co/docs/hub/security-tokens) to obtain your Hugging Face access token.

Additional arguments to the Hugging Face `generate` function can be passed via `generate_kwargs`. As an example, to speed up inference you can try [lookup token speculative generation](https://huggingface.co/docs/transformers/generation_strategies) by passing the `prompt_lookup_num_tokens` argument, as follows:

```
python generate_openelm.py --model apple/OpenELM-1_1B-Instruct --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2 prompt_lookup_num_tokens=10
```

Alternatively, try model-wise speculative generation with an [assistive model](https://huggingface.co/blog/assisted-generation) by passing a smaller model through the `assistant_model` argument, for example:

```
python generate_openelm.py --model apple/OpenELM-1_1B-Instruct --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2 --assistant_model [SMALLER_MODEL]
```

## Main Results

### Zero-Shot

| **Model Size** | **ARC-c** | **ARC-e** | **BoolQ** | **HellaSwag** | **PIQA** | **SciQ** | **WinoGrande** | **Average** |
|---|---|---|---|---|---|---|---|---|
| [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) | 26.45 | 45.08 | **53.98** | 46.71 | 69.75 | **84.70** | **53.91** | 54.37 |
| [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **30.55** | **46.68** | 48.56 | **52.07** | **70.78** | 84.40 | 52.72 | **55.11** |
| [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) | 27.56 | 48.06 | 55.78 | 53.97 | 72.31 | 87.20 | 58.01 | 57.56 |
| [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **30.38** | **50.00** | **60.37** | **59.34** | **72.63** | **88.00** | **58.96** | **59.95** |
| [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B) | 32.34 | **55.43** | 63.58 | 64.81 | **75.57** | **90.60** | 61.72 | 63.44 |
| [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **37.97** | 52.23 | **70.00** | **71.20** | 75.03 | 89.30 | **62.75** | **65.50** |
| [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) | 35.58 | 59.89 | 67.40 | 72.44 | 78.24 | **92.70** | 65.51 | 67.39 |
| [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct) | **39.42** | **61.74** | **68.17** | **76.36** | **79.00** | 92.50 | **66.85** | **69.15** |

### LLM360

| **Model Size** | **ARC-c** | **HellaSwag** | **MMLU** | **TruthfulQA** | **WinoGrande** | **Average** |
|---|---|---|---|---|---|---|
| [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) | 27.65 | 47.15 | 25.72 | **39.24** | **53.83** | 38.72 |
| [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **32.51** | **51.58** | **26.70** | 38.72 | 53.20 | **40.54** |
| [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) | 30.20 | 53.86 | **26.01** | 40.18 | 57.22 | 41.50 |
| [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **33.53** | **59.31** | 25.41 | **40.48** | **58.33** | **43.41** |
| [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B) | 36.69 | 65.71 | **27.05** | 36.98 | 63.22 | 45.93 |
| [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **41.55** | **71.83** | 25.65 | **45.95** | **64.72** | **49.94** |
| [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) | 42.24 | 73.28 | **26.76** | 34.98 | 67.25 | 48.90 |
| [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct) | **47.70** | **76.87** | 24.80 | **38.76** | **67.96** | **51.22** |

### OpenLLM Leaderboard

| **Model Size** | **ARC-c** | **CrowS-Pairs** | **HellaSwag** | **MMLU** | **PIQA** | **RACE** | **TruthfulQA** | **WinoGrande** | **Average** |
|---|---|---|---|---|---|---|---|---|---|
| [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) | 27.65 | **66.79** | 47.15 | 25.72 | 69.75 | 30.91 | **39.24** | **53.83** | 45.13 |
| [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **32.51** | 66.01 | **51.58** | **26.70** | **70.78** | 33.78 | 38.72 | 53.20 | **46.66** |
| [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) | 30.20 | **68.63** | 53.86 | **26.01** | 72.31 | 33.11 | 40.18 | 57.22 | 47.69 |
| [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **33.53** | 67.44 | **59.31** | 25.41 | **72.63** | **36.84** | **40.48** | **58.33** | **49.25** |
| [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B) | 36.69 | **71.74** | 65.71 | **27.05** | **75.57** | 36.46 | 36.98 | 63.22 | 51.68 |
| [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **41.55** | 71.02 | **71.83** | 25.65 | 75.03 | **39.43** | **45.95** | **64.72** | **54.40** |
| [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) | 42.24 | **73.29** | 73.28 | **26.76** | 78.24 | **38.76** | 34.98 | 67.25 | 54.35 |
| [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct) | **47.70** | 72.33 | **76.87** | 24.80 | **79.00** | 38.47 | **38.76** | **67.96** | **55.73** |

See the technical report for more results and comparisons.

## Evaluation

### Setup

Install the following dependencies:

```bash
# install public lm-eval-harness
harness_repo="public-lm-eval-harness"
git clone https://github.com/EleutherAI/lm-evaluation-harness ${harness_repo}
cd ${harness_repo}
# use main branch on 03-15-2024, SHA is dc90fec
git checkout dc90fec
pip install -e .
cd ..

# 66d6242 is the main branch on 2024-04-01
pip install datasets@git+https://github.com/huggingface/datasets.git@66d6242
pip install tokenizers>=0.15.2 transformers>=4.38.2 sentencepiece>=0.2.0
```

### Evaluate OpenELM

```bash
# OpenELM-1_1B-Instruct
hf_model=apple/OpenELM-1_1B-Instruct

# this flag is needed because lm-eval-harness sets add_bos_token to False by default,
# but OpenELM uses the LLaMA tokenizer, which requires add_bos_token to be True
tokenizer=meta-llama/Llama-2-7b-hf
add_bos_token=True
batch_size=1

mkdir lm_eval_output

shot=0
task=arc_challenge,arc_easy,boolq,hellaswag,piqa,race,winogrande,sciq,truthfulqa_mc2
lm_eval --model hf \
        --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
        --tasks ${task} \
        --device cuda:0 \
        --num_fewshot ${shot} \
        --output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
        --batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log

shot=5
task=mmlu,winogrande
lm_eval --model hf \
        --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
        --tasks ${task} \
        --device cuda:0 \
        --num_fewshot ${shot} \
        --output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
        --batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log

shot=25
task=arc_challenge,crows_pairs_english
lm_eval --model hf \
        --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
        --tasks ${task} \
        --device cuda:0 \
        --num_fewshot ${shot} \
        --output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
        --batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log

shot=10
task=hellaswag
lm_eval --model hf \
        --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \
        --tasks ${task} \
        --device cuda:0 \
        --num_fewshot ${shot} \
        --output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
        --batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log
```

## Bias, Risks, and Limitations

The release of OpenELM models aims to empower and enrich the open research community by providing access to state-of-the-art language models. Trained on publicly available datasets, these models are made available without any safety guarantees. Consequently, there exists the possibility of these models producing outputs that are inaccurate, harmful, biased, or objectionable in response to user prompts. Thus, it is imperative for users and developers to undertake thorough safety testing and implement appropriate filtering mechanisms tailored to their specific requirements.
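For users who prefer to load the checkpoint directly rather than through `generate_openelm.py`, the following minimal sketch is an editorial addition (not part of the original release). It assumes `trust_remote_code=True`, since the repo ships custom modeling code, and reuses the gated Llama-2 tokenizer, mirroring the evaluation commands above:

```python
# Minimal sketch (not from the original card): direct loading via transformers.
# Assumptions: the repo's custom modeling code is trusted (trust_remote_code=True)
# and access to the gated meta-llama/Llama-2-7b-hf tokenizer has been granted.
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "apple/OpenELM-1_1B-Instruct", trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

inputs = tokenizer("Once upon a time there was", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, repetition_penalty=1.2)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```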
## Citation

If you find our work useful, please cite:

```BibTex
@article{mehtaOpenELMEfficientLanguage2024,
  title = {{OpenELM}: {An} {Efficient} {Language} {Model} {Family} with {Open} {Training} and {Inference} {Framework}},
  shorttitle = {{OpenELM}},
  url = {https://arxiv.org/abs/2404.14619v1},
  language = {en},
  urldate = {2024-04-24},
  journal = {arXiv.org},
  author = {Mehta, Sachin and Sekhavat, Mohammad Hossein and Cao, Qingqing and Horton, Maxwell and Jin, Yanzi and Sun, Chenfan and Mirzadeh, Iman and Najibi, Mahyar and Belenko, Dmitry and Zatloukal, Peter and Rastegari, Mohammad},
  month = apr,
  year = {2024},
}

@inproceedings{mehta2022cvnets,
  author = {Mehta, Sachin and Abdolhosseini, Farzad and Rastegari, Mohammad},
  title = {CVNets: High Performance Library for Computer Vision},
  year = {2022},
  booktitle = {Proceedings of the 30th ACM International Conference on Multimedia},
  series = {MM '22}
}
```
meta-llama/Llama-2-7b-hf
meta-llama
"2024-04-17T08:40:16Z"
1,212,522
1,787
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "facebook", "meta", "llama-2", "en", "arxiv:2307.09288", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2023-07-13T16:16:13Z"
--- extra_gated_heading: You need to share contact information with Meta to access this model extra_gated_prompt: >- ### LLAMA 2 COMMUNITY LICENSE AGREEMENT "Agreement" means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein. "Documentation" means the specifications, manuals and documentation accompanying Llama 2 distributed by Meta at https://ai.meta.com/resources/models-and-libraries/llama-downloads/. "Licensee" or "you" means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity's behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf. "Llama 2" means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at ai.meta.com/resources/models-and-libraries/llama-downloads/. "Llama Materials" means, collectively, Meta's proprietary Llama 2 and documentation (and any portion thereof) made available under this Agreement. "Meta" or "we" means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland). By clicking "I Accept" below or by using or distributing any portion or element of the Llama Materials, you agree to be bound by this Agreement. 1. License Rights and Redistribution. a. Grant of Rights. You are granted a non-exclusive, worldwide, non- transferable and royalty-free limited license under Meta's intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials. b. Redistribution and Use. i. If you distribute or make the Llama Materials, or any derivative works thereof, available to a third party, you shall provide a copy of this Agreement to such third party. ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you. iii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a "Notice" text file distributed as a part of such copies: "Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved." iv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://ai.meta.com/llama/use-policy), which is hereby incorporated by reference into this Agreement. v. You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Llama 2 or derivative works thereof). 2. Additional Commercial Terms. 
If, on the Llama 2 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee's affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights. 3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS. 4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING. 5. Intellectual Property. a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials. b. Subject to Meta's ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications. c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 2 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials. 6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement. 7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. 
The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement. ### Llama 2 Acceptable Use Policy Meta is committed to promoting safe and fair use of its tools and features, including Llama 2. If you access or use Llama 2, you agree to this Acceptable Use Policy (“Policy”). The most recent copy of this policy can be found at [ai.meta.com/llama/use-policy](http://ai.meta.com/llama/use-policy). #### Prohibited Uses We want everyone to use Llama 2 safely and responsibly. You agree you will not use, or allow others to use, Llama 2 to: 1. Violate the law or others’ rights, including to: 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as: 1. Violence or terrorism 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material 3. Human trafficking, exploitation, and sexual violence 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials. 5. Sexual solicitation 6. Any other criminal activity 2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals 3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services 4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices 5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws 6. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama 2 Materials 7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system 2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 2 related to the following: 1. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State 2. Guns and illegal weapons (including weapon development) 3. Illegal drugs and regulated/controlled substances 4. Operation of critical infrastructure, transportation technologies, or heavy machinery 5. Self-harm or harm to others, including suicide, cutting, and eating disorders 6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual 3. Intentionally deceive or mislead others, including use of Llama 2 related to the following: 1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation 2. 
Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content 3. Generating, promoting, or further distributing spam 4. Impersonating another individual without consent, authorization, or legal right 5. Representing that the use of Llama 2 or outputs are human-generated 6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement 4. Fail to appropriately disclose to end users any known dangers of your AI system Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation of this Policy through one of the following means: * Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama) * Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback) * Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info) * Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama: [LlamaUseReport@meta.com](mailto:LlamaUseReport@meta.com) extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text geo: ip_location By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox extra_gated_description: >- The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit language: - en pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-2 license: llama2 --- # **Llama 2** Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B pretrained model, converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom. ## Model Details *Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.* Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM. **Model Developers** Meta **Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations. **Input** Models input text only. **Output** Models generate text only. **Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety. 
||Training Data|Params|Context Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of publicly available online data*|7B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|13B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|70B|4k|&#10004;|2.0T|1.5 x 10<sup>-4</sup>|

*Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch size of 4M tokens. The bigger 70B model uses Grouped-Query Attention (GQA) for improved inference scalability.

**Model Dates** Llama 2 was trained between January 2023 and July 2023.

**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.

**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)

**Research Paper** ["Llama-2: Open Foundation and Fine-tuned Chat Models"](https://arxiv.org/abs/2307.09288)

## Intended Use

**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks (a minimal generation sketch appears at the end of this card).

To get the expected features and performance for the chat versions, specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and line breaks in between (we recommend calling `strip()` on inputs to avoid double spaces). See our reference code on GitHub for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212).

**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.

## Hardware and Software

**Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.

**Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta's sustainability program.

||Time (GPU hours)|Power Consumption (W)|Carbon Emitted (tCO<sub>2</sub>eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
|Total|3311616||539.00|

**CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used, adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.

## Training Data

**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.

## Evaluation Results

In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.

|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
|---|---|---|---|---|---|---|---|---|---|
|Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
|Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
|Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
|Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
|Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
|Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
|Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|

**Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *Math:* We report the average of the GSM8K (8-shot) and MATH (4-shot) benchmarks at top 1.

|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama 1|7B|27.42|23.00|
|Llama 1|13B|41.74|23.08|
|Llama 1|33B|44.19|22.57|
|Llama 1|65B|48.71|21.77|
|Llama 2|7B|33.29|**21.25**|
|Llama 2|13B|41.86|26.10|
|Llama 2|70B|**50.18**|24.60|

**Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher, the better). For ToxiGen, we present the percentage of toxic generations (the smaller, the better).

|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama-2-Chat|7B|57.04|**0.00**|
|Llama-2-Chat|13B|62.18|**0.00**|
|Llama-2-Chat|70B|**64.14**|0.01|

**Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.

## Ethical Considerations and Limitations

Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 2's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or otherwise objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide/)

## Reporting Issues

Please report any software "bug," or other problems with the models through one of the following means:
- Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
- Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)

## Llama Model Index

|Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf|
|---|---|---|---|---|
|7B| [Link](https://huggingface.co/meta-llama/Llama-2-7b) | [Link](https://huggingface.co/meta-llama/Llama-2-7b-hf) | [Link](https://huggingface.co/meta-llama/Llama-2-7b-chat) | [Link](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)|
|13B| [Link](https://huggingface.co/meta-llama/Llama-2-13b) | [Link](https://huggingface.co/meta-llama/Llama-2-13b-hf) | [Link](https://huggingface.co/meta-llama/Llama-2-13b-chat) | [Link](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf)|
|70B| [Link](https://huggingface.co/meta-llama/Llama-2-70b) | [Link](https://huggingface.co/meta-llama/Llama-2-70b-hf) | [Link](https://huggingface.co/meta-llama/Llama-2-70b-chat) | [Link](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf)|
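This card stops short of a quick-start snippet, so the following minimal sketch is an editorial addition using only the standard `transformers` API. It assumes access to the gated repo has been granted and that `accelerate` is installed (required for `device_map="auto"`):

```python
# Minimal sketch (not from the original card): plain text generation with the
# pretrained (non-chat) 7B checkpoint. Requires granted access to the gated repo.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", torch_dtype=torch.float16, device_map="auto"
)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```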
shibing624/text2vec-base-chinese
shibing624
"2024-04-03T07:03:24Z"
1,206,537
653
sentence-transformers
[ "sentence-transformers", "pytorch", "onnx", "safetensors", "bert", "Sentence Transformers", "sentence-similarity", "zh", "dataset:shibing624/nli_zh", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
"2022-03-02T23:29:05Z"
---
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- Sentence Transformers
- sentence-similarity
- sentence-transformers
datasets:
- shibing624/nli_zh
language:
- zh
library_name: sentence-transformers
---

# shibing624/text2vec-base-chinese

This is a CoSENT (Cosine Sentence) model: shibing624/text2vec-base-chinese.

It maps sentences to a 768-dimensional dense vector space and can be used for tasks like sentence embeddings, text matching or semantic search.

## Evaluation

For an automated evaluation of this model, see the *Evaluation Benchmark*: [text2vec](https://github.com/shibing624/text2vec)

Chinese text matching task:

| Arch | BaseModel | Model | ATEC | BQ | LCQMC | PAWSX | STS-B | SOHU-dd | SOHU-dc | Avg | QPS |
|:---|:---|:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| Word2Vec | word2vec | [w2v-light-tencent-chinese](https://ai.tencent.com/ailab/nlp/en/download.html) | 20.00 | 31.49 | 59.46 | 2.57 | 55.78 | 55.04 | 20.70 | 35.03 | 23769 |
| SBERT | xlm-roberta-base | [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) | 18.42 | 38.52 | 63.96 | 10.14 | 78.90 | 63.01 | 52.28 | 46.46 | 3138 |
| Instructor | hfl/chinese-roberta-wwm-ext | [moka-ai/m3e-base](https://huggingface.co/moka-ai/m3e-base) | 41.27 | 63.81 | 74.87 | 12.20 | 76.96 | 75.83 | 60.55 | 57.93 | 2980 |
| CoSENT | hfl/chinese-macbert-base | [shibing624/text2vec-base-chinese](https://huggingface.co/shibing624/text2vec-base-chinese) | 31.93 | 42.67 | 70.16 | 17.21 | 79.30 | 70.27 | 50.42 | 51.61 | 3008 |
| CoSENT | hfl/chinese-lert-large | [GanymedeNil/text2vec-large-chinese](https://huggingface.co/GanymedeNil/text2vec-large-chinese) | 32.61 | 44.59 | 69.30 | 14.51 | 79.44 | 73.01 | 59.04 | 53.12 | 2092 |
| CoSENT | nghuyong/ernie-3.0-base-zh | [shibing624/text2vec-base-chinese-sentence](https://huggingface.co/shibing624/text2vec-base-chinese-sentence) | 43.37 | 61.43 | 73.48 | 38.90 | 78.25 | 70.60 | 53.08 | 59.87 | 3089 |
| CoSENT | nghuyong/ernie-3.0-base-zh | [shibing624/text2vec-base-chinese-paraphrase](https://huggingface.co/shibing624/text2vec-base-chinese-paraphrase) | 44.89 | 63.58 | 74.24 | 40.90 | 78.93 | 76.70 | 63.30 | 63.08 | 3066 |
| CoSENT | sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 | [shibing624/text2vec-base-multilingual](https://huggingface.co/shibing624/text2vec-base-multilingual) | 32.39 | 50.33 | 65.64 | 32.56 | 74.45 | 68.88 | 51.17 | 53.67 | 4004 |

Notes:
- Evaluation metric: Spearman correlation.
- The `shibing624/text2vec-base-chinese` model was trained with the CoSENT method on the Chinese STS-B data, based on `hfl/chinese-macbert-base`, and achieves good results on the Chinese STS-B test set. The model can be reproduced by running [examples/training_sup_text_matching_model.py](https://github.com/shibing624/text2vec/blob/master/examples/training_sup_text_matching_model.py); the model files have been uploaded to the HF model hub. It is recommended for general-purpose Chinese semantic matching tasks.
- The `shibing624/text2vec-base-chinese-sentence` model was trained with the CoSENT method on the manually curated Chinese STS dataset [shibing624/nli-zh-all/text2vec-base-chinese-sentence-dataset](https://huggingface.co/datasets/shibing624/nli-zh-all/tree/main/text2vec-base-chinese-sentence-dataset), based on `nghuyong/ernie-3.0-base-zh`, and achieves good results across Chinese NLI test sets. The model can be reproduced by running [examples/training_sup_text_matching_model_jsonl_data.py](https://github.com/shibing624/text2vec/blob/master/examples/training_sup_text_matching_model_jsonl_data.py); the model files have been uploaded to the HF model hub. It is recommended for Chinese s2s (sentence vs. sentence) semantic matching tasks.
- The `shibing624/text2vec-base-chinese-paraphrase` model was trained with the CoSENT method on the manually curated Chinese STS dataset [shibing624/nli-zh-all/text2vec-base-chinese-paraphrase-dataset](https://huggingface.co/datasets/shibing624/nli-zh-all/tree/main/text2vec-base-chinese-paraphrase-dataset), which, relative to [shibing624/nli-zh-all/text2vec-base-chinese-sentence-dataset](https://huggingface.co/datasets/shibing624/nli-zh-all/tree/main/text2vec-base-chinese-sentence-dataset), adds s2p (sentence-to-paraphrase) data that strengthens its long-text representation ability. Based on `nghuyong/ernie-3.0-base-zh`, it reaches SOTA across Chinese NLI test sets. The model can be reproduced by running [examples/training_sup_text_matching_model_jsonl_data.py](https://github.com/shibing624/text2vec/blob/master/examples/training_sup_text_matching_model_jsonl_data.py); the model files have been uploaded to the HF model hub. It is recommended for Chinese s2p (sentence vs. paragraph) semantic matching tasks.
- The `sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2` model was trained with SBERT; it is the multilingual version of `paraphrase-MiniLM-L12-v2` and supports Chinese, English, and other languages.
- `w2v-light-tencent-chinese` is a Word2Vec model of the Tencent word vectors that loads on CPU; it is suited to literal Chinese text matching and cold-start scenarios with little data.

## Usage (text2vec)

Using this model becomes easy when you have [text2vec](https://github.com/shibing624/text2vec) installed:

```
pip install -U text2vec
```

Then you can use the model like this:

```python
from text2vec import SentenceModel
sentences = ['如何更换花呗绑定银行卡', '花呗更改绑定银行卡']

model = SentenceModel('shibing624/text2vec-base-chinese')
embeddings = model.encode(sentences)
print(embeddings)
```

## Usage (HuggingFace Transformers)

Without [text2vec](https://github.com/shibing624/text2vec), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.

Install transformers:

```
pip install transformers
```

Then load the model and predict:

```python
from transformers import BertTokenizer, BertModel
import torch

# Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Load model from HuggingFace Hub
tokenizer = BertTokenizer.from_pretrained('shibing624/text2vec-base-chinese')
model = BertModel.from_pretrained('shibing624/text2vec-base-chinese')
sentences = ['如何更换花呗绑定银行卡', '花呗更改绑定银行卡']

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```

## Usage (sentence-transformers)

[sentence-transformers](https://github.com/UKPLab/sentence-transformers) is a popular library to compute dense vector representations for sentences.
Install sentence-transformers:

```
pip install -U sentence-transformers
```

Then load the model and predict:

```python
from sentence_transformers import SentenceTransformer

m = SentenceTransformer("shibing624/text2vec-base-chinese")
sentences = ['如何更换花呗绑定银行卡', '花呗更改绑定银行卡']

sentence_embeddings = m.encode(sentences)
print("Sentence embeddings:")
print(sentence_embeddings)
```

## Full Model Architecture

```
CoSENT(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_mean_tokens': True})
)
```

## Intended uses

Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks (a short similarity sketch appears after the citation below).

By default, input text longer than 256 word pieces is truncated.

## Training procedure

### Pre-training

We use the pretrained [`hfl/chinese-macbert-base`](https://huggingface.co/hfl/chinese-macbert-base) model. Please refer to the model card for more detailed information about the pre-training procedure.

### Fine-tuning

We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity of each possible sentence pair in the batch. We then apply the rank loss by comparing with true pairs and false pairs.

#### Hyperparameters
- training dataset: https://huggingface.co/datasets/shibing624/nli_zh
- max_seq_length: 128
- best epoch: 5
- sentence embedding dim: 768

## Citing & Authors

This model was trained by [text2vec](https://github.com/shibing624/text2vec).

If you find this model helpful, feel free to cite:

```bibtex
@software{text2vec,
  author = {Xu Ming},
  title = {text2vec: A Tool for Text to Vector},
  year = {2022},
  url = {https://github.com/shibing624/text2vec},
}
```
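Since the card recommends these embeddings for sentence-similarity tasks, here is a short illustrative sketch (an editorial addition, not from the original card) that scores the card's two example sentences with `sentence_transformers.util.cos_sim`:

```python
# Sketch: cosine similarity between the card's two example sentences.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("shibing624/text2vec-base-chinese")
emb = model.encode(['如何更换花呗绑定银行卡', '花呗更改绑定银行卡'], convert_to_tensor=True)

score = util.cos_sim(emb[0], emb[1])  # returns a 1x1 tensor of cosine similarity
print(f"cosine similarity: {score.item():.4f}")
```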
tsmatz/xlm-roberta-ner-japanese
tsmatz
"2024-09-28T19:41:39Z"
1,205,627
19
transformers
[ "transformers", "pytorch", "safetensors", "xlm-roberta", "token-classification", "generated_from_trainer", "ner", "bert", "ja", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2022-10-24T02:08:37Z"
---
language:
- ja
license: mit
tags:
- generated_from_trainer
- ner
- bert
metrics:
- f1
widget:
- text: 鈴井は4月の陽気の良い日に、鈴をつけて北海道のトムラウシへと登った
- text: 中国では、中国共産党による一党統治が続く
base_model: xlm-roberta-base
model-index:
- name: xlm-roberta-ner-ja
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# xlm-roberta-ner-japanese

(Japanese caption: 日本語の固有表現抽出のモデル)

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) (a pre-trained cross-lingual `RobertaModel`) trained for named-entity recognition (NER) token classification.

The model is fine-tuned on an NER dataset provided by Stockmark Inc., in which the data was collected from Japanese Wikipedia articles.<br>
See [here](https://github.com/stockmarkteam/ner-wikipedia-dataset) for the license of this dataset.

Each token is labeled as follows:

| Label id | Tag | Tag in Widget | Description |
|---|---|---|---|
| 0 | O | (None) | others or nothing |
| 1 | PER | PER | person |
| 2 | ORG | ORG | general corporation organization |
| 3 | ORG-P | P | political organization |
| 4 | ORG-O | O | other organization |
| 5 | LOC | LOC | location |
| 6 | INS | INS | institution, facility |
| 7 | PRD | PRD | product |
| 8 | EVT | EVT | event |

## Intended uses

```python
from transformers import pipeline

model_name = "tsmatz/xlm-roberta-ner-japanese"
classifier = pipeline("token-classification", model=model_name)
result = classifier("鈴井は4月の陽気の良い日に、鈴をつけて北海道のトムラウシへと登った")
print(result)
```

A variant that groups sub-word tokens into whole entity spans is sketched at the end of this card.

## Training procedure

You can download the source code for fine-tuning from [here](https://github.com/tsmatz/huggingface-finetune-japanese/blob/master/01-named-entity.ipynb).

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 |
|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 446 | 0.1510 | 0.8457 |
| No log | 2.0 | 892 | 0.0626 | 0.9261 |
| No log | 3.0 | 1338 | 0.0366 | 0.9580 |
| No log | 4.0 | 1784 | 0.0196 | 0.9792 |
| No log | 5.0 | 2230 | 0.0173 | 0.9864 |

### Framework versions

- Transformers 4.23.1
- Pytorch 1.12.1+cu102
- Datasets 2.6.1
- Tokenizers 0.13.1
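To merge sub-word predictions into whole entity spans, the standard `aggregation_strategy` option of the transformers token-classification pipeline can be used. The sketch below is an editorial addition, not part of the original card; `aggregation_strategy="simple"` is a generic pipeline feature, not something specific to this model:

```python
# Sketch: grouping sub-word predictions into entity spans via the
# transformers token-classification pipeline.
from transformers import pipeline

classifier = pipeline(
    "token-classification",
    model="tsmatz/xlm-roberta-ner-japanese",
    aggregation_strategy="simple",
)
for entity in classifier("鈴井は4月の陽気の良い日に、鈴をつけて北海道のトムラウシへと登った"):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```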
google/t5-v1_1-xxl
google
"2023-01-24T16:52:41Z"
1,203,813
77
transformers
[ "transformers", "pytorch", "tf", "t5", "text2text-generation", "en", "dataset:c4", "arxiv:2002.05202", "arxiv:1910.10683", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
"2022-03-02T23:29:05Z"
---
language: en
datasets:
- c4
license: apache-2.0
---

[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) Version 1.1

## Version 1.1

[T5 Version 1.1](https://github.com/google-research/text-to-text-transfer-transformer/blob/master/released_checkpoints.md#t511) includes the following improvements compared to the original T5 model:

- GEGLU activation in the feed-forward hidden layer, rather than ReLU - see [here](https://arxiv.org/abs/2002.05202).
- Dropout was turned off in pre-training (quality win). Dropout should be re-enabled during fine-tuning.
- Pre-trained on C4 only, without mixing in the downstream tasks.
- No parameter sharing between the embedding and classifier layer.
- "xl" and "xxl" replace "3B" and "11B". The model shapes are a bit different - larger `d_model` and smaller `num_heads` and `d_ff`.

**Note**: T5 Version 1.1 was only pre-trained on C4, excluding any supervised training. Therefore, this model has to be fine-tuned before it is usable on a downstream task (see the loading sketch at the end of this card).

Pretraining Dataset: [C4](https://huggingface.co/datasets/c4)

Other Community Checkpoints: [here](https://huggingface.co/models?search=t5-v1_1)

Paper: [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf)

Authors: *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*

## Abstract

Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.

![model image](https://camo.githubusercontent.com/623b4dea0b653f2ad3f36c71ebfe749a677ac0a1/68747470733a2f2f6d69726f2e6d656469756d2e636f6d2f6d61782f343030362f312a44304a31674e51663876727255704b657944387750412e706e67)
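Because this checkpoint must be fine-tuned before use, a quick-start only makes sense as a loading and fine-tuning skeleton. The sketch below is an editorial addition using the standard `transformers` API; note that the xxl checkpoint is an 11B-parameter model and requires correspondingly large memory:

```python
# Minimal loading sketch (not from the original card). t5-v1_1 checkpoints are
# pretrained on C4 only and must be fine-tuned before downstream use.
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/t5-v1_1-xxl")
model = T5ForConditionalGeneration.from_pretrained("google/t5-v1_1-xxl")

# One supervised step in the text-to-text format (illustrative data only):
inputs = tokenizer("summarize: The quick brown fox jumped over the lazy dog.",
                   return_tensors="pt")
labels = tokenizer("A fox jumped.", return_tensors="pt").input_ids
loss = model(**inputs, labels=labels).loss  # cross-entropy over target tokens
print(float(loss))
```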
MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.2-GGUF
MaziyarPanahi
"2024-04-25T17:39:58Z"
1,177,069
1
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "text-generation", "llama", "llama-3", "base_model:MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.2", "base_model:quantized:MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.2", "region:us", "conversational" ]
text-generation
"2024-04-24T11:47:43Z"
---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
- llama
- llama-3
model_name: Llama-3-8B-Instruct-DPO-v0.2-GGUF
base_model: MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.2
inference: false
model_creator: MaziyarPanahi
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---

# [MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.2-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.2-GGUF)
- Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi)
- Original model: [MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.2](https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.2)

## Description

[MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.2-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.2-GGUF) contains GGUF format model files for [MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.2](https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.2).

## Prompt Template

This model uses the `ChatML` prompt template (a short llama-cpp-python sketch using this template appears at the end of this card):

```
<|im_start|>system
{System}
<|im_end|>
<|im_start|>user
{User}
<|im_end|>
<|im_start|>assistant
{Assistant}
```

### About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

Here is an incomplete list of clients and libraries that are known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.

## Special thanks

🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
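Beyond the clients listed above, the quantized files can also be driven from Python with llama-cpp-python, which ships a built-in `chatml` chat format matching this model's prompt template. The sketch below is an editorial addition; the filename is hypothetical and depends on which quantization you download:

```python
# Sketch (assumptions: llama-cpp-python is installed and a GGUF file from this
# repo has been downloaded locally; the filename below is illustrative only).
from llama_cpp import Llama

llm = Llama(
    model_path="./Llama-3-8B-Instruct-DPO-v0.2.Q4_K_M.gguf",  # hypothetical filename
    chat_format="chatml",  # matches the ChatML prompt template above
)
out = llm.create_chat_completion(messages=[
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
])
print(out["choices"][0]["message"]["content"])
```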
MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.1-GGUF
MaziyarPanahi
"2024-04-25T17:40:11Z"
1,177,033
1
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "text-generation", "llama", "llama-3", "base_model:MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.1", "base_model:quantized:MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.1", "region:us", "conversational" ]
text-generation
"2024-04-24T11:10:23Z"
---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
- llama
- llama-3
model_name: Llama-3-8B-Instruct-DPO-v0.1-GGUF
base_model: MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.1
inference: false
model_creator: MaziyarPanahi
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---

# [MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.1-GGUF)
- Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi)
- Original model: [MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.1](https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.1)

## Description

[MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.1-GGUF) contains GGUF format model files for [MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.1](https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.1).

## Prompt Template

This model uses the `ChatML` prompt template:

```
<|im_start|>system
{System}
<|im_end|>
<|im_start|>user
{User}
<|im_end|>
<|im_start|>assistant
{Assistant}
```

### About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

Here is an incomplete list of clients and libraries that are known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.

## Special thanks

🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
facebook/roberta-hate-speech-dynabench-r4-target
facebook
"2023-03-16T20:03:57Z"
1,173,105
64
transformers
[ "transformers", "pytorch", "safetensors", "roberta", "text-classification", "en", "arxiv:2012.15761", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2022-06-10T22:24:39Z"
---
language: en
---

# LFTW R4 Target

The R4 Target model from [Learning from the Worst: Dynamically Generated Datasets to Improve Online Hate Detection](https://arxiv.org/abs/2012.15761)

## Citation Information

```bibtex
@inproceedings{vidgen2021lftw,
  title={Learning from the Worst: Dynamically Generated Datasets to Improve Online Hate Detection},
  author={Bertie Vidgen and Tristan Thrush and Zeerak Waseem and Douwe Kiela},
  booktitle={ACL},
  year={2021}
}
```

Thanks to Kushal Tirumala and Adina Williams for helping the authors put the model on the hub!
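The card does not include a usage snippet; as an illustrative addition (not part of the original card), the model can be run through the standard transformers text-classification pipeline, with label names taken from the model's config:

```python
# Minimal usage sketch: binary hate-speech classification.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="facebook/roberta-hate-speech-dynabench-r4-target",
)
print(classifier("I love everyone on this forum."))  # prints label and score
```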
Helsinki-NLP/opus-mt-zh-en
Helsinki-NLP
"2023-08-16T12:09:10Z"
1,165,524
447
transformers
[ "transformers", "pytorch", "tf", "rust", "marian", "text2text-generation", "translation", "zh", "en", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
"2022-03-02T23:29:04Z"
---
language:
- zh
- en
tags:
- translation
license: cc-by-4.0
---

### zho-eng

## Table of Contents
- [Model Details](#model-details)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation)
- [Citation Information](#citation-information)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)

## Model Details
- **Model Description:**
- **Developed by:** Language Technology Research Group at the University of Helsinki
- **Model Type:** Translation
- **Language(s):**
  - Source Language: Chinese
  - Target Language: English
- **License:** CC-BY-4.0
- **Resources for more information:**
  - [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)

## Uses

#### Direct Use

This model can be used for translation and text-to-text generation.

## Risks, Limitations and Biases

**CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**

Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).

Further details about the dataset for this model can be found in the OPUS readme: [zho-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-eng/README.md)

## Training

#### System Information
* helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port_machine: brutasse
* port_time: 2020-08-21-14:41
* src_multilingual: False
* tgt_multilingual: False

#### Training Data

##### Preprocessing
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* ref_len: 82826.0
* dataset: [opus](https://github.com/Helsinki-NLP/Opus-MT)
* download original weights: [opus-2020-07-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-eng/opus-2020-07-17.zip)
* test set translations: [opus-2020-07-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-eng/opus-2020-07-17.test.txt)

## Evaluation

#### Results
* test set scores: [opus-2020-07-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-eng/opus-2020-07-17.eval.txt)
* brevity_penalty: 0.948

## Benchmarks

| testset | BLEU | chr-F |
|---|---|---|
| Tatoeba-test.zho.eng | 36.1 | 0.548 |

## Citation Information

```bibtex
@InProceedings{TiedemannThottingal:EAMT2020,
  author = {J{\"o}rg Tiedemann and Santhosh Thottingal},
  title = {{OPUS-MT} — {B}uilding open translation services for the {W}orld},
  booktitle = {Proceedings of the 22nd Annual Conference of the European Association for Machine Translation (EAMT)},
  year = {2020},
  address = {Lisbon, Portugal}
}
```

## How to Get Started With the Model

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-zh-en")
model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-zh-en")
```
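Continuing the loading snippet above, here is a short end-to-end translation example (an editorial addition using the standard `generate` API; the input sentence is illustrative):

```python
# Sketch: translating one Chinese sentence to English with this model.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-zh-en")
model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-zh-en")

inputs = tokenizer("我喜欢学习外语。", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```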
stabilityai/sdxl-turbo
stabilityai
"2024-07-10T11:33:43Z"
1,156,241
2,273
diffusers
[ "diffusers", "onnx", "safetensors", "text-to-image", "license:other", "autotrain_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
"2023-11-27T15:19:11Z"
---
pipeline_tag: text-to-image
inference: false
license: other
license_name: sai-nc-community
license_link: https://huggingface.co/stabilityai/sdxl-turbo/blob/main/LICENSE.md
---

# SDXL-Turbo Model Card

<!-- Provide a quick summary of what the model is/does. -->

![row01](output_tile.jpg)

SDXL-Turbo is a fast generative text-to-image model that can synthesize photorealistic images from a text prompt in a single network evaluation. A real-time demo is available here: http://clipdrop.co/stable-diffusion-turbo

Please note: For commercial use, please refer to https://stability.ai/license.

## Model Details

### Model Description

SDXL-Turbo is a distilled version of [SDXL 1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0), trained for real-time synthesis. SDXL-Turbo is based on a novel training method called Adversarial Diffusion Distillation (ADD) (see the [technical report](https://stability.ai/research/adversarial-diffusion-distillation)), which allows sampling large-scale foundational image diffusion models in 1 to 4 steps at high image quality. This approach uses score distillation to leverage large-scale off-the-shelf image diffusion models as a teacher signal and combines this with an adversarial loss to ensure high image fidelity even in the low-step regime of one or two sampling steps.

- **Developed by:** Stability AI
- **Funded by:** Stability AI
- **Model type:** Generative text-to-image model
- **Finetuned from model:** [SDXL 1.0 Base](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0)

### Model Sources

For research purposes, we recommend our `generative-models` GitHub repository (https://github.com/Stability-AI/generative-models), which implements the most popular diffusion frameworks (both training and inference).

- **Repository:** https://github.com/Stability-AI/generative-models
- **Paper:** https://stability.ai/research/adversarial-diffusion-distillation
- **Demo:** http://clipdrop.co/stable-diffusion-turbo

## Evaluation

![comparison1](image_quality_one_step.png)
![comparison2](prompt_alignment_one_step.png)

The charts above evaluate user preference for SDXL-Turbo over other single- and multi-step models. SDXL-Turbo evaluated at a single step is preferred by human voters in terms of image quality and prompt following over LCM-XL evaluated at four (or fewer) steps. In addition, we see that using four steps for SDXL-Turbo further improves performance. For details on the user study, we refer to the [research paper](https://stability.ai/research/adversarial-diffusion-distillation).

## Uses

### Direct Use

The model is intended for both non-commercial and commercial usage. You can use this model for non-commercial or research purposes under this [license](https://huggingface.co/stabilityai/sdxl-turbo/blob/main/LICENSE.md). Possible research areas and tasks include

- Research on generative models.
- Research on real-time applications of generative models.
- Research on the impact of real-time generative models.
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.

For commercial use, please refer to https://stability.ai/membership.

Excluded uses are described below.
### Diffusers

```
pip install diffusers transformers accelerate --upgrade
```

- **Text-to-image**: SDXL-Turbo does not make use of `guidance_scale` or `negative_prompt`, so we disable them with `guidance_scale=0.0`. The model preferably generates images of size 512x512, but higher image sizes work as well. A **single step** is enough to generate high-quality images.

```py
from diffusers import AutoPipelineForText2Image
import torch

pipe = AutoPipelineForText2Image.from_pretrained("stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16")
pipe.to("cuda")

prompt = "A cinematic shot of a baby racoon wearing an intricate italian priest robe."

image = pipe(prompt=prompt, num_inference_steps=1, guidance_scale=0.0).images[0]
```

- **Image-to-image**: When using SDXL-Turbo for image-to-image generation, make sure that `num_inference_steps * strength` is larger than or equal to 1. The image-to-image pipeline will run for `int(num_inference_steps * strength)` steps, *e.g.* 0.5 * 2.0 = 1 step in our example below. (A small validation helper is sketched at the end of this card.)

```py
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image
import torch

pipe = AutoPipelineForImage2Image.from_pretrained("stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16")
pipe.to("cuda")

init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png").resize((512, 512))

prompt = "cat wizard, gandalf, lord of the rings, detailed, fantasy, cute, adorable, Pixar, Disney, 8k"

image = pipe(prompt, image=init_image, num_inference_steps=2, strength=0.5, guidance_scale=0.0).images[0]
```

### Out-of-Scope Use

The model was not trained to produce factual or true representations of people or events, so using the model to generate such content is out of scope for its abilities.
The model should not be used in any way that violates Stability AI's [Acceptable Use Policy](https://stability.ai/use-policy).

## Limitations and Bias

### Limitations

- The generated images are of a fixed resolution (512x512 px), and the model does not achieve perfect photorealism.
- The model cannot render legible text.
- Faces and people in general may not be generated properly.
- The autoencoding part of the model is lossy.

### Recommendations

The model is intended for both non-commercial and commercial usage.

## How to Get Started with the Model

Check out https://github.com/Stability-AI/generative-models
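As referenced in the image-to-image note above, here is a minimal sketch of a step-count validation helper. The function `effective_img2img_steps` is an assumption for illustration and is not part of the diffusers API; it simply mirrors the `int(num_inference_steps * strength)` rule.

```py
def effective_img2img_steps(num_inference_steps: int, strength: float) -> int:
    """Number of denoising steps the image-to-image pipeline will actually run."""
    steps = int(num_inference_steps * strength)
    if steps < 1:
        raise ValueError(
            f"num_inference_steps * strength = {num_inference_steps * strength:.2f} < 1; "
            "increase one of the two so that at least one step runs."
        )
    return steps

print(effective_img2img_steps(2, 0.5))  # 1, matching the example above
```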
WhereIsAI/UAE-Large-V1
WhereIsAI
"2024-07-28T05:49:12Z"
1,143,826
211
sentence-transformers
[ "sentence-transformers", "onnx", "safetensors", "bert", "feature-extraction", "mteb", "sentence_embedding", "feature_extraction", "transformers", "transformers.js", "en", "arxiv:2309.12871", "license:mit", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
"2023-12-04T02:03:27Z"
--- tags: - mteb - sentence_embedding - feature_extraction - sentence-transformers - transformers - transformers.js model-index: - name: UAE-Large-V1 results: - task: type: Classification dataset: type: mteb/amazon_counterfactual name: MTEB AmazonCounterfactualClassification (en) config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 75.55223880597015 - type: ap value: 38.264070815317794 - type: f1 value: 69.40977934769845 - task: type: Classification dataset: type: mteb/amazon_polarity name: MTEB AmazonPolarityClassification config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 92.84267499999999 - type: ap value: 89.57568507997713 - type: f1 value: 92.82590734337774 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (en) config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 48.292 - type: f1 value: 47.90257816032778 - task: type: Retrieval dataset: type: arguana name: MTEB ArguAna config: default split: test revision: None metrics: - type: map_at_1 value: 42.105 - type: map_at_10 value: 58.181000000000004 - type: map_at_100 value: 58.653999999999996 - type: map_at_1000 value: 58.657000000000004 - type: map_at_3 value: 54.386 - type: map_at_5 value: 56.757999999999996 - type: mrr_at_1 value: 42.745 - type: mrr_at_10 value: 58.437 - type: mrr_at_100 value: 58.894999999999996 - type: mrr_at_1000 value: 58.897999999999996 - type: mrr_at_3 value: 54.635 - type: mrr_at_5 value: 56.99999999999999 - type: ndcg_at_1 value: 42.105 - type: ndcg_at_10 value: 66.14999999999999 - type: ndcg_at_100 value: 68.048 - type: ndcg_at_1000 value: 68.11399999999999 - type: ndcg_at_3 value: 58.477000000000004 - type: ndcg_at_5 value: 62.768 - type: precision_at_1 value: 42.105 - type: precision_at_10 value: 9.110999999999999 - type: precision_at_100 value: 0.991 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 23.447000000000003 - type: precision_at_5 value: 16.159000000000002 - type: recall_at_1 value: 42.105 - type: recall_at_10 value: 91.11 - type: recall_at_100 value: 99.14699999999999 - type: recall_at_1000 value: 99.644 - type: recall_at_3 value: 70.341 - type: recall_at_5 value: 80.797 - task: type: Clustering dataset: type: mteb/arxiv-clustering-p2p name: MTEB ArxivClusteringP2P config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 49.02580759154173 - task: type: Clustering dataset: type: mteb/arxiv-clustering-s2s name: MTEB ArxivClusteringS2S config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 43.093601280163554 - task: type: Reranking dataset: type: mteb/askubuntudupquestions-reranking name: MTEB AskUbuntuDupQuestions config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 64.19590406875427 - type: mrr value: 77.09547992788991 - task: type: STS dataset: type: mteb/biosses-sts name: MTEB BIOSSES config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 87.86678362843676 - type: cos_sim_spearman value: 86.1423242570783 - type: euclidean_pearson value: 85.98994198511751 - type: euclidean_spearman value: 86.48209103503942 - type: manhattan_pearson value: 85.6446436316182 - type: manhattan_spearman value: 86.21039809734357 - task: type: 
Classification dataset: type: mteb/banking77 name: MTEB Banking77Classification config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 87.69155844155844 - type: f1 value: 87.68109381943547 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-p2p name: MTEB BiorxivClusteringP2P config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 39.37501687500394 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-s2s name: MTEB BiorxivClusteringS2S config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 37.23401405155885 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackAndroidRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 30.232 - type: map_at_10 value: 41.404999999999994 - type: map_at_100 value: 42.896 - type: map_at_1000 value: 43.028 - type: map_at_3 value: 37.925 - type: map_at_5 value: 39.865 - type: mrr_at_1 value: 36.338 - type: mrr_at_10 value: 46.969 - type: mrr_at_100 value: 47.684 - type: mrr_at_1000 value: 47.731 - type: mrr_at_3 value: 44.063 - type: mrr_at_5 value: 45.908 - type: ndcg_at_1 value: 36.338 - type: ndcg_at_10 value: 47.887 - type: ndcg_at_100 value: 53.357 - type: ndcg_at_1000 value: 55.376999999999995 - type: ndcg_at_3 value: 42.588 - type: ndcg_at_5 value: 45.132 - type: precision_at_1 value: 36.338 - type: precision_at_10 value: 9.17 - type: precision_at_100 value: 1.4909999999999999 - type: precision_at_1000 value: 0.196 - type: precision_at_3 value: 20.315 - type: precision_at_5 value: 14.793000000000001 - type: recall_at_1 value: 30.232 - type: recall_at_10 value: 60.67399999999999 - type: recall_at_100 value: 83.628 - type: recall_at_1000 value: 96.209 - type: recall_at_3 value: 45.48 - type: recall_at_5 value: 52.354 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackEnglishRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 32.237 - type: map_at_10 value: 42.829 - type: map_at_100 value: 44.065 - type: map_at_1000 value: 44.199 - type: map_at_3 value: 39.885999999999996 - type: map_at_5 value: 41.55 - type: mrr_at_1 value: 40.064 - type: mrr_at_10 value: 48.611 - type: mrr_at_100 value: 49.245 - type: mrr_at_1000 value: 49.29 - type: mrr_at_3 value: 46.561 - type: mrr_at_5 value: 47.771 - type: ndcg_at_1 value: 40.064 - type: ndcg_at_10 value: 48.388 - type: ndcg_at_100 value: 52.666999999999994 - type: ndcg_at_1000 value: 54.67100000000001 - type: ndcg_at_3 value: 44.504 - type: ndcg_at_5 value: 46.303 - type: precision_at_1 value: 40.064 - type: precision_at_10 value: 9.051 - type: precision_at_100 value: 1.4500000000000002 - type: precision_at_1000 value: 0.193 - type: precision_at_3 value: 21.444 - type: precision_at_5 value: 15.045 - type: recall_at_1 value: 32.237 - type: recall_at_10 value: 57.943999999999996 - type: recall_at_100 value: 75.98700000000001 - type: recall_at_1000 value: 88.453 - type: recall_at_3 value: 46.268 - type: recall_at_5 value: 51.459999999999994 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGamingRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 38.797 - type: map_at_10 value: 51.263000000000005 - type: map_at_100 value: 52.333 - type: map_at_1000 value: 52.393 - type: map_at_3 value: 47.936 - type: map_at_5 value: 49.844 - type: mrr_at_1 value: 
44.389 - type: mrr_at_10 value: 54.601 - type: mrr_at_100 value: 55.300000000000004 - type: mrr_at_1000 value: 55.333 - type: mrr_at_3 value: 52.068999999999996 - type: mrr_at_5 value: 53.627 - type: ndcg_at_1 value: 44.389 - type: ndcg_at_10 value: 57.193000000000005 - type: ndcg_at_100 value: 61.307 - type: ndcg_at_1000 value: 62.529 - type: ndcg_at_3 value: 51.607 - type: ndcg_at_5 value: 54.409 - type: precision_at_1 value: 44.389 - type: precision_at_10 value: 9.26 - type: precision_at_100 value: 1.222 - type: precision_at_1000 value: 0.13699999999999998 - type: precision_at_3 value: 23.03 - type: precision_at_5 value: 15.887 - type: recall_at_1 value: 38.797 - type: recall_at_10 value: 71.449 - type: recall_at_100 value: 88.881 - type: recall_at_1000 value: 97.52 - type: recall_at_3 value: 56.503 - type: recall_at_5 value: 63.392 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGisRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 27.291999999999998 - type: map_at_10 value: 35.65 - type: map_at_100 value: 36.689 - type: map_at_1000 value: 36.753 - type: map_at_3 value: 32.995000000000005 - type: map_at_5 value: 34.409 - type: mrr_at_1 value: 29.04 - type: mrr_at_10 value: 37.486000000000004 - type: mrr_at_100 value: 38.394 - type: mrr_at_1000 value: 38.445 - type: mrr_at_3 value: 35.028 - type: mrr_at_5 value: 36.305 - type: ndcg_at_1 value: 29.04 - type: ndcg_at_10 value: 40.613 - type: ndcg_at_100 value: 45.733000000000004 - type: ndcg_at_1000 value: 47.447 - type: ndcg_at_3 value: 35.339999999999996 - type: ndcg_at_5 value: 37.706 - type: precision_at_1 value: 29.04 - type: precision_at_10 value: 6.192 - type: precision_at_100 value: 0.9249999999999999 - type: precision_at_1000 value: 0.11 - type: precision_at_3 value: 14.802000000000001 - type: precision_at_5 value: 10.305 - type: recall_at_1 value: 27.291999999999998 - type: recall_at_10 value: 54.25299999999999 - type: recall_at_100 value: 77.773 - type: recall_at_1000 value: 90.795 - type: recall_at_3 value: 39.731 - type: recall_at_5 value: 45.403999999999996 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackMathematicaRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 18.326 - type: map_at_10 value: 26.290999999999997 - type: map_at_100 value: 27.456999999999997 - type: map_at_1000 value: 27.583000000000002 - type: map_at_3 value: 23.578 - type: map_at_5 value: 25.113000000000003 - type: mrr_at_1 value: 22.637 - type: mrr_at_10 value: 31.139 - type: mrr_at_100 value: 32.074999999999996 - type: mrr_at_1000 value: 32.147 - type: mrr_at_3 value: 28.483000000000004 - type: mrr_at_5 value: 29.963 - type: ndcg_at_1 value: 22.637 - type: ndcg_at_10 value: 31.717000000000002 - type: ndcg_at_100 value: 37.201 - type: ndcg_at_1000 value: 40.088 - type: ndcg_at_3 value: 26.686 - type: ndcg_at_5 value: 29.076999999999998 - type: precision_at_1 value: 22.637 - type: precision_at_10 value: 5.7090000000000005 - type: precision_at_100 value: 0.979 - type: precision_at_1000 value: 0.13799999999999998 - type: precision_at_3 value: 12.894 - type: precision_at_5 value: 9.328 - type: recall_at_1 value: 18.326 - type: recall_at_10 value: 43.824999999999996 - type: recall_at_100 value: 67.316 - type: recall_at_1000 value: 87.481 - type: recall_at_3 value: 29.866999999999997 - type: recall_at_5 value: 35.961999999999996 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackPhysicsRetrieval config: 
default split: test revision: None metrics: - type: map_at_1 value: 29.875 - type: map_at_10 value: 40.458 - type: map_at_100 value: 41.772 - type: map_at_1000 value: 41.882999999999996 - type: map_at_3 value: 37.086999999999996 - type: map_at_5 value: 39.153 - type: mrr_at_1 value: 36.381 - type: mrr_at_10 value: 46.190999999999995 - type: mrr_at_100 value: 46.983999999999995 - type: mrr_at_1000 value: 47.032000000000004 - type: mrr_at_3 value: 43.486999999999995 - type: mrr_at_5 value: 45.249 - type: ndcg_at_1 value: 36.381 - type: ndcg_at_10 value: 46.602 - type: ndcg_at_100 value: 51.885999999999996 - type: ndcg_at_1000 value: 53.895 - type: ndcg_at_3 value: 41.155 - type: ndcg_at_5 value: 44.182 - type: precision_at_1 value: 36.381 - type: precision_at_10 value: 8.402 - type: precision_at_100 value: 1.278 - type: precision_at_1000 value: 0.16199999999999998 - type: precision_at_3 value: 19.346 - type: precision_at_5 value: 14.09 - type: recall_at_1 value: 29.875 - type: recall_at_10 value: 59.065999999999995 - type: recall_at_100 value: 80.923 - type: recall_at_1000 value: 93.927 - type: recall_at_3 value: 44.462 - type: recall_at_5 value: 51.89 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackProgrammersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 24.94 - type: map_at_10 value: 35.125 - type: map_at_100 value: 36.476 - type: map_at_1000 value: 36.579 - type: map_at_3 value: 31.840000000000003 - type: map_at_5 value: 33.647 - type: mrr_at_1 value: 30.936000000000003 - type: mrr_at_10 value: 40.637 - type: mrr_at_100 value: 41.471000000000004 - type: mrr_at_1000 value: 41.525 - type: mrr_at_3 value: 38.013999999999996 - type: mrr_at_5 value: 39.469 - type: ndcg_at_1 value: 30.936000000000003 - type: ndcg_at_10 value: 41.295 - type: ndcg_at_100 value: 46.92 - type: ndcg_at_1000 value: 49.183 - type: ndcg_at_3 value: 35.811 - type: ndcg_at_5 value: 38.306000000000004 - type: precision_at_1 value: 30.936000000000003 - type: precision_at_10 value: 7.728 - type: precision_at_100 value: 1.226 - type: precision_at_1000 value: 0.158 - type: precision_at_3 value: 17.237 - type: precision_at_5 value: 12.42 - type: recall_at_1 value: 24.94 - type: recall_at_10 value: 54.235 - type: recall_at_100 value: 78.314 - type: recall_at_1000 value: 93.973 - type: recall_at_3 value: 38.925 - type: recall_at_5 value: 45.505 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 26.250833333333333 - type: map_at_10 value: 35.46875 - type: map_at_100 value: 36.667 - type: map_at_1000 value: 36.78025 - type: map_at_3 value: 32.56733333333334 - type: map_at_5 value: 34.20333333333333 - type: mrr_at_1 value: 30.8945 - type: mrr_at_10 value: 39.636833333333335 - type: mrr_at_100 value: 40.46508333333333 - type: mrr_at_1000 value: 40.521249999999995 - type: mrr_at_3 value: 37.140166666666666 - type: mrr_at_5 value: 38.60999999999999 - type: ndcg_at_1 value: 30.8945 - type: ndcg_at_10 value: 40.93441666666667 - type: ndcg_at_100 value: 46.062416666666664 - type: ndcg_at_1000 value: 48.28341666666667 - type: ndcg_at_3 value: 35.97575 - type: ndcg_at_5 value: 38.3785 - type: precision_at_1 value: 30.8945 - type: precision_at_10 value: 7.180250000000001 - type: precision_at_100 value: 1.1468333333333334 - type: precision_at_1000 value: 0.15283333333333332 - type: precision_at_3 value: 16.525583333333334 - type: precision_at_5 value: 
11.798333333333332 - type: recall_at_1 value: 26.250833333333333 - type: recall_at_10 value: 52.96108333333333 - type: recall_at_100 value: 75.45908333333334 - type: recall_at_1000 value: 90.73924999999998 - type: recall_at_3 value: 39.25483333333333 - type: recall_at_5 value: 45.37950000000001 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackStatsRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 24.595 - type: map_at_10 value: 31.747999999999998 - type: map_at_100 value: 32.62 - type: map_at_1000 value: 32.713 - type: map_at_3 value: 29.48 - type: map_at_5 value: 30.635 - type: mrr_at_1 value: 27.607 - type: mrr_at_10 value: 34.449000000000005 - type: mrr_at_100 value: 35.182 - type: mrr_at_1000 value: 35.254000000000005 - type: mrr_at_3 value: 32.413 - type: mrr_at_5 value: 33.372 - type: ndcg_at_1 value: 27.607 - type: ndcg_at_10 value: 36.041000000000004 - type: ndcg_at_100 value: 40.514 - type: ndcg_at_1000 value: 42.851 - type: ndcg_at_3 value: 31.689 - type: ndcg_at_5 value: 33.479 - type: precision_at_1 value: 27.607 - type: precision_at_10 value: 5.66 - type: precision_at_100 value: 0.868 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 13.446 - type: precision_at_5 value: 9.264 - type: recall_at_1 value: 24.595 - type: recall_at_10 value: 46.79 - type: recall_at_100 value: 67.413 - type: recall_at_1000 value: 84.753 - type: recall_at_3 value: 34.644999999999996 - type: recall_at_5 value: 39.09 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackTexRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 17.333000000000002 - type: map_at_10 value: 24.427 - type: map_at_100 value: 25.576 - type: map_at_1000 value: 25.692999999999998 - type: map_at_3 value: 22.002 - type: map_at_5 value: 23.249 - type: mrr_at_1 value: 20.716 - type: mrr_at_10 value: 28.072000000000003 - type: mrr_at_100 value: 29.067 - type: mrr_at_1000 value: 29.137 - type: mrr_at_3 value: 25.832 - type: mrr_at_5 value: 27.045 - type: ndcg_at_1 value: 20.716 - type: ndcg_at_10 value: 29.109 - type: ndcg_at_100 value: 34.797 - type: ndcg_at_1000 value: 37.503 - type: ndcg_at_3 value: 24.668 - type: ndcg_at_5 value: 26.552999999999997 - type: precision_at_1 value: 20.716 - type: precision_at_10 value: 5.351 - type: precision_at_100 value: 0.955 - type: precision_at_1000 value: 0.136 - type: precision_at_3 value: 11.584999999999999 - type: precision_at_5 value: 8.362 - type: recall_at_1 value: 17.333000000000002 - type: recall_at_10 value: 39.604 - type: recall_at_100 value: 65.525 - type: recall_at_1000 value: 84.651 - type: recall_at_3 value: 27.199 - type: recall_at_5 value: 32.019 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackUnixRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 26.342 - type: map_at_10 value: 35.349000000000004 - type: map_at_100 value: 36.443 - type: map_at_1000 value: 36.548 - type: map_at_3 value: 32.307 - type: map_at_5 value: 34.164 - type: mrr_at_1 value: 31.063000000000002 - type: mrr_at_10 value: 39.703 - type: mrr_at_100 value: 40.555 - type: mrr_at_1000 value: 40.614 - type: mrr_at_3 value: 37.141999999999996 - type: mrr_at_5 value: 38.812000000000005 - type: ndcg_at_1 value: 31.063000000000002 - type: ndcg_at_10 value: 40.873 - type: ndcg_at_100 value: 45.896 - type: ndcg_at_1000 value: 48.205999999999996 - type: ndcg_at_3 value: 35.522 - type: ndcg_at_5 value: 38.419 
- type: precision_at_1 value: 31.063000000000002 - type: precision_at_10 value: 6.866 - type: precision_at_100 value: 1.053 - type: precision_at_1000 value: 0.13699999999999998 - type: precision_at_3 value: 16.014 - type: precision_at_5 value: 11.604000000000001 - type: recall_at_1 value: 26.342 - type: recall_at_10 value: 53.40200000000001 - type: recall_at_100 value: 75.251 - type: recall_at_1000 value: 91.13799999999999 - type: recall_at_3 value: 39.103 - type: recall_at_5 value: 46.357 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWebmastersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 23.71 - type: map_at_10 value: 32.153999999999996 - type: map_at_100 value: 33.821 - type: map_at_1000 value: 34.034 - type: map_at_3 value: 29.376 - type: map_at_5 value: 30.878 - type: mrr_at_1 value: 28.458 - type: mrr_at_10 value: 36.775999999999996 - type: mrr_at_100 value: 37.804 - type: mrr_at_1000 value: 37.858999999999995 - type: mrr_at_3 value: 34.123999999999995 - type: mrr_at_5 value: 35.596 - type: ndcg_at_1 value: 28.458 - type: ndcg_at_10 value: 37.858999999999995 - type: ndcg_at_100 value: 44.194 - type: ndcg_at_1000 value: 46.744 - type: ndcg_at_3 value: 33.348 - type: ndcg_at_5 value: 35.448 - type: precision_at_1 value: 28.458 - type: precision_at_10 value: 7.4510000000000005 - type: precision_at_100 value: 1.5 - type: precision_at_1000 value: 0.23700000000000002 - type: precision_at_3 value: 15.809999999999999 - type: precision_at_5 value: 11.462 - type: recall_at_1 value: 23.71 - type: recall_at_10 value: 48.272999999999996 - type: recall_at_100 value: 77.134 - type: recall_at_1000 value: 93.001 - type: recall_at_3 value: 35.480000000000004 - type: recall_at_5 value: 41.19 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWordpressRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 21.331 - type: map_at_10 value: 28.926000000000002 - type: map_at_100 value: 29.855999999999998 - type: map_at_1000 value: 29.957 - type: map_at_3 value: 26.395999999999997 - type: map_at_5 value: 27.933000000000003 - type: mrr_at_1 value: 23.105 - type: mrr_at_10 value: 31.008000000000003 - type: mrr_at_100 value: 31.819999999999997 - type: mrr_at_1000 value: 31.887999999999998 - type: mrr_at_3 value: 28.466 - type: mrr_at_5 value: 30.203000000000003 - type: ndcg_at_1 value: 23.105 - type: ndcg_at_10 value: 33.635999999999996 - type: ndcg_at_100 value: 38.277 - type: ndcg_at_1000 value: 40.907 - type: ndcg_at_3 value: 28.791 - type: ndcg_at_5 value: 31.528 - type: precision_at_1 value: 23.105 - type: precision_at_10 value: 5.323 - type: precision_at_100 value: 0.815 - type: precision_at_1000 value: 0.117 - type: precision_at_3 value: 12.384 - type: precision_at_5 value: 9.02 - type: recall_at_1 value: 21.331 - type: recall_at_10 value: 46.018 - type: recall_at_100 value: 67.364 - type: recall_at_1000 value: 86.97 - type: recall_at_3 value: 33.395 - type: recall_at_5 value: 39.931 - task: type: Retrieval dataset: type: climate-fever name: MTEB ClimateFEVER config: default split: test revision: None metrics: - type: map_at_1 value: 17.011000000000003 - type: map_at_10 value: 28.816999999999997 - type: map_at_100 value: 30.761 - type: map_at_1000 value: 30.958000000000002 - type: map_at_3 value: 24.044999999999998 - type: map_at_5 value: 26.557 - type: mrr_at_1 value: 38.696999999999996 - type: mrr_at_10 value: 50.464 - type: mrr_at_100 value: 51.193999999999996 - type: 
mrr_at_1000 value: 51.219 - type: mrr_at_3 value: 47.339999999999996 - type: mrr_at_5 value: 49.346000000000004 - type: ndcg_at_1 value: 38.696999999999996 - type: ndcg_at_10 value: 38.53 - type: ndcg_at_100 value: 45.525 - type: ndcg_at_1000 value: 48.685 - type: ndcg_at_3 value: 32.282 - type: ndcg_at_5 value: 34.482 - type: precision_at_1 value: 38.696999999999996 - type: precision_at_10 value: 11.895999999999999 - type: precision_at_100 value: 1.95 - type: precision_at_1000 value: 0.254 - type: precision_at_3 value: 24.038999999999998 - type: precision_at_5 value: 18.332 - type: recall_at_1 value: 17.011000000000003 - type: recall_at_10 value: 44.452999999999996 - type: recall_at_100 value: 68.223 - type: recall_at_1000 value: 85.653 - type: recall_at_3 value: 28.784 - type: recall_at_5 value: 35.66 - task: type: Retrieval dataset: type: dbpedia-entity name: MTEB DBPedia config: default split: test revision: None metrics: - type: map_at_1 value: 9.516 - type: map_at_10 value: 21.439 - type: map_at_100 value: 31.517 - type: map_at_1000 value: 33.267 - type: map_at_3 value: 15.004999999999999 - type: map_at_5 value: 17.793999999999997 - type: mrr_at_1 value: 71.25 - type: mrr_at_10 value: 79.071 - type: mrr_at_100 value: 79.325 - type: mrr_at_1000 value: 79.33 - type: mrr_at_3 value: 77.708 - type: mrr_at_5 value: 78.546 - type: ndcg_at_1 value: 58.62500000000001 - type: ndcg_at_10 value: 44.889 - type: ndcg_at_100 value: 50.536 - type: ndcg_at_1000 value: 57.724 - type: ndcg_at_3 value: 49.32 - type: ndcg_at_5 value: 46.775 - type: precision_at_1 value: 71.25 - type: precision_at_10 value: 36.175000000000004 - type: precision_at_100 value: 11.940000000000001 - type: precision_at_1000 value: 2.178 - type: precision_at_3 value: 53.583000000000006 - type: precision_at_5 value: 45.550000000000004 - type: recall_at_1 value: 9.516 - type: recall_at_10 value: 27.028000000000002 - type: recall_at_100 value: 57.581 - type: recall_at_1000 value: 80.623 - type: recall_at_3 value: 16.313 - type: recall_at_5 value: 20.674 - task: type: Classification dataset: type: mteb/emotion name: MTEB EmotionClassification config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 51.74999999999999 - type: f1 value: 46.46706502669774 - task: type: Retrieval dataset: type: fever name: MTEB FEVER config: default split: test revision: None metrics: - type: map_at_1 value: 77.266 - type: map_at_10 value: 84.89999999999999 - type: map_at_100 value: 85.109 - type: map_at_1000 value: 85.123 - type: map_at_3 value: 83.898 - type: map_at_5 value: 84.541 - type: mrr_at_1 value: 83.138 - type: mrr_at_10 value: 89.37 - type: mrr_at_100 value: 89.432 - type: mrr_at_1000 value: 89.43299999999999 - type: mrr_at_3 value: 88.836 - type: mrr_at_5 value: 89.21 - type: ndcg_at_1 value: 83.138 - type: ndcg_at_10 value: 88.244 - type: ndcg_at_100 value: 88.98700000000001 - type: ndcg_at_1000 value: 89.21900000000001 - type: ndcg_at_3 value: 86.825 - type: ndcg_at_5 value: 87.636 - type: precision_at_1 value: 83.138 - type: precision_at_10 value: 10.47 - type: precision_at_100 value: 1.1079999999999999 - type: precision_at_1000 value: 0.11499999999999999 - type: precision_at_3 value: 32.933 - type: precision_at_5 value: 20.36 - type: recall_at_1 value: 77.266 - type: recall_at_10 value: 94.063 - type: recall_at_100 value: 96.993 - type: recall_at_1000 value: 98.414 - type: recall_at_3 value: 90.228 - type: recall_at_5 value: 92.328 - task: type: Retrieval dataset: type: fiqa name: 
MTEB FiQA2018 config: default split: test revision: None metrics: - type: map_at_1 value: 22.319 - type: map_at_10 value: 36.943 - type: map_at_100 value: 38.951 - type: map_at_1000 value: 39.114 - type: map_at_3 value: 32.82 - type: map_at_5 value: 34.945 - type: mrr_at_1 value: 44.135999999999996 - type: mrr_at_10 value: 53.071999999999996 - type: mrr_at_100 value: 53.87 - type: mrr_at_1000 value: 53.90200000000001 - type: mrr_at_3 value: 50.77199999999999 - type: mrr_at_5 value: 52.129999999999995 - type: ndcg_at_1 value: 44.135999999999996 - type: ndcg_at_10 value: 44.836 - type: ndcg_at_100 value: 51.754 - type: ndcg_at_1000 value: 54.36 - type: ndcg_at_3 value: 41.658 - type: ndcg_at_5 value: 42.354 - type: precision_at_1 value: 44.135999999999996 - type: precision_at_10 value: 12.284 - type: precision_at_100 value: 1.952 - type: precision_at_1000 value: 0.242 - type: precision_at_3 value: 27.828999999999997 - type: precision_at_5 value: 20.093 - type: recall_at_1 value: 22.319 - type: recall_at_10 value: 51.528 - type: recall_at_100 value: 76.70700000000001 - type: recall_at_1000 value: 92.143 - type: recall_at_3 value: 38.641 - type: recall_at_5 value: 43.653999999999996 - task: type: Retrieval dataset: type: hotpotqa name: MTEB HotpotQA config: default split: test revision: None metrics: - type: map_at_1 value: 40.182 - type: map_at_10 value: 65.146 - type: map_at_100 value: 66.023 - type: map_at_1000 value: 66.078 - type: map_at_3 value: 61.617999999999995 - type: map_at_5 value: 63.82299999999999 - type: mrr_at_1 value: 80.365 - type: mrr_at_10 value: 85.79 - type: mrr_at_100 value: 85.963 - type: mrr_at_1000 value: 85.968 - type: mrr_at_3 value: 84.952 - type: mrr_at_5 value: 85.503 - type: ndcg_at_1 value: 80.365 - type: ndcg_at_10 value: 73.13499999999999 - type: ndcg_at_100 value: 76.133 - type: ndcg_at_1000 value: 77.151 - type: ndcg_at_3 value: 68.255 - type: ndcg_at_5 value: 70.978 - type: precision_at_1 value: 80.365 - type: precision_at_10 value: 15.359 - type: precision_at_100 value: 1.7690000000000001 - type: precision_at_1000 value: 0.19 - type: precision_at_3 value: 44.024 - type: precision_at_5 value: 28.555999999999997 - type: recall_at_1 value: 40.182 - type: recall_at_10 value: 76.793 - type: recall_at_100 value: 88.474 - type: recall_at_1000 value: 95.159 - type: recall_at_3 value: 66.036 - type: recall_at_5 value: 71.391 - task: type: Classification dataset: type: mteb/imdb name: MTEB ImdbClassification config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 92.7796 - type: ap value: 89.24883716810874 - type: f1 value: 92.7706903433313 - task: type: Retrieval dataset: type: msmarco name: MTEB MSMARCO config: default split: dev revision: None metrics: - type: map_at_1 value: 22.016 - type: map_at_10 value: 34.408 - type: map_at_100 value: 35.592 - type: map_at_1000 value: 35.64 - type: map_at_3 value: 30.459999999999997 - type: map_at_5 value: 32.721000000000004 - type: mrr_at_1 value: 22.593 - type: mrr_at_10 value: 34.993 - type: mrr_at_100 value: 36.113 - type: mrr_at_1000 value: 36.156 - type: mrr_at_3 value: 31.101 - type: mrr_at_5 value: 33.364 - type: ndcg_at_1 value: 22.579 - type: ndcg_at_10 value: 41.404999999999994 - type: ndcg_at_100 value: 47.018 - type: ndcg_at_1000 value: 48.211999999999996 - type: ndcg_at_3 value: 33.389 - type: ndcg_at_5 value: 37.425000000000004 - type: precision_at_1 value: 22.579 - type: precision_at_10 value: 6.59 - type: precision_at_100 value: 0.938 - type: 
precision_at_1000 value: 0.104 - type: precision_at_3 value: 14.241000000000001 - type: precision_at_5 value: 10.59 - type: recall_at_1 value: 22.016 - type: recall_at_10 value: 62.927 - type: recall_at_100 value: 88.72 - type: recall_at_1000 value: 97.80799999999999 - type: recall_at_3 value: 41.229 - type: recall_at_5 value: 50.88 - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (en) config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 94.01732786137711 - type: f1 value: 93.76353126402202 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (en) config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 76.91746466028272 - type: f1 value: 57.715651682646765 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (en) config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 76.5030262273033 - type: f1 value: 74.6693629986121 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (en) config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 79.74781439139207 - type: f1 value: 79.96684171018774 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-p2p name: MTEB MedrxivClusteringP2P config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 33.2156206892017 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-s2s name: MTEB MedrxivClusteringS2S config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 31.180539484816137 - task: type: Reranking dataset: type: mteb/mind_small name: MTEB MindSmallReranking config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 32.51125957874274 - type: mrr value: 33.777037359249995 - task: type: Retrieval dataset: type: nfcorpus name: MTEB NFCorpus config: default split: test revision: None metrics: - type: map_at_1 value: 7.248 - type: map_at_10 value: 15.340000000000002 - type: map_at_100 value: 19.591 - type: map_at_1000 value: 21.187 - type: map_at_3 value: 11.329 - type: map_at_5 value: 13.209999999999999 - type: mrr_at_1 value: 47.678 - type: mrr_at_10 value: 57.493 - type: mrr_at_100 value: 58.038999999999994 - type: mrr_at_1000 value: 58.07 - type: mrr_at_3 value: 55.36600000000001 - type: mrr_at_5 value: 56.635999999999996 - type: ndcg_at_1 value: 46.129999999999995 - type: ndcg_at_10 value: 38.653999999999996 - type: ndcg_at_100 value: 36.288 - type: ndcg_at_1000 value: 44.765 - type: ndcg_at_3 value: 43.553 - type: ndcg_at_5 value: 41.317 - type: precision_at_1 value: 47.368 - type: precision_at_10 value: 28.669 - type: precision_at_100 value: 9.158 - type: precision_at_1000 value: 2.207 - type: precision_at_3 value: 40.97 - type: precision_at_5 value: 35.604 - type: recall_at_1 value: 7.248 - type: recall_at_10 value: 19.46 - type: recall_at_100 value: 37.214000000000006 - type: recall_at_1000 value: 67.64099999999999 - type: recall_at_3 value: 12.025 - type: recall_at_5 value: 15.443999999999999 - task: type: Retrieval dataset: type: nq name: MTEB NQ config: default split: test revision: None metrics: - type: map_at_1 value: 31.595000000000002 - type: map_at_10 value: 
47.815999999999995 - type: map_at_100 value: 48.811 - type: map_at_1000 value: 48.835 - type: map_at_3 value: 43.225 - type: map_at_5 value: 46.017 - type: mrr_at_1 value: 35.689 - type: mrr_at_10 value: 50.341 - type: mrr_at_100 value: 51.044999999999995 - type: mrr_at_1000 value: 51.062 - type: mrr_at_3 value: 46.553 - type: mrr_at_5 value: 48.918 - type: ndcg_at_1 value: 35.66 - type: ndcg_at_10 value: 55.859 - type: ndcg_at_100 value: 59.864 - type: ndcg_at_1000 value: 60.419999999999995 - type: ndcg_at_3 value: 47.371 - type: ndcg_at_5 value: 51.995000000000005 - type: precision_at_1 value: 35.66 - type: precision_at_10 value: 9.27 - type: precision_at_100 value: 1.1520000000000001 - type: precision_at_1000 value: 0.12 - type: precision_at_3 value: 21.63 - type: precision_at_5 value: 15.655 - type: recall_at_1 value: 31.595000000000002 - type: recall_at_10 value: 77.704 - type: recall_at_100 value: 94.774 - type: recall_at_1000 value: 98.919 - type: recall_at_3 value: 56.052 - type: recall_at_5 value: 66.623 - task: type: Retrieval dataset: type: quora name: MTEB QuoraRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 71.489 - type: map_at_10 value: 85.411 - type: map_at_100 value: 86.048 - type: map_at_1000 value: 86.064 - type: map_at_3 value: 82.587 - type: map_at_5 value: 84.339 - type: mrr_at_1 value: 82.28 - type: mrr_at_10 value: 88.27199999999999 - type: mrr_at_100 value: 88.362 - type: mrr_at_1000 value: 88.362 - type: mrr_at_3 value: 87.372 - type: mrr_at_5 value: 87.995 - type: ndcg_at_1 value: 82.27 - type: ndcg_at_10 value: 89.023 - type: ndcg_at_100 value: 90.191 - type: ndcg_at_1000 value: 90.266 - type: ndcg_at_3 value: 86.37 - type: ndcg_at_5 value: 87.804 - type: precision_at_1 value: 82.27 - type: precision_at_10 value: 13.469000000000001 - type: precision_at_100 value: 1.533 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 37.797 - type: precision_at_5 value: 24.734 - type: recall_at_1 value: 71.489 - type: recall_at_10 value: 95.824 - type: recall_at_100 value: 99.70599999999999 - type: recall_at_1000 value: 99.979 - type: recall_at_3 value: 88.099 - type: recall_at_5 value: 92.285 - task: type: Clustering dataset: type: mteb/reddit-clustering name: MTEB RedditClustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 60.52398807444541 - task: type: Clustering dataset: type: mteb/reddit-clustering-p2p name: MTEB RedditClusteringP2P config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 65.34855891507871 - task: type: Retrieval dataset: type: scidocs name: MTEB SCIDOCS config: default split: test revision: None metrics: - type: map_at_1 value: 5.188000000000001 - type: map_at_10 value: 13.987 - type: map_at_100 value: 16.438 - type: map_at_1000 value: 16.829 - type: map_at_3 value: 9.767000000000001 - type: map_at_5 value: 11.912 - type: mrr_at_1 value: 25.6 - type: mrr_at_10 value: 37.744 - type: mrr_at_100 value: 38.847 - type: mrr_at_1000 value: 38.894 - type: mrr_at_3 value: 34.166999999999994 - type: mrr_at_5 value: 36.207 - type: ndcg_at_1 value: 25.6 - type: ndcg_at_10 value: 22.980999999999998 - type: ndcg_at_100 value: 32.039 - type: ndcg_at_1000 value: 38.157000000000004 - type: ndcg_at_3 value: 21.567 - type: ndcg_at_5 value: 19.070999999999998 - type: precision_at_1 value: 25.6 - type: precision_at_10 value: 12.02 - type: precision_at_100 value: 2.5100000000000002 - type: 
precision_at_1000 value: 0.396 - type: precision_at_3 value: 20.333000000000002 - type: precision_at_5 value: 16.98 - type: recall_at_1 value: 5.188000000000001 - type: recall_at_10 value: 24.372 - type: recall_at_100 value: 50.934999999999995 - type: recall_at_1000 value: 80.477 - type: recall_at_3 value: 12.363 - type: recall_at_5 value: 17.203 - task: type: STS dataset: type: mteb/sickr-sts name: MTEB SICK-R config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 87.24286275535398 - type: cos_sim_spearman value: 82.62333770991818 - type: euclidean_pearson value: 84.60353717637284 - type: euclidean_spearman value: 82.32990108810047 - type: manhattan_pearson value: 84.6089049738196 - type: manhattan_spearman value: 82.33361785438936 - task: type: STS dataset: type: mteb/sts12-sts name: MTEB STS12 config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 87.87428858503165 - type: cos_sim_spearman value: 79.09145886519929 - type: euclidean_pearson value: 86.42669231664036 - type: euclidean_spearman value: 80.03127375435449 - type: manhattan_pearson value: 86.41330338305022 - type: manhattan_spearman value: 80.02492538673368 - task: type: STS dataset: type: mteb/sts13-sts name: MTEB STS13 config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 88.67912277322645 - type: cos_sim_spearman value: 89.6171319711762 - type: euclidean_pearson value: 86.56571917398725 - type: euclidean_spearman value: 87.71216907898948 - type: manhattan_pearson value: 86.57459050182473 - type: manhattan_spearman value: 87.71916648349993 - task: type: STS dataset: type: mteb/sts14-sts name: MTEB STS14 config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 86.71957379085862 - type: cos_sim_spearman value: 85.01784075851465 - type: euclidean_pearson value: 84.7407848472801 - type: euclidean_spearman value: 84.61063091345538 - type: manhattan_pearson value: 84.71494352494403 - type: manhattan_spearman value: 84.58772077604254 - task: type: STS dataset: type: mteb/sts15-sts name: MTEB STS15 config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 88.40508326325175 - type: cos_sim_spearman value: 89.50912897763186 - type: euclidean_pearson value: 87.82349070086627 - type: euclidean_spearman value: 88.44179162727521 - type: manhattan_pearson value: 87.80181927025595 - type: manhattan_spearman value: 88.43205129636243 - task: type: STS dataset: type: mteb/sts16-sts name: MTEB STS16 config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 85.35846741715478 - type: cos_sim_spearman value: 86.61172476741842 - type: euclidean_pearson value: 84.60123125491637 - type: euclidean_spearman value: 85.3001948141827 - type: manhattan_pearson value: 84.56231142658329 - type: manhattan_spearman value: 85.23579900798813 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (en-en) config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 88.94539129818824 - type: cos_sim_spearman value: 88.99349064256742 - type: euclidean_pearson value: 88.7142444640351 - type: euclidean_spearman value: 88.34120813505011 - type: manhattan_pearson value: 88.70363008238084 - type: manhattan_spearman value: 
88.31952816956954 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (en) config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 68.29910260369893 - type: cos_sim_spearman value: 68.79263346213466 - type: euclidean_pearson value: 68.41627521422252 - type: euclidean_spearman value: 66.61602587398579 - type: manhattan_pearson value: 68.49402183447361 - type: manhattan_spearman value: 66.80157792354453 - task: type: STS dataset: type: mteb/stsbenchmark-sts name: MTEB STSBenchmark config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 87.43703906343708 - type: cos_sim_spearman value: 89.06081805093662 - type: euclidean_pearson value: 87.48311456299662 - type: euclidean_spearman value: 88.07417597580013 - type: manhattan_pearson value: 87.48202249768894 - type: manhattan_spearman value: 88.04758031111642 - task: type: Reranking dataset: type: mteb/scidocs-reranking name: MTEB SciDocsRR config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 87.49080620485203 - type: mrr value: 96.19145378949301 - task: type: Retrieval dataset: type: scifact name: MTEB SciFact config: default split: test revision: None metrics: - type: map_at_1 value: 59.317 - type: map_at_10 value: 69.296 - type: map_at_100 value: 69.738 - type: map_at_1000 value: 69.759 - type: map_at_3 value: 66.12599999999999 - type: map_at_5 value: 67.532 - type: mrr_at_1 value: 62 - type: mrr_at_10 value: 70.176 - type: mrr_at_100 value: 70.565 - type: mrr_at_1000 value: 70.583 - type: mrr_at_3 value: 67.833 - type: mrr_at_5 value: 68.93299999999999 - type: ndcg_at_1 value: 62 - type: ndcg_at_10 value: 74.069 - type: ndcg_at_100 value: 76.037 - type: ndcg_at_1000 value: 76.467 - type: ndcg_at_3 value: 68.628 - type: ndcg_at_5 value: 70.57600000000001 - type: precision_at_1 value: 62 - type: precision_at_10 value: 10 - type: precision_at_100 value: 1.097 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 26.667 - type: precision_at_5 value: 17.4 - type: recall_at_1 value: 59.317 - type: recall_at_10 value: 87.822 - type: recall_at_100 value: 96.833 - type: recall_at_1000 value: 100 - type: recall_at_3 value: 73.06099999999999 - type: recall_at_5 value: 77.928 - task: type: PairClassification dataset: type: mteb/sprintduplicatequestions-pairclassification name: MTEB SprintDuplicateQuestions config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.88910891089108 - type: cos_sim_ap value: 97.236958456951 - type: cos_sim_f1 value: 94.39999999999999 - type: cos_sim_precision value: 94.39999999999999 - type: cos_sim_recall value: 94.39999999999999 - type: dot_accuracy value: 99.82574257425742 - type: dot_ap value: 94.94344759441888 - type: dot_f1 value: 91.17352056168507 - type: dot_precision value: 91.44869215291752 - type: dot_recall value: 90.9 - type: euclidean_accuracy value: 99.88415841584158 - type: euclidean_ap value: 97.2044250782305 - type: euclidean_f1 value: 94.210786739238 - type: euclidean_precision value: 93.24191968658178 - type: euclidean_recall value: 95.19999999999999 - type: manhattan_accuracy value: 99.88613861386139 - type: manhattan_ap value: 97.20683205497689 - type: manhattan_f1 value: 94.2643391521197 - type: manhattan_precision value: 94.02985074626866 - type: manhattan_recall value: 94.5 - type: max_accuracy value: 
99.88910891089108 - type: max_ap value: 97.236958456951 - type: max_f1 value: 94.39999999999999 - task: type: Clustering dataset: type: mteb/stackexchange-clustering name: MTEB StackExchangeClustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 66.53940781726187 - task: type: Clustering dataset: type: mteb/stackexchange-clustering-p2p name: MTEB StackExchangeClusteringP2P config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 36.71865011295108 - task: type: Reranking dataset: type: mteb/stackoverflowdupquestions-reranking name: MTEB StackOverflowDupQuestions config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 55.3218674533331 - type: mrr value: 56.28279910449028 - task: type: Summarization dataset: type: mteb/summeval name: MTEB SummEval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 30.723915667479673 - type: cos_sim_spearman value: 32.029070449745234 - type: dot_pearson value: 28.864944212481454 - type: dot_spearman value: 27.939266999596725 - task: type: Retrieval dataset: type: trec-covid name: MTEB TRECCOVID config: default split: test revision: None metrics: - type: map_at_1 value: 0.231 - type: map_at_10 value: 1.949 - type: map_at_100 value: 10.023 - type: map_at_1000 value: 23.485 - type: map_at_3 value: 0.652 - type: map_at_5 value: 1.054 - type: mrr_at_1 value: 86 - type: mrr_at_10 value: 92.067 - type: mrr_at_100 value: 92.067 - type: mrr_at_1000 value: 92.067 - type: mrr_at_3 value: 91.667 - type: mrr_at_5 value: 92.067 - type: ndcg_at_1 value: 83 - type: ndcg_at_10 value: 76.32900000000001 - type: ndcg_at_100 value: 54.662 - type: ndcg_at_1000 value: 48.062 - type: ndcg_at_3 value: 81.827 - type: ndcg_at_5 value: 80.664 - type: precision_at_1 value: 86 - type: precision_at_10 value: 80 - type: precision_at_100 value: 55.48 - type: precision_at_1000 value: 20.938000000000002 - type: precision_at_3 value: 85.333 - type: precision_at_5 value: 84.39999999999999 - type: recall_at_1 value: 0.231 - type: recall_at_10 value: 2.158 - type: recall_at_100 value: 13.344000000000001 - type: recall_at_1000 value: 44.31 - type: recall_at_3 value: 0.6779999999999999 - type: recall_at_5 value: 1.13 - task: type: Retrieval dataset: type: webis-touche2020 name: MTEB Touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 2.524 - type: map_at_10 value: 10.183 - type: map_at_100 value: 16.625 - type: map_at_1000 value: 18.017 - type: map_at_3 value: 5.169 - type: map_at_5 value: 6.772 - type: mrr_at_1 value: 32.653 - type: mrr_at_10 value: 47.128 - type: mrr_at_100 value: 48.458 - type: mrr_at_1000 value: 48.473 - type: mrr_at_3 value: 44.897999999999996 - type: mrr_at_5 value: 45.306000000000004 - type: ndcg_at_1 value: 30.612000000000002 - type: ndcg_at_10 value: 24.928 - type: ndcg_at_100 value: 37.613 - type: ndcg_at_1000 value: 48.528 - type: ndcg_at_3 value: 28.829 - type: ndcg_at_5 value: 25.237 - type: precision_at_1 value: 32.653 - type: precision_at_10 value: 22.448999999999998 - type: precision_at_100 value: 8.02 - type: precision_at_1000 value: 1.537 - type: precision_at_3 value: 30.612000000000002 - type: precision_at_5 value: 24.490000000000002 - type: recall_at_1 value: 2.524 - type: recall_at_10 value: 16.38 - type: recall_at_100 value: 49.529 - type: recall_at_1000 value: 83.598 - type: recall_at_3 
value: 6.411 - type: recall_at_5 value: 8.932 - task: type: Classification dataset: type: mteb/toxic_conversations_50k name: MTEB ToxicConversationsClassification config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 71.09020000000001 - type: ap value: 14.451710060978993 - type: f1 value: 54.7874410609049 - task: type: Classification dataset: type: mteb/tweet_sentiment_extraction name: MTEB TweetSentimentExtractionClassification config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 59.745331069609506 - type: f1 value: 60.08387848592697 - task: type: Clustering dataset: type: mteb/twentynewsgroups-clustering name: MTEB TwentyNewsgroupsClustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 51.71549485462037 - task: type: PairClassification dataset: type: mteb/twittersemeval2015-pairclassification name: MTEB TwitterSemEval2015 config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 87.39345532574357 - type: cos_sim_ap value: 78.16796549696478 - type: cos_sim_f1 value: 71.27713276123171 - type: cos_sim_precision value: 68.3115626511853 - type: cos_sim_recall value: 74.51187335092348 - type: dot_accuracy value: 85.12248912201228 - type: dot_ap value: 69.26039256107077 - type: dot_f1 value: 65.04294321240867 - type: dot_precision value: 63.251059586138126 - type: dot_recall value: 66.93931398416886 - type: euclidean_accuracy value: 87.07754664123503 - type: euclidean_ap value: 77.7872176038945 - type: euclidean_f1 value: 70.85587801278899 - type: euclidean_precision value: 66.3519115614924 - type: euclidean_recall value: 76.01583113456465 - type: manhattan_accuracy value: 87.07754664123503 - type: manhattan_ap value: 77.7341400185556 - type: manhattan_f1 value: 70.80310880829015 - type: manhattan_precision value: 69.54198473282443 - type: manhattan_recall value: 72.1108179419525 - type: max_accuracy value: 87.39345532574357 - type: max_ap value: 78.16796549696478 - type: max_f1 value: 71.27713276123171 - task: type: PairClassification dataset: type: mteb/twitterurlcorpus-pairclassification name: MTEB TwitterURLCorpus config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 89.09457833663213 - type: cos_sim_ap value: 86.33024314706873 - type: cos_sim_f1 value: 78.59623733719248 - type: cos_sim_precision value: 74.13322413322413 - type: cos_sim_recall value: 83.63104404065291 - type: dot_accuracy value: 88.3086894089339 - type: dot_ap value: 83.92225241805097 - type: dot_f1 value: 76.8721826377781 - type: dot_precision value: 72.8168044077135 - type: dot_recall value: 81.40591315060055 - type: euclidean_accuracy value: 88.77052043311213 - type: euclidean_ap value: 85.7410710218755 - type: euclidean_f1 value: 77.97705489398781 - type: euclidean_precision value: 73.77713657598241 - type: euclidean_recall value: 82.68401601478288 - type: manhattan_accuracy value: 88.73753250281368 - type: manhattan_ap value: 85.72867199072802 - type: manhattan_f1 value: 77.89774182922812 - type: manhattan_precision value: 74.23787931635857 - type: manhattan_recall value: 81.93717277486911 - type: max_accuracy value: 89.09457833663213 - type: max_ap value: 86.33024314706873 - type: max_f1 value: 78.59623733719248 license: mit language: - en --- # [Universal AnglE Embedding](https://github.com/SeanLee97/AnglE) 📢 
`WhereIsAI/UAE-Large-V1` **is licensed under MIT. Feel free to use it in any scenario.**
**If you use it for academic papers, please cite us via 👉 [citation info](#citation).**

**🤝 Follow us on:**

- GitHub: https://github.com/SeanLee97/AnglE.
- Arxiv: https://arxiv.org/abs/2309.12871 (ACL24)
- 📘 Document: https://angle.readthedocs.io/en/latest/index.html

You are welcome to use AnglE to train and infer powerful sentence embeddings.

**🏆 Achievements**

- 📅 May 16, 2024 | AnglE's paper is accepted by the ACL 2024 Main Conference
- 📅 Dec 4, 2023 | 🔥 Our universal English sentence embedding `WhereIsAI/UAE-Large-V1` achieves **SOTA** on the [MTEB Leaderboard](https://huggingface.co/spaces/mteb/leaderboard) with an average score of 64.64!

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/635cc29de7aef2358a9b03ee/jY3tr0DCMdyJXOihSqJFr.jpeg)

**🧑‍🤝‍🧑 Siblings:**

- [WhereIsAI/UAE-Code-Large-V1](https://huggingface.co/WhereIsAI/UAE-Code-Large-V1): This model can be used for code or GitHub issue similarity measurement.

# Usage

## 1. angle_emb

```bash
python -m pip install -U angle-emb
```

1) Non-Retrieval Tasks

There is no need to specify any prompts.

```python
from angle_emb import AnglE
from angle_emb.utils import cosine_similarity

angle = AnglE.from_pretrained('WhereIsAI/UAE-Large-V1', pooling_strategy='cls').cuda()
doc_vecs = angle.encode([
    'The weather is great!',
    'The weather is very good!',
    'i am going to bed'
], normalize_embedding=True)

for i, dv1 in enumerate(doc_vecs):
    for dv2 in doc_vecs[i+1:]:
        print(cosine_similarity(dv1, dv2))
```

2) Retrieval Tasks

For retrieval purposes, please use the prompt `Prompts.C` for queries (not for documents).

```python
from angle_emb import AnglE, Prompts
from angle_emb.utils import cosine_similarity

angle = AnglE.from_pretrained('WhereIsAI/UAE-Large-V1', pooling_strategy='cls').cuda()
qv = angle.encode(Prompts.C.format(text='what is the weather?'))
doc_vecs = angle.encode([
    'The weather is great!',
    'it is rainy today.',
    'i am going to bed'
])

for dv in doc_vecs:
    print(cosine_similarity(qv[0], dv))
```

## 2. sentence transformer

```python
from scipy import spatial
from angle_emb import Prompts
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("WhereIsAI/UAE-Large-V1").cuda()
qv = model.encode(Prompts.C.format(text='what is the weather?'))
doc_vecs = model.encode([
    'The weather is great!',
    'it is rainy today.',
    'i am going to bed'
])

for dv in doc_vecs:
    # cosine similarity = 1 - cosine distance
    print(1 - spatial.distance.cosine(qv, dv))
```

# Citation

If you use our pre-trained models, please support us by citing our work:

```
@article{li2023angle,
  title={AnglE-optimized Text Embeddings},
  author={Li, Xianming and Li, Jing},
  journal={arXiv preprint arXiv:2309.12871},
  year={2023}
}
```
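For completeness, a minimal plain-`transformers` sketch is shown below. It assumes CLS pooling, consistent with `pooling_strategy='cls'` in the examples above; the helper name `cls_embed` is illustrative, not part of any library.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('WhereIsAI/UAE-Large-V1')
model = AutoModel.from_pretrained('WhereIsAI/UAE-Large-V1')

def cls_embed(texts):
    # Tokenize, run the encoder, and take the [CLS] token's last hidden state
    # as the sentence embedding (CLS pooling, matching the examples above).
    inputs = tokenizer(texts, padding=True, truncation=True, return_tensors='pt')
    with torch.no_grad():
        last_hidden = model(**inputs).last_hidden_state
    emb = last_hidden[:, 0]
    return torch.nn.functional.normalize(emb, p=2, dim=1)

vecs = cls_embed(['The weather is great!', 'it is rainy today.'])
print(vecs @ vecs.T)  # cosine similarities of the normalized embeddings
```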
nateraw/vit-age-classifier
nateraw
"2024-07-28T23:24:56Z"
1,138,586
110
transformers
[ "transformers", "pytorch", "safetensors", "vit", "image-classification", "dataset:nateraw/fairface", "doi:10.57967/hf/1259", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2022-03-02T23:29:05Z"
---
tags:
- image-classification
- pytorch
datasets:
- nateraw/fairface
---

A vision transformer finetuned to classify the age of a given person's face.

```python
import requests
from PIL import Image
from io import BytesIO

from transformers import ViTFeatureExtractor, ViTForImageClassification

# Get example image from official fairface repo + read it in as an image
r = requests.get('https://github.com/dchen236/FairFace/blob/master/detected_faces/race_Asian_face0.jpg?raw=true')
im = Image.open(BytesIO(r.content))

# Init model, transforms
model = ViTForImageClassification.from_pretrained('nateraw/vit-age-classifier')
transforms = ViTFeatureExtractor.from_pretrained('nateraw/vit-age-classifier')

# Transform our image and pass it through the model
inputs = transforms(im, return_tensors='pt')
output = model(**inputs)

# Predicted class probabilities
proba = output.logits.softmax(1)

# Predicted classes
preds = proba.argmax(1)
```
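The snippet above leaves `preds` as integer class indices. Continuing from it, a short sketch (not from the original card) that maps those indices to the human-readable age buckets stored in the standard `id2label` field of the model config:

```python
# Continuing from the snippet above: translate predicted indices into the
# age-range labels that ship with the model's config.
labels = [model.config.id2label[int(i)] for i in preds]
confidences = proba.max(1).values.tolist()
print(list(zip(labels, confidences)))
```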
j-hartmann/emotion-english-distilroberta-base
j-hartmann
"2023-01-02T13:03:10Z"
1,137,731
351
transformers
[ "transformers", "pytorch", "tf", "roberta", "text-classification", "distilroberta", "sentiment", "emotion", "twitter", "reddit", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2022-03-02T23:29:05Z"
--- language: "en" tags: - distilroberta - sentiment - emotion - twitter - reddit widget: - text: "Oh wow. I didn't know that." - text: "This movie always makes me cry.." - text: "Oh Happy Day" --- # Emotion English DistilRoBERTa-base # Description ℹ With this model, you can classify emotions in English text data. The model was trained on 6 diverse datasets (see Appendix below) and predicts Ekman's 6 basic emotions, plus a neutral class: 1) anger 🤬 2) disgust 🤢 3) fear 😨 4) joy 😀 5) neutral 😐 6) sadness 😭 7) surprise 😲 The model is a fine-tuned checkpoint of [DistilRoBERTa-base](https://huggingface.co/distilroberta-base). For a 'non-distilled' emotion model, please refer to the model card of the [RoBERTa-large](https://huggingface.co/j-hartmann/emotion-english-roberta-large) version. # Application 🚀 a) Run emotion model with 3 lines of code on single text example using Hugging Face's pipeline command on Google Colab: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/j-hartmann/emotion-english-distilroberta-base/blob/main/simple_emotion_pipeline.ipynb) ```python from transformers import pipeline classifier = pipeline("text-classification", model="j-hartmann/emotion-english-distilroberta-base", return_all_scores=True) classifier("I love this!") ``` ```python Output: [[{'label': 'anger', 'score': 0.004419783595949411}, {'label': 'disgust', 'score': 0.0016119900392368436}, {'label': 'fear', 'score': 0.0004138521908316761}, {'label': 'joy', 'score': 0.9771687984466553}, {'label': 'neutral', 'score': 0.005764586851000786}, {'label': 'sadness', 'score': 0.002092392183840275}, {'label': 'surprise', 'score': 0.008528684265911579}]] ``` b) Run emotion model on multiple examples and full datasets (e.g., .csv files) on Google Colab: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/j-hartmann/emotion-english-distilroberta-base/blob/main/emotion_prediction_example.ipynb) # Contact 💻 Please reach out to [jochen.hartmann@tum.de](mailto:jochen.hartmann@tum.de) if you have any questions or feedback. Thanks to Samuel Domdey and [chrsiebert](https://huggingface.co/siebert) for their support in making this model available. # Reference ✅ For attribution, please cite the following reference if you use this model. A working paper will be available soon. ``` Jochen Hartmann, "Emotion English DistilRoBERTa-base". https://huggingface.co/j-hartmann/emotion-english-distilroberta-base/, 2022. ``` BibTex citation: ``` @misc{hartmann2022emotionenglish, author={Hartmann, Jochen}, title={Emotion English DistilRoBERTa-base}, year={2022}, howpublished = {\url{https://huggingface.co/j-hartmann/emotion-english-distilroberta-base/}}, } ``` # Appendix 📚 Please find an overview of the datasets used for training below. All datasets contain English text. The table summarizes which emotions are available in each of the datasets. The datasets represent a diverse collection of text types. Specifically, they contain emotion labels for texts from Twitter, Reddit, student self-reports, and utterances from TV dialogues. As MELD (Multimodal EmotionLines Dataset) extends the popular EmotionLines dataset, EmotionLines itself is not included here. |Name|anger|disgust|fear|joy|neutral|sadness|surprise| |---|---|---|---|---|---|---|---| |Crowdflower (2016)|Yes|-|-|Yes|Yes|Yes|Yes| |Emotion Dataset, Elvis et al. (2018)|Yes|-|Yes|Yes|-|Yes|Yes| |GoEmotions, Demszky et al. 
(2020)|Yes|Yes|Yes|Yes|Yes|Yes|Yes| |ISEAR, Vikash (2018)|Yes|Yes|Yes|Yes|-|Yes|-| |MELD, Poria et al. (2019)|Yes|Yes|Yes|Yes|Yes|Yes|Yes| |SemEval-2018, EI-reg, Mohammad et al. (2018) |Yes|-|Yes|Yes|-|Yes|-| The model is trained on a balanced subset from the datasets listed above (2,811 observations per emotion, i.e., nearly 20k observations in total). 80% of this balanced subset is used for training and 20% for evaluation. The evaluation accuracy is 66% (vs. the random-chance baseline of 1/7 = 14%). # Scientific Applications 📖 Below you can find a list of papers using "Emotion English DistilRoBERTa-base". If you would like your paper to be added to the list, please send me an email. - Butt, S., Sharma, S., Sharma, R., Sidorov, G., & Gelbukh, A. (2022). What goes on inside rumour and non-rumour tweets and their reactions: A Psycholinguistic Analyses. Computers in Human Behavior, 107345. - Kuang, Z., Zong, S., Zhang, J., Chen, J., & Liu, H. (2022). Music-to-Text Synaesthesia: Generating Descriptive Text from Music Recordings. arXiv preprint arXiv:2210.00434. - Rozado, D., Hughes, R., & Halberstadt, J. (2022). Longitudinal analysis of sentiment and emotion in news media headlines using automated labelling with Transformer language models. Plos one, 17(10), e0276367.
peft-internal-testing/tiny-dummy-qwen2
peft-internal-testing
"2024-07-04T10:52:09Z"
1,134,669
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-07-04T10:15:41Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
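Since the auto-generated card above leaves the "How to Get Started" section empty, here is a minimal, hypothetical loading sketch. It assumes the standard `transformers` auto classes work for this checkpoint, as the `text-generation` tag suggests; because this is a tiny dummy model for internal testing, the generated text is not expected to be meaningful:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed usage for this tiny test checkpoint.
model_id = "peft-internal-testing/tiny-dummy-qwen2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0]))
```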
TheBloke/Mistral-7B-Instruct-v0.1-GPTQ
TheBloke
"2023-09-29T20:48:48Z"
1,133,617
77
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "finetuned", "conversational", "base_model:mistralai/Mistral-7B-Instruct-v0.1", "base_model:quantized:mistralai/Mistral-7B-Instruct-v0.1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "4-bit", "gptq", "region:us" ]
text-generation
"2023-09-28T22:34:03Z"
--- base_model: mistralai/Mistral-7B-Instruct-v0.1 inference: false license: apache-2.0 model_creator: Mistral AI model_name: Mistral 7B Instruct v0.1 model_type: mistral pipeline_tag: text-generation prompt_template: '<s>[INST] {prompt} [/INST]' quantized_by: TheBloke tags: - finetuned --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Mistral 7B Instruct v0.1 - GPTQ - Model creator: [Mistral AI](https://huggingface.co/mistralai) - Original model: [Mistral 7B Instruct v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) <!-- description start --> ## Description This repo contains GPTQ model files for [Mistral AI's Mistral 7B Instruct v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1). Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them. ### GPTQs will work in ExLlama, or via Transformers (requiring Transformers from Github) These models are confirmed to work with ExLlama v1. At the time of writing (September 28th), AutoGPTQ has not yet added support for the new Mistral models. These GPTQs were made directly from Transformers, and so can be loaded via the Transformers interface. They can't be loaded directly from AutoGPTQ. To load them via Transformers, you will need to install Transformers from Github, with: ``` pip3 install git+https://github.com/huggingface/transformers.git@72958fcd3c98a7afdc61f953aa58c544ebda2f79 ``` <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF) * [Mistral AI's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Mistral ``` <s>[INST] {prompt} [/INST] ``` <!-- prompt-template end --> <!-- README_GPTQ.md-provided-files start --> ## Provided files, and GPTQ parameters Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements. 
Each separate quant is in a different branch. See below for instructions on fetching from different branches. These files were made with Transformers 4.34.0.dev0, from commit 72958fcd3c98a7afdc61f953aa58c544ebda2f79. <details> <summary>Explanation of GPTQ parameters</summary> - Bits: The bit size of the quantised model. - GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value. - Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now. - Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy. - GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s). - Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences. - ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit. </details> | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc | | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- | | [main](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 32768 | 4.16 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. | | [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 32768 | 4.57 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. | | [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 32768 | 7.68 GB | Yes | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. | | [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 32768 | 8.17 GB | Yes | 8-bit, with group size 32g and Act Order for maximum inference quality. | <!-- README_GPTQ.md-provided-files end --> <!-- README_GPTQ.md-download-from-branches start --> ## How to download, including from branches ### In text-generation-webui To download from the `main` branch, enter `TheBloke/Mistral-7B-Instruct-v0.1-GPTQ` in the "Download model" box. 
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/Mistral-7B-Instruct-v0.1-GPTQ:gptq-4bit-32g-actorder_True` ### From the command line I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` To download the `main` branch to a folder called `Mistral-7B-Instruct-v0.1-GPTQ`: ```shell mkdir Mistral-7B-Instruct-v0.1-GPTQ huggingface-cli download TheBloke/Mistral-7B-Instruct-v0.1-GPTQ --local-dir Mistral-7B-Instruct-v0.1-GPTQ --local-dir-use-symlinks False ``` To download from a different branch, add the `--revision` parameter: ```shell mkdir Mistral-7B-Instruct-v0.1-GPTQ huggingface-cli download TheBloke/Mistral-7B-Instruct-v0.1-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir Mistral-7B-Instruct-v0.1-GPTQ --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Huggingface cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a download model. The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`. For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell mkdir Mistral-7B-Instruct-v0.1-GPTQ HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Mistral-7B-Instruct-v0.1-GPTQ --local-dir Mistral-7B-Instruct-v0.1-GPTQ --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> ### With `git` (**not** recommended) To clone a specific branch with `git`, use a command like this: ```shell git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GPTQ ``` Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.) <!-- README_GPTQ.md-download-from-branches end --> <!-- README_GPTQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui). These models are confirmed to work via the ExLlama Loader in text-generation-webui. Use **Loader: ExLlama** - or Transformers may work too. AutoGPTQ will not work. 
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/Mistral-7B-Instruct-v0.1-GPTQ`. - To download from a specific branch, enter for example `TheBloke/Mistral-7B-Instruct-v0.1-GPTQ:gptq-4bit-32g-actorder_True` - see Provided Files above for the list of branches for each option. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `Mistral-7B-Instruct-v0.1-GPTQ` 7. The model will automatically load, and is now ready for use! 8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. * Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`. 9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started! <!-- README_GPTQ.md-text-generation-webui end --> <!-- README_GPTQ.md-use-from-python start --> ## How to use this GPTQ model from Python code ### Install the necessary packages Requires: Transformers 4.34.0.dev0 from Github source, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later. ```shell pip3 install optimum pip3 install git+https://github.com/huggingface/transformers.git@72958fcd3c98a7afdc61f953aa58c544ebda2f79 pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7 ``` If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead: ```shell pip3 uninstall -y auto-gptq git clone https://github.com/PanQiWei/AutoGPTQ cd AutoGPTQ git checkout v0.4.2 pip3 install . ``` ### You can then use the following code ```python from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline model_name_or_path = "TheBloke/Mistral-7B-Instruct-v0.1-GPTQ" # To use a different branch, change revision # For example: revision="gptq-4bit-32g-actorder_True" model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto", trust_remote_code=False, revision="main") tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) prompt = "Tell me about AI" prompt_template=f'''<s>[INST] {prompt} [/INST] ''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1 ) print(pipe(prompt_template)[0]['generated_text']) ``` <!-- README_GPTQ.md-use-from-python end --> <!-- README_GPTQ.md-compatibility start --> ## Compatibility The files provided are only tested to work with ExLlama v1, and Transformers 4.34.0.dev0 as of commit 72958fcd3c98a7afdc61f953aa58c544ebda2f79. 
<!-- README_GPTQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: Mistral AI's Mistral 7B Instruct v0.1 # Model Card for Mistral-7B-Instruct-v0.1 The Mistral-7B-Instruct-v0.1 Large Language Model (LLM) is an instruct fine-tuned version of the [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) generative text model using a variety of publicly available conversation datasets. For full details of this model please read our [release blog post](https://mistral.ai/news/announcing-mistral-7b/). ## Instruction format In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. 
The very first instruction should begin with a begin-of-sentence id. The next instructions should not. The assistant generation will be ended by the end-of-sentence token id. E.g. ``` text = "<s>[INST] What is your favourite condiment? [/INST]" "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> " "[INST] Do you have mayonnaise recipes? [/INST]" ``` This format is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating) via the `apply_chat_template()` method: ```python from transformers import AutoModelForCausalLM, AutoTokenizer device = "cuda" # the device to load the model onto model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1") tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1") messages = [ {"role": "user", "content": "What is your favourite condiment?"}, {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"}, {"role": "user", "content": "Do you have mayonnaise recipes?"} ] encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt") model_inputs = encodeds.to(device) model.to(device) generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True) decoded = tokenizer.batch_decode(generated_ids) print(decoded[0]) ``` ## Model Architecture This instruction model is based on Mistral-7B-v0.1, a transformer model with the following architecture choices: - Grouped-Query Attention - Sliding-Window Attention - Byte-fallback BPE tokenizer ## Troubleshooting - If you see the following error: ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/transformers/models/auto/auto_factory.py", line 482, in from_pretrained config, kwargs = AutoConfig.from_pretrained( File "/transformers/models/auto/configuration_auto.py", line 1022, in from_pretrained config_class = CONFIG_MAPPING[config_dict["model_type"]] File "/transformers/models/auto/configuration_auto.py", line 723, in __getitem__ raise KeyError(key) KeyError: 'mistral' ``` Installing transformers from source should solve the issue: `pip install git+https://github.com/huggingface/transformers`. This should not be required after transformers-v4.33.4. ## Limitations The Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance. It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs. ## The Mistral AI Team Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
stabilityai/stable-diffusion-2-1
stabilityai
"2023-07-05T16:19:17Z"
1,100,005
3,878
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "text-to-image", "arxiv:2112.10752", "arxiv:2202.00512", "arxiv:1910.09700", "license:openrail++", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2022-12-06T17:24:51Z"
--- license: openrail++ tags: - stable-diffusion - text-to-image pinned: true --- # Stable Diffusion v2-1 Model Card This model card focuses on the model associated with the Stable Diffusion v2-1 model, codebase available [here](https://github.com/Stability-AI/stablediffusion). This `stable-diffusion-2-1` model is fine-tuned from [stable-diffusion-2](https://huggingface.co/stabilityai/stable-diffusion-2) (`768-v-ema.ckpt`) with an additional 55k steps on the same dataset (with `punsafe=0.1`), and then fine-tuned for another 155k extra steps with `punsafe=0.98`. - Use it with the [`stablediffusion`](https://github.com/Stability-AI/stablediffusion) repository: download the `v2-1_768-ema-pruned.ckpt` [here](https://huggingface.co/stabilityai/stable-diffusion-2-1/blob/main/v2-1_768-ema-pruned.ckpt). - Use it with 🧨 [`diffusers`](#examples) ## Model Details - **Developed by:** Robin Rombach, Patrick Esser - **Model type:** Diffusion-based text-to-image generation model - **Language(s):** English - **License:** [CreativeML Open RAIL++-M License](https://huggingface.co/stabilityai/stable-diffusion-2/blob/main/LICENSE-MODEL) - **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([OpenCLIP-ViT/H](https://github.com/mlfoundations/open_clip)). - **Resources for more information:** [GitHub Repository](https://github.com/Stability-AI/). - **Cite as:** @InProceedings{Rombach_2022_CVPR, author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn}, title = {High-Resolution Image Synthesis With Latent Diffusion Models}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, month = {June}, year = {2022}, pages = {10684-10695} } ## Examples Using the [🤗's Diffusers library](https://github.com/huggingface/diffusers) to run Stable Diffusion 2 in a simple and efficient manner. ```bash pip install diffusers transformers accelerate scipy safetensors ``` Running the pipeline (if you don't swap the scheduler it will run with the default DDIM; in this example we swap it to DPMSolverMultistepScheduler): ```python import torch from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler model_id = "stabilityai/stable-diffusion-2-1" # Use the DPMSolverMultistepScheduler (DPM-Solver++) scheduler here instead pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16) pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) pipe = pipe.to("cuda") prompt = "a photo of an astronaut riding a horse on mars" image = pipe(prompt).images[0] image.save("astronaut_rides_horse.png") ``` **Notes**: - Despite not being a dependency, we highly recommend that you install [xformers](https://github.com/facebookresearch/xformers) for memory-efficient attention (better performance) - If you have low GPU RAM available, make sure to add a `pipe.enable_attention_slicing()` after sending it to `cuda` for less VRAM usage (at the cost of speed); a short low-VRAM sketch is included at the end of this card # Uses ## Direct Use The model is intended for research purposes only. Possible research areas and tasks include - Safe deployment of models which have the potential to generate harmful content. - Probing and understanding the limitations and biases of generative models. - Generation of artworks and use in design and other artistic processes. 
- Applications in educational or creative tools. - Research on generative models. Excluded uses are described below. ### Misuse, Malicious Use, and Out-of-Scope Use _Note: This section is originally taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), was used for Stable Diffusion v1, but applies in the same way to Stable Diffusion v2_. The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes. #### Out-of-Scope Use The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model. #### Misuse and Malicious Use Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to: - Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc. - Intentionally promoting or propagating discriminatory content or harmful stereotypes. - Impersonating individuals without their consent. - Sexual content without consent of the people who might see it. - Mis- and disinformation - Representations of egregious violence and gore - Sharing of copyrighted or licensed material in violation of its terms of use. - Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use. ## Limitations and Bias ### Limitations - The model does not achieve perfect photorealism - The model cannot render legible text - The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere” - Faces and people in general may not be generated properly. - The model was trained mainly with English captions and will not work as well in other languages. - The autoencoding part of the model is lossy - The model was trained on a subset of the large-scale dataset [LAION-5B](https://laion.ai/blog/laion-5b/), which contains adult, violent and sexual content. To partially mitigate this, we have filtered the dataset using LAION's NSFW detector (see Training section). ### Bias While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases. Stable Diffusion was primarily trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/), which consists of images that are limited to English descriptions. Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for. This affects the overall output of the model, as white and western cultures are often set as the default. Further, the ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts. Stable Diffusion v2 mirrors and exacerbates biases to such a degree that viewer discretion must be advised irrespective of the input or its intent. ## Training **Training Data** The model developers used the following dataset for training the model: - LAION-5B and subsets (details below). The training data is further filtered using LAION's NSFW detector, with a "p_unsafe" score of 0.1 (conservative). 
For more details, please refer to LAION-5B's [NeurIPS 2022](https://openreview.net/forum?id=M3Y74vmsMcY) paper and reviewer discussions on the topic. **Training Procedure** Stable Diffusion v2 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training, - Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4 - Text prompts are encoded through the OpenCLIP-ViT/H text-encoder. - The output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention. - The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet. We also use the so-called _v-objective_, see https://arxiv.org/abs/2202.00512. We currently provide the following checkpoints: - `512-base-ema.ckpt`: 550k steps at resolution `256x256` on a subset of [LAION-5B](https://laion.ai/blog/laion-5b/) filtered for explicit pornographic material, using the [LAION-NSFW classifier](https://github.com/LAION-AI/CLIP-based-NSFW-Detector) with `punsafe=0.1` and an [aesthetic score](https://github.com/christophschuhmann/improved-aesthetic-predictor) >= `4.5`. 850k steps at resolution `512x512` on the same dataset with resolution `>= 512x512`. - `768-v-ema.ckpt`: Resumed from `512-base-ema.ckpt` and trained for 150k steps using a [v-objective](https://arxiv.org/abs/2202.00512) on the same dataset. Resumed for another 140k steps on a `768x768` subset of our dataset. - `512-depth-ema.ckpt`: Resumed from `512-base-ema.ckpt` and finetuned for 200k steps. Added an extra input channel to process the (relative) depth prediction produced by [MiDaS](https://github.com/isl-org/MiDaS) (`dpt_hybrid`) which is used as an additional conditioning. The additional input channels of the U-Net which process this extra information were zero-initialized. - `512-inpainting-ema.ckpt`: Resumed from `512-base-ema.ckpt` and trained for another 200k steps. Follows the mask-generation strategy presented in [LAMA](https://github.com/saic-mdal/lama) which, in combination with the latent VAE representations of the masked image, are used as an additional conditioning. The additional input channels of the U-Net which process this extra information were zero-initialized. The same strategy was used to train the [1.5-inpainting checkpoint](https://huggingface.co/runwayml/stable-diffusion-inpainting). - `x4-upscaling-ema.ckpt`: Trained for 1.25M steps on a 10M subset of LAION containing images `>2048x2048`. The model was trained on crops of size `512x512` and is a text-guided [latent upscaling diffusion model](https://arxiv.org/abs/2112.10752). In addition to the textual input, it receives a `noise_level` as an input parameter, which can be used to add noise to the low-resolution input according to a [predefined diffusion schedule](configs/stable-diffusion/x4-upscaling.yaml). 
- **Hardware:** 32 x 8 x A100 GPUs - **Optimizer:** AdamW - **Gradient Accumulations**: 1 - **Batch:** 32 x 8 x 2 x 4 = 2048 - **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant ## Evaluation Results Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0) and 50 DDIM sampling steps show the relative improvements of the checkpoints: ![pareto](model-variants.jpg) Evaluated using 50 DDIM steps and 10000 random prompts from the COCO2017 validation set, evaluated at 512x512 resolution. Not optimized for FID scores. ## Environmental Impact **Stable Diffusion v1** **Estimated Emissions** Based on that information, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact. - **Hardware Type:** A100 PCIe 40GB - **Hours used:** 200000 - **Cloud Provider:** AWS - **Compute Region:** US-east - **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 15000 kg CO2 eq. ## Citation @InProceedings{Rombach_2022_CVPR, author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn}, title = {High-Resolution Image Synthesis With Latent Diffusion Models}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, month = {June}, year = {2022}, pages = {10684-10695} } *This model card was written by: Robin Rombach, Patrick Esser and David Ha and is based on the [Stable Diffusion v1](https://github.com/CompVis/stable-diffusion/blob/main/Stable_Diffusion_v1_Model_Card.md) and [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*
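Returning to the memory notes above, here is a short low-VRAM sketch that combines fp16 weights with attention slicing (illustrative only; it reuses the prompt from the earlier example):

```python
import torch
from diffusers import StableDiffusionPipeline

# Low-VRAM variant of the example above.
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")
pipe.enable_attention_slicing()  # trades some speed for lower VRAM usage

image = pipe("a photo of an astronaut riding a horse on mars").images[0]
image.save("astronaut_rides_horse.png")
```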
MattyB95/AST-VoxCelebSpoof-Synthetic-Voice-Detection
MattyB95
"2024-01-31T15:54:22Z"
1,091,799
4
transformers
[ "transformers", "tensorboard", "safetensors", "audio-spectrogram-transformer", "audio-classification", "generated_from_trainer", "en", "dataset:MattyB95/VoxCelebSpoof", "base_model:MIT/ast-finetuned-audioset-10-10-0.4593", "base_model:finetune:MIT/ast-finetuned-audioset-10-10-0.4593", "license:mit", "endpoints_compatible", "region:us" ]
audio-classification
"2024-01-16T03:57:32Z"
--- license: mit base_model: MIT/ast-finetuned-audioset-10-10-0.4593 tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: AST-VoxCelebSpoof-Synthetic-Voice-Detection results: [] datasets: - MattyB95/VoxCelebSpoof language: - en --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # AST-VoxCelebSpoof-Synthetic-Voice-Detection This model is a fine-tuned version of [MIT/ast-finetuned-audioset-10-10-0.4593](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593) on the [VoxCelebSpoof](https://huggingface.co/datasets/MattyB95/VoxCelebSpoof) dataset. It achieves the following results on the evaluation set: - Loss: 89136693248.0 - Accuracy: 0.9999 - F1: 0.9999 - Precision: 1.0 - Recall: 0.9998 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-----------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:| | 2218896740319.232 | 1.0 | 29527 | 611463921664.0 | 0.9998 | 0.9998 | 0.9999 | 0.9997 | | 522149441830.912 | 2.0 | 59054 | 284563668992.0 | 0.9997 | 0.9997 | 0.9999 | 0.9996 | | 0.0 | 3.0 | 88581 | 89136693248.0 | 0.9999 | 0.9999 | 1.0 | 0.9998 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2 - Datasets 2.16.1 - Tokenizers 0.15.0
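The card does not include usage code. A minimal inference sketch, assuming the standard `transformers` audio-classification pipeline works for this checkpoint (the audio file path is a placeholder, and the label names come from the checkpoint's config):

```python
from transformers import pipeline

# Assumed usage: score a local audio file with this audio-classification checkpoint.
classifier = pipeline(
    "audio-classification",
    model="MattyB95/AST-VoxCelebSpoof-Synthetic-Voice-Detection",
)
print(classifier("speech_sample.wav"))  # placeholder path to a local audio file
```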
cross-encoder/ms-marco-MiniLM-L-12-v2
cross-encoder
"2021-08-05T08:39:01Z"
1,090,528
64
transformers
[ "transformers", "pytorch", "jax", "bert", "text-classification", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2022-03-02T23:29:05Z"
--- license: apache-2.0 --- # Cross-Encoder for MS Marco This model was trained on the [MS Marco Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) task. The model can be used for Information Retrieval: given a query, encode the query with all possible passages (e.g. retrieved with ElasticSearch), then sort the passages in decreasing order. See [SBERT.net Retrieve & Re-rank](https://www.sbert.net/examples/applications/retrieve_rerank/README.html) for more details. The training code is available here: [SBERT.net Training MS Marco](https://github.com/UKPLab/sentence-transformers/tree/master/examples/training/ms_marco) ## Usage with Transformers ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/ms-marco-MiniLM-L-12-v2') tokenizer = AutoTokenizer.from_pretrained('cross-encoder/ms-marco-MiniLM-L-12-v2') features = tokenizer(['How many people live in Berlin?', 'How many people live in Berlin?'], ['Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.', 'New York City is famous for the Metropolitan Museum of Art.'], padding=True, truncation=True, return_tensors="pt") model.eval() with torch.no_grad(): scores = model(**features).logits print(scores) ``` ## Usage with SentenceTransformers The usage becomes easier when you have [SentenceTransformers](https://www.sbert.net/) installed. Then, you can use the pre-trained models like this: ```python from sentence_transformers import CrossEncoder model = CrossEncoder('cross-encoder/ms-marco-MiniLM-L-12-v2', max_length=512) scores = model.predict([('Query', 'Paragraph1'), ('Query', 'Paragraph2'), ('Query', 'Paragraph3')]) ``` ## Performance In the following table, we provide various pre-trained Cross-Encoders together with their performance on the [TREC Deep Learning 2019](https://microsoft.github.io/TREC-2019-Deep-Learning/) and the [MS Marco Passage Reranking](https://github.com/microsoft/MSMARCO-Passage-Ranking/) dataset. | Model-Name | NDCG@10 (TREC DL 19) | MRR@10 (MS Marco Dev) | Docs / Sec | | ------------- |:-------------| -----| --- | | **Version 2 models** | | | | cross-encoder/ms-marco-TinyBERT-L-2-v2 | 69.84 | 32.56 | 9000 | cross-encoder/ms-marco-MiniLM-L-2-v2 | 71.01 | 34.85 | 4100 | cross-encoder/ms-marco-MiniLM-L-4-v2 | 73.04 | 37.70 | 2500 | cross-encoder/ms-marco-MiniLM-L-6-v2 | 74.30 | 39.01 | 1800 | cross-encoder/ms-marco-MiniLM-L-12-v2 | 74.31 | 39.02 | 960 | **Version 1 models** | | | | cross-encoder/ms-marco-TinyBERT-L-2 | 67.43 | 30.15 | 9000 | cross-encoder/ms-marco-TinyBERT-L-4 | 68.09 | 34.50 | 2900 | cross-encoder/ms-marco-TinyBERT-L-6 | 69.57 | 36.13 | 680 | cross-encoder/ms-marco-electra-base | 71.99 | 36.41 | 340 | **Other models** | | | | nboost/pt-tinybert-msmarco | 63.63 | 28.80 | 2900 | nboost/pt-bert-base-uncased-msmarco | 70.94 | 34.75 | 340 | nboost/pt-bert-large-msmarco | 73.36 | 36.48 | 100 | Capreolus/electra-base-msmarco | 71.23 | 36.89 | 340 | amberoad/bert-multilingual-passage-reranking-msmarco | 68.40 | 35.54 | 330 | sebastian-hofstaetter/distilbert-cat-margin_mse-T2-msmarco | 72.82 | 37.88 | 720 Note: Runtime was computed on a V100 GPU.
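Putting the two steps together, a minimal reranking sketch that scores candidate passages and sorts them by decreasing relevance (the query and passages are example data):

```python
from sentence_transformers import CrossEncoder

model = CrossEncoder('cross-encoder/ms-marco-MiniLM-L-12-v2', max_length=512)

query = "How many people live in Berlin?"
passages = [
    "Berlin has a population of 3,520,031 registered inhabitants.",
    "New York City is famous for the Metropolitan Museum of Art.",
]

# Score each (query, passage) pair, then sort passages by decreasing score.
scores = model.predict([(query, p) for p in passages])
ranked = sorted(zip(scores, passages), key=lambda x: x[0], reverse=True)
for score, passage in ranked:
    print(f"{score:.4f}\t{passage}")
```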
dbmdz/bert-large-cased-finetuned-conll03-english
dbmdz
"2023-09-06T22:17:56Z"
1,081,464
68
transformers
[ "transformers", "pytorch", "tf", "jax", "rust", "safetensors", "bert", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2022-03-02T23:29:05Z"
Entry not found
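The card content is missing upstream. Based only on the repository name and tags (BERT large, cased, fine-tuned on CoNLL-2003 English for token classification), a hypothetical NER usage sketch might look like this; the label set and any further usage details are assumptions, not documented here:

```python
from transformers import pipeline

# Assumed usage for a CoNLL-2003 English NER checkpoint.
ner = pipeline(
    "ner",
    model="dbmdz/bert-large-cased-finetuned-conll03-english",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("Hugging Face is based in New York City."))
```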
ybelkada/tiny-random-T5ForConditionalGeneration-calibrated
ybelkada
"2023-04-05T17:16:54Z"
1,080,797
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
"2023-04-05T17:13:33Z"
A "better calibrated" tiny T5 model for testing purposes
google-bert/bert-base-multilingual-uncased
google-bert
"2024-02-19T11:06:00Z"
1,076,056
108
transformers
[ "transformers", "pytorch", "tf", "jax", "safetensors", "bert", "fill-mask", "multilingual", "af", "sq", "ar", "an", "hy", "ast", "az", "ba", "eu", "bar", "be", "bn", "inc", "bs", "br", "bg", "my", "ca", "ceb", "ce", "zh", "cv", "hr", "cs", "da", "nl", "en", "et", "fi", "fr", "gl", "ka", "de", "el", "gu", "ht", "he", "hi", "hu", "is", "io", "id", "ga", "it", "ja", "jv", "kn", "kk", "ky", "ko", "la", "lv", "lt", "roa", "nds", "lm", "mk", "mg", "ms", "ml", "mr", "min", "ne", "new", "nb", "nn", "oc", "fa", "pms", "pl", "pt", "pa", "ro", "ru", "sco", "sr", "scn", "sk", "sl", "aze", "es", "su", "sw", "sv", "tl", "tg", "ta", "tt", "te", "tr", "uk", "ud", "uz", "vi", "vo", "war", "cy", "fry", "pnb", "yo", "dataset:wikipedia", "arxiv:1810.04805", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2022-03-02T23:29:04Z"
--- language: - multilingual - af - sq - ar - an - hy - ast - az - ba - eu - bar - be - bn - inc - bs - br - bg - my - ca - ceb - ce - zh - cv - hr - cs - da - nl - en - et - fi - fr - gl - ka - de - el - gu - ht - he - hi - hu - is - io - id - ga - it - ja - jv - kn - kk - ky - ko - la - lv - lt - roa - nds - lm - mk - mg - ms - ml - mr - min - ne - new - nb - nn - oc - fa - pms - pl - pt - pa - ro - ru - sco - sr - scn - sk - sl - aze - es - su - sw - sv - tl - tg - ta - tt - te - tr - uk - ud - uz - vi - vo - war - cy - fry - pnb - yo license: apache-2.0 datasets: - wikipedia --- # BERT multilingual base model (uncased) Pretrained model on the top 102 languages with the largest Wikipedia using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released in [this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference between english and English. Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description BERT is a transformers model pretrained on a large corpus of multilingual data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. - Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict if the two sentences were following each other or not. This way, the model learns an inner representation of the languages in the training set that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the BERT model as inputs. ## Intended uses & limitations You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=bert) to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at models like GPT2. ### How to use You can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='bert-base-multilingual-uncased') >>> unmasker("Hello I'm a [MASK] model.") [{'sequence': "[CLS] hello i'm a top model. 
[SEP]", 'score': 0.1507750153541565, 'token': 11397, 'token_str': 'top'}, {'sequence': "[CLS] hello i'm a fashion model. [SEP]", 'score': 0.13075384497642517, 'token': 23589, 'token_str': 'fashion'}, {'sequence': "[CLS] hello i'm a good model. [SEP]", 'score': 0.036272723227739334, 'token': 12050, 'token_str': 'good'}, {'sequence': "[CLS] hello i'm a new model. [SEP]", 'score': 0.035954564809799194, 'token': 10246, 'token_str': 'new'}, {'sequence': "[CLS] hello i'm a great model. [SEP]", 'score': 0.028643041849136353, 'token': 11838, 'token_str': 'great'}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-uncased') model = BertModel.from_pretrained("bert-base-multilingual-uncased") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-uncased') model = TFBertModel.from_pretrained("bert-base-multilingual-uncased") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ### Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='bert-base-multilingual-uncased') >>> unmasker("The man worked as a [MASK].") [{'sequence': '[CLS] the man worked as a teacher. [SEP]', 'score': 0.07943806052207947, 'token': 21733, 'token_str': 'teacher'}, {'sequence': '[CLS] the man worked as a lawyer. [SEP]', 'score': 0.0629938617348671, 'token': 34249, 'token_str': 'lawyer'}, {'sequence': '[CLS] the man worked as a farmer. [SEP]', 'score': 0.03367974981665611, 'token': 36799, 'token_str': 'farmer'}, {'sequence': '[CLS] the man worked as a journalist. [SEP]', 'score': 0.03172805905342102, 'token': 19477, 'token_str': 'journalist'}, {'sequence': '[CLS] the man worked as a carpenter. [SEP]', 'score': 0.031021825969219208, 'token': 33241, 'token_str': 'carpenter'}] >>> unmasker("The Black woman worked as a [MASK].") [{'sequence': '[CLS] the black woman worked as a nurse. [SEP]', 'score': 0.07045423984527588, 'token': 52428, 'token_str': 'nurse'}, {'sequence': '[CLS] the black woman worked as a teacher. [SEP]', 'score': 0.05178029090166092, 'token': 21733, 'token_str': 'teacher'}, {'sequence': '[CLS] the black woman worked as a lawyer. [SEP]', 'score': 0.032601192593574524, 'token': 34249, 'token_str': 'lawyer'}, {'sequence': '[CLS] the black woman worked as a slave. [SEP]', 'score': 0.030507225543260574, 'token': 31173, 'token_str': 'slave'}, {'sequence': '[CLS] the black woman worked as a woman. [SEP]', 'score': 0.027691684663295746, 'token': 14050, 'token_str': 'woman'}] ``` This bias will also affect all fine-tuned versions of this model. ## Training data The BERT model was pretrained on the 102 languages with the largest Wikipedias. You can find the complete list [here](https://github.com/google-research/bert/blob/master/multilingual.md#list-of-languages). ## Training procedure ### Preprocessing The texts are lowercased and tokenized using WordPiece and a shared vocabulary size of 110,000. 
The languages with a larger Wikipedia are under-sampled and the ones with fewer resources are over-sampled. For languages such as Chinese, Japanese Kanji and Korean Hanja that don't use spaces, spaces are added around every character in the CJK Unicode range.

The inputs of the model are then of the form:

```
[CLS] Sentence A [SEP] Sentence B [SEP]
```

With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus; in the other cases, sentence B is a random sentence from the corpus. Note that what is considered a sentence here is a consecutive span of text usually longer than a single sentence. The only constraint is that the two "sentences" have a combined length of less than 512 tokens.

The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.

(A minimal sketch of this masking procedure follows the citation info below.)

### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-1810-04805,
  author    = {Jacob Devlin and Ming{-}Wei Chang and Kenton Lee and Kristina Toutanova},
  title     = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language Understanding},
  journal   = {CoRR},
  volume    = {abs/1810.04805},
  year      = {2018},
  url       = {http://arxiv.org/abs/1810.04805},
  archivePrefix = {arXiv},
  eprint    = {1810.04805},
  timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
  biburl    = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
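As a concrete illustration of the 80/10/10 masking scheme described in the preprocessing section, here is a minimal sketch. It mirrors the logic of standard masked-language-modeling data collators rather than the original pretraining code, so treat it as illustrative only:

```python
import torch
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-uncased")

def mask_tokens(input_ids, mlm_probability=0.15):
    labels = input_ids.clone()
    # pick 15% of positions, never masking special tokens like [CLS]/[SEP]
    probability_matrix = torch.full(labels.shape, mlm_probability)
    special = torch.tensor(
        tokenizer.get_special_tokens_mask(labels.tolist(), already_has_special_tokens=True),
        dtype=torch.bool,
    )
    probability_matrix.masked_fill_(special, value=0.0)
    masked_indices = torch.bernoulli(probability_matrix).bool()
    labels[~masked_indices] = -100  # loss is only computed on masked positions

    # 80% of the masked positions become [MASK]
    replaced = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & masked_indices
    input_ids[replaced] = tokenizer.mask_token_id

    # 10% become a random token (half of the remaining 20%)
    randomized = torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & masked_indices & ~replaced
    input_ids[randomized] = torch.randint(len(tokenizer), labels.shape, dtype=torch.long)[randomized]

    # the final 10% of masked positions are left unchanged
    return input_ids, labels

ids = tokenizer("hello i'm a multilingual model.", return_tensors="pt")["input_ids"][0]
masked_ids, labels = mask_tokens(ids)
print(tokenizer.decode(masked_ids))
```

The `-100` labels follow the `transformers` convention of telling the loss function to ignore unmasked positions.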
pyannote/embedding
pyannote
"2024-05-10T19:36:51Z"
1,072,780
113
pyannote-audio
[ "pyannote-audio", "pytorch", "tensorboard", "pyannote", "pyannote-audio-model", "audio", "voice", "speech", "speaker", "speaker-recognition", "speaker-verification", "speaker-identification", "speaker-embedding", "dataset:voxceleb", "license:mit", "region:us" ]
null
"2022-03-02T23:29:05Z"
--- tags: - pyannote - pyannote-audio - pyannote-audio-model - audio - voice - speech - speaker - speaker-recognition - speaker-verification - speaker-identification - speaker-embedding datasets: - voxceleb license: mit inference: false extra_gated_prompt: "The collected information will help acquire a better knowledge of pyannote.audio userbase and help its maintainers apply for grants to improve it further. If you are an academic researcher, please cite the relevant papers in your own publications using the model. If you work for a company, please consider contributing back to pyannote.audio development (e.g. through unrestricted gifts). We also provide scientific consulting services around speaker diarization and machine listening." extra_gated_fields: Company/university: text Website: text I plan to use this model for (task, type of audio data, etc): text --- Using this open-source model in production? Consider switching to [pyannoteAI](https://www.pyannote.ai) for better and faster options. # 🎹 Speaker embedding Relies on pyannote.audio 2.1: see [installation instructions](https://github.com/pyannote/pyannote-audio/). This model is based on the [canonical x-vector TDNN-based architecture](https://ieeexplore.ieee.org/abstract/document/8461375), but with filter banks replaced with [trainable SincNet features](https://ieeexplore.ieee.org/document/8639585). See [`XVectorSincNet`](https://github.com/pyannote/pyannote-audio/blob/3c988c028dc505c64fe776720372f6fe816b585a/pyannote/audio/models/embedding/xvector.py#L104-L169) architecture for implementation details. ## Basic usage ```python # 1. visit hf.co/pyannote/embedding and accept user conditions # 2. visit hf.co/settings/tokens to create an access token # 3. instantiate pretrained model from pyannote.audio import Model model = Model.from_pretrained("pyannote/embedding", use_auth_token="ACCESS_TOKEN_GOES_HERE") ``` ```python from pyannote.audio import Inference inference = Inference(model, window="whole") embedding1 = inference("speaker1.wav") embedding2 = inference("speaker2.wav") # `embeddingX` is (1 x D) numpy array extracted from the file as a whole. from scipy.spatial.distance import cdist distance = cdist(embedding1, embedding2, metric="cosine")[0,0] # `distance` is a `float` describing how dissimilar speakers 1 and 2 are. ``` Using cosine distance directly, this model reaches 2.8% equal error rate (EER) on VoxCeleb 1 test set. This is without voice activity detection (VAD) nor probabilistic linear discriminant analysis (PLDA). Expect even better results when adding one of those. ## Advanced usage ### Running on GPU ```python import torch inference.to(torch.device("cuda")) embedding = inference("audio.wav") ``` ### Extract embedding from an excerpt ```python from pyannote.audio import Inference from pyannote.core import Segment inference = Inference(model, window="whole") excerpt = Segment(13.37, 19.81) embedding = inference.crop("audio.wav", excerpt) # `embedding` is (1 x D) numpy array extracted from the file excerpt. ``` ### Extract embeddings using a sliding window ```python from pyannote.audio import Inference inference = Inference(model, window="sliding", duration=3.0, step=1.0) embeddings = inference("audio.wav") # `embeddings` is a (N x D) pyannote.core.SlidingWindowFeature # `embeddings[i]` is the embedding of the ith position of the # sliding window, i.e. from [i * step, i * step + duration]. 
``` ## Citation ```bibtex @inproceedings{Bredin2020, Title = {{pyannote.audio: neural building blocks for speaker diarization}}, Author = {{Bredin}, Herv{\'e} and {Yin}, Ruiqing and {Coria}, Juan Manuel and {Gelly}, Gregory and {Korshunov}, Pavel and {Lavechin}, Marvin and {Fustes}, Diego and {Titeux}, Hadrien and {Bouaziz}, Wassim and {Gill}, Marie-Philippe}, Booktitle = {ICASSP 2020, IEEE International Conference on Acoustics, Speech, and Signal Processing}, Address = {Barcelona, Spain}, Month = {May}, Year = {2020}, } ``` ```bibtex @inproceedings{Coria2020, author="Coria, Juan M. and Bredin, Herv{\'e} and Ghannay, Sahar and Rosset, Sophie", editor="Espinosa-Anke, Luis and Mart{\'i}n-Vide, Carlos and Spasi{\'{c}}, Irena", title="{A Comparison of Metric Learning Loss Functions for End-To-End Speaker Verification}", booktitle="Statistical Language and Speech Processing", year="2020", publisher="Springer International Publishing", pages="137--148", isbn="978-3-030-59430-5" } ```
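To make the basic-usage snippet above actionable, here is a small, hypothetical verification helper that turns the cosine distance into an accept/reject decision. The 0.5 threshold is a placeholder, not a calibrated value; tune it on a held-out trial list for your domain:

```python
from pyannote.audio import Model, Inference
from scipy.spatial.distance import cdist

model = Model.from_pretrained("pyannote/embedding",
                              use_auth_token="ACCESS_TOKEN_GOES_HERE")
inference = Inference(model, window="whole")

def same_speaker(file1, file2, threshold=0.5):
    # each embedding is a (1 x D) numpy array extracted from the whole file
    e1 = inference(file1)
    e2 = inference(file2)
    distance = cdist(e1, e2, metric="cosine")[0, 0]
    return distance < threshold, distance

decision, distance = same_speaker("speaker1.wav", "speaker2.wav")
print(f"same speaker: {decision} (cosine distance = {distance:.3f})")
```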
yikuan8/Clinical-Longformer
yikuan8
"2023-01-24T20:58:27Z"
1,062,492
56
transformers
[ "transformers", "pytorch", "longformer", "fill-mask", "clinical", "en", "arxiv:2201.11838", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2022-03-02T23:29:05Z"
---
language: "en"
tags:
- longformer
- clinical
---

<span style="font-size:larger;">**Clinical-Longformer**</span> is a clinical knowledge enriched version of Longformer that was further pre-trained using MIMIC-III clinical notes. It allows up to 4,096 tokens as the model input. Clinical-Longformer consistently outperforms ClinicalBERT across 10 baseline datasets by at least 2 percent. Those downstream experiments broadly cover named entity recognition (NER), question answering (QA), natural language inference (NLI) and text classification tasks. For more details, please refer to [our paper](https://arxiv.org/pdf/2201.11838.pdf).

We also provide a sister model at [Clinical-BigBird](https://huggingface.co/yikuan8/Clinical-BigBird).

### Pre-training
We initialized Clinical-Longformer from the pre-trained weights of the base version of Longformer. The pre-training process was distributed in parallel across six 32GB Tesla V100 GPUs. FP16 precision was enabled to accelerate training. We pre-trained Clinical-Longformer for 200,000 steps with a batch size of 6×3. The learning rate was 3e-5 for both models. The entire pre-training process took more than 2 weeks.

### Usage
Load the model directly from Transformers:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("yikuan8/Clinical-Longformer")
model = AutoModelForMaskedLM.from_pretrained("yikuan8/Clinical-Longformer")
```
A minimal fill-mask example is sketched at the end of this card.

### Citing
If you find our model helpful, please consider citing:
```
@article{li2023comparative,
  title={A comparative study of pretrained language models for long clinical text},
  author={Li, Yikuan and Wehbe, Ramsey M and Ahmad, Faraz S and Wang, Hanyin and Luo, Yuan},
  journal={Journal of the American Medical Informatics Association},
  volume={30},
  number={2},
  pages={340--347},
  year={2023},
  publisher={Oxford University Press}
}
```

### Questions
Please email yikuanli2018@u.northwestern.edu
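The snippet above only loads the model; the following hypothetical example (not from the original authors) shows one way to query it for masked-token predictions. Clinical-Longformer inherits Longformer's RoBERTa-style vocabulary, so the mask token is assumed to be `<mask>`:

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="yikuan8/Clinical-Longformer")

# the clinical sentence below is an illustrative placeholder
for prediction in unmasker("The patient was administered 5 mg of <mask> daily."):
    print(prediction["token_str"], round(prediction["score"], 4))
```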
meta-llama/Llama-3.1-8B
meta-llama
"2024-10-16T22:00:37Z"
1,061,084
1,062
transformers
[ "transformers", "safetensors", "llama", "text-generation", "facebook", "meta", "pytorch", "llama-3", "en", "de", "fr", "it", "pt", "hi", "es", "th", "arxiv:2204.05149", "license:llama3.1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-07-14T22:20:15Z"
--- language: - en - de - fr - it - pt - hi - es - th pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-3 license: llama3.1 extra_gated_prompt: >- ### LLAMA 3.1 COMMUNITY LICENSE AGREEMENT Llama 3.1 Version Release Date: July 23, 2024 "Agreement" means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein. "Documentation" means the specifications, manuals and documentation accompanying Llama 3.1 distributed by Meta at https://llama.meta.com/doc/overview. "Licensee" or "you" means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf. "Llama 3.1" means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://llama.meta.com/llama-downloads. "Llama Materials" means, collectively, Meta’s proprietary Llama 3.1 and Documentation (and any portion thereof) made available under this Agreement. "Meta" or "we" means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland). 1. License Rights and Redistribution. a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials. b. Redistribution and Use. i. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service (including another AI model) that contains any of them, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display “Built with Llama” on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials or any outputs or results of the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include “Llama” at the beginning of any such AI model name. ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you. iii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a “Notice” text file distributed as a part of such copies: “Llama 3.1 is licensed under the Llama 3.1 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.” iv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://llama.meta.com/llama3_1/use-policy), which is hereby incorporated by reference into this Agreement. 2. Additional Commercial Terms. 
If, on the Llama 3.1 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights. 3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS. 4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING. 5. Intellectual Property. a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use “Llama” (the “Mark”) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/ ). All goodwill arising out of your use of the Mark will inure to the benefit of Meta. b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications. c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 3.1 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials. 6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. 
Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement. 7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement. ### Llama 3.1 Acceptable Use Policy Meta is committed to promoting safe and fair use of its tools and features, including Llama 3.1. If you access or use Llama 3.1, you agree to this Acceptable Use Policy (“Policy”). The most recent copy of this policy can be found at [https://llama.meta.com/llama3_1/use-policy](https://llama.meta.com/llama3_1/use-policy) #### Prohibited Uses We want everyone to use Llama 3.1 safely and responsibly. You agree you will not use, or allow others to use, Llama 3.1 to: 1. Violate the law or others’ rights, including to: 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as: 1. Violence or terrorism 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material 3. Human trafficking, exploitation, and sexual violence 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials. 5. Sexual solicitation 6. Any other criminal activity 3. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals 4. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services 5. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices 6. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws 7. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials 8. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system 2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 3.1 related to the following: 1. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State 2. Guns and illegal weapons (including weapon development) 3. Illegal drugs and regulated/controlled substances 4. 
Operation of critical infrastructure, transportation technologies, or heavy machinery 5. Self-harm or harm to others, including suicide, cutting, and eating disorders 6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual 3. Intentionally deceive or mislead others, including use of Llama 3.1 related to the following: 1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation 2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content 3. Generating, promoting, or further distributing spam 4. Impersonating another individual without consent, authorization, or legal right 5. Representing that the use of Llama 3.1 or outputs are human-generated 6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement 4. Fail to appropriately disclose to end users any known dangers of your AI system Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation of this Policy through one of the following means: * Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://github.com/meta-llama/llama-models/issues) * Reporting risky content generated by the model: developers.facebook.com/llama_output_feedback * Reporting bugs and security concerns: facebook.com/whitehat/info * Reporting violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: LlamaUseReport@meta.com extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text Job title: type: select options: - Student - Research Graduate - AI researcher - AI developer/engineer - Reporter - Other geo: ip_location By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox extra_gated_description: >- The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit library_name: transformers --- ## Model Information The Meta Llama 3.1 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction tuned generative models in 8B, 70B and 405B sizes (text in/text out). The Llama 3.1 instruction tuned text only models (8B, 70B, 405B) are optimized for multilingual dialogue use cases and outperform many of the available open source and closed chat models on common industry benchmarks. **Model developer**: Meta **Model Architecture:** Llama 3.1 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. <table> <tr> <td> </td> <td><strong>Training Data</strong> </td> <td><strong>Params</strong> </td> <td><strong>Input modalities</strong> </td> <td><strong>Output modalities</strong> </td> <td><strong>Context length</strong> </td> <td><strong>GQA</strong> </td> <td><strong>Token count</strong> </td> <td><strong>Knowledge cutoff</strong> </td> </tr> <tr> <td rowspan="3" >Llama 3.1 (text only) </td> <td rowspan="3" >A new mix of publicly available online data. 
</td> <td>8B </td> <td>Multilingual Text </td> <td>Multilingual Text and code </td> <td>128k </td> <td>Yes </td> <td rowspan="3" >15T+ </td> <td rowspan="3" >December 2023 </td> </tr> <tr> <td>70B </td> <td>Multilingual Text </td> <td>Multilingual Text and code </td> <td>128k </td> <td>Yes </td> </tr> <tr> <td>405B </td> <td>Multilingual Text </td> <td>Multilingual Text and code </td> <td>128k </td> <td>Yes </td> </tr> </table> **Supported languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai. **Llama 3.1 family of models**. Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date:** July 23, 2024. **Status:** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. **License:** A custom commercial license, the Llama 3.1 Community License, is available at: [https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/LICENSE](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/LICENSE) Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3.1 in applications, please go [here](https://github.com/meta-llama/llama-recipes). ## Intended Use **Intended Use Cases** Llama 3.1 is intended for commercial and research use in multiple languages. Instruction tuned text only models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. The Llama 3.1 model collection also supports the ability to leverage the outputs of its models to improve other models including synthetic data generation and distillation. The Llama 3.1 Community License allows for these use cases. **Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.1 Community License. Use in languages beyond those explicitly referenced as supported in this model card**. **<span style="text-decoration:underline;">Note</span>: Llama 3.1 has been trained on a broader collection of languages than the 8 supported languages. Developers may fine-tune Llama 3.1 models for languages beyond the 8 supported languages provided they comply with the Llama 3.1 Community License and the Acceptable Use Policy and in such cases are responsible for ensuring that any uses of Llama 3.1 in additional languages is done in a safe and responsible manner. ## How to use This repository contains two versions of Meta's Llama-3.1-8B, for use with transformers and with the original `llama` codebase. ### Use with transformers Starting with transformers >= 4.43.0 onward, you can run conversational inference using the Transformers pipeline abstraction or by leveraging the Auto classes with the generate() function. Make sure to update your transformers installation via pip install --upgrade transformers. 
```python import transformers import torch model_id = "meta-llama/Llama-3.1-8B" pipeline = transformers.pipeline( "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto" ) pipeline("Hey how are you doing today?") ``` ### Use with `llama` Please, follow the instructions in the [repository](https://github.com/meta-llama/llama). To download Original checkpoints, see the example command below leveraging `huggingface-cli`: ``` huggingface-cli download meta-llama/Llama-3.1-8B --include "original/*" --local-dir Llama-3.1-8B ``` ## Hardware and Software **Training Factors** We used custom training libraries, Meta's custom built GPU cluster, and production infrastructure for pretraining. Fine-tuning, annotation, and evaluation were also performed on production infrastructure. **Training utilized a cumulative of** 39.3M GPU hours of computation on H100-80GB (TDP of 700W) type hardware, per the table below. Training time is the total GPU time required for training each model and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency. **Training Greenhouse Gas Emissions** Estimated total location-based greenhouse gas emissions were **11,390** tons CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with renewable energy, therefore the total market-based greenhouse gas emissions for training were 0 tons CO2eq. <table> <tr> <td> </td> <td><strong>Training Time (GPU hours)</strong> </td> <td><strong>Training Power Consumption (W)</strong> </td> <td><strong>Training Location-Based Greenhouse Gas Emissions</strong> <p> <strong>(tons CO2eq)</strong> </td> <td><strong>Training Market-Based Greenhouse Gas Emissions</strong> <p> <strong>(tons CO2eq)</strong> </td> </tr> <tr> <td>Llama 3.1 8B </td> <td>1.46M </td> <td>700 </td> <td>420 </td> <td>0 </td> </tr> <tr> <td>Llama 3.1 70B </td> <td>7.0M </td> <td>700 </td> <td>2,040 </td> <td>0 </td> </tr> <tr> <td>Llama 3.1 405B </td> <td>30.84M </td> <td>700 </td> <td>8,930 </td> <td>0 </td> </tr> <tr> <td>Total </td> <td>39.3M <td> <ul> </ul> </td> <td>11,390 </td> <td>0 </td> </tr> </table> The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others. ## Training Data **Overview:** Llama 3.1 was pretrained on ~15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 25M synthetically generated examples. **Data Freshness:** The pretraining data has a cutoff of December 2023. ## Benchmark scores In this section, we report the results for Llama 3.1 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. 
### Base pretrained models <table> <tr> <td><strong>Category</strong> </td> <td><strong>Benchmark</strong> </td> <td><strong># Shots</strong> </td> <td><strong>Metric</strong> </td> <td><strong>Llama 3 8B</strong> </td> <td><strong>Llama 3.1 8B</strong> </td> <td><strong>Llama 3 70B</strong> </td> <td><strong>Llama 3.1 70B</strong> </td> <td><strong>Llama 3.1 405B</strong> </td> </tr> <tr> <td rowspan="7" >General </td> <td>MMLU </td> <td>5 </td> <td>macro_avg/acc_char </td> <td>66.7 </td> <td>66.7 </td> <td>79.5 </td> <td>79.3 </td> <td>85.2 </td> </tr> <tr> <td>MMLU-Pro (CoT) </td> <td>5 </td> <td>macro_avg/acc_char </td> <td>36.2 </td> <td>37.1 </td> <td>55.0 </td> <td>53.8 </td> <td>61.6 </td> </tr> <tr> <td>AGIEval English </td> <td>3-5 </td> <td>average/acc_char </td> <td>47.1 </td> <td>47.8 </td> <td>63.0 </td> <td>64.6 </td> <td>71.6 </td> </tr> <tr> <td>CommonSenseQA </td> <td>7 </td> <td>acc_char </td> <td>72.6 </td> <td>75.0 </td> <td>83.8 </td> <td>84.1 </td> <td>85.8 </td> </tr> <tr> <td>Winogrande </td> <td>5 </td> <td>acc_char </td> <td>- </td> <td>60.5 </td> <td>- </td> <td>83.3 </td> <td>86.7 </td> </tr> <tr> <td>BIG-Bench Hard (CoT) </td> <td>3 </td> <td>average/em </td> <td>61.1 </td> <td>64.2 </td> <td>81.3 </td> <td>81.6 </td> <td>85.9 </td> </tr> <tr> <td>ARC-Challenge </td> <td>25 </td> <td>acc_char </td> <td>79.4 </td> <td>79.7 </td> <td>93.1 </td> <td>92.9 </td> <td>96.1 </td> </tr> <tr> <td>Knowledge reasoning </td> <td>TriviaQA-Wiki </td> <td>5 </td> <td>em </td> <td>78.5 </td> <td>77.6 </td> <td>89.7 </td> <td>89.8 </td> <td>91.8 </td> </tr> <tr> <td rowspan="4" >Reading comprehension </td> <td>SQuAD </td> <td>1 </td> <td>em </td> <td>76.4 </td> <td>77.0 </td> <td>85.6 </td> <td>81.8 </td> <td>89.3 </td> </tr> <tr> <td>QuAC (F1) </td> <td>1 </td> <td>f1 </td> <td>44.4 </td> <td>44.9 </td> <td>51.1 </td> <td>51.1 </td> <td>53.6 </td> </tr> <tr> <td>BoolQ </td> <td>0 </td> <td>acc_char </td> <td>75.7 </td> <td>75.0 </td> <td>79.0 </td> <td>79.4 </td> <td>80.0 </td> </tr> <tr> <td>DROP (F1) </td> <td>3 </td> <td>f1 </td> <td>58.4 </td> <td>59.5 </td> <td>79.7 </td> <td>79.6 </td> <td>84.8 </td> </tr> </table> ### Instruction tuned models <table> <tr> <td><strong>Category</strong> </td> <td><strong>Benchmark</strong> </td> <td><strong># Shots</strong> </td> <td><strong>Metric</strong> </td> <td><strong>Llama 3 8B Instruct</strong> </td> <td><strong>Llama 3.1 8B Instruct</strong> </td> <td><strong>Llama 3 70B Instruct</strong> </td> <td><strong>Llama 3.1 70B Instruct</strong> </td> <td><strong>Llama 3.1 405B Instruct</strong> </td> </tr> <tr> <td rowspan="4" >General </td> <td>MMLU </td> <td>5 </td> <td>macro_avg/acc </td> <td>68.5 </td> <td>69.4 </td> <td>82.0 </td> <td>83.6 </td> <td>87.3 </td> </tr> <tr> <td>MMLU (CoT) </td> <td>0 </td> <td>macro_avg/acc </td> <td>65.3 </td> <td>73.0 </td> <td>80.9 </td> <td>86.0 </td> <td>88.6 </td> </tr> <tr> <td>MMLU-Pro (CoT) </td> <td>5 </td> <td>micro_avg/acc_char </td> <td>45.5 </td> <td>48.3 </td> <td>63.4 </td> <td>66.4 </td> <td>73.3 </td> </tr> <tr> <td>IFEval </td> <td> </td> <td> </td> <td>76.8 </td> <td>80.4 </td> <td>82.9 </td> <td>87.5 </td> <td>88.6 </td> </tr> <tr> <td rowspan="2" >Reasoning </td> <td>ARC-C </td> <td>0 </td> <td>acc </td> <td>82.4 </td> <td>83.4 </td> <td>94.4 </td> <td>94.8 </td> <td>96.9 </td> </tr> <tr> <td>GPQA </td> <td>0 </td> <td>em </td> <td>34.6 </td> <td>30.4 </td> <td>39.5 </td> <td>46.7 </td> <td>50.7 </td> </tr> <tr> <td rowspan="4" >Code </td> <td>HumanEval </td> <td>0 </td> 
<td>pass@1 </td> <td>60.4 </td> <td>72.6 </td> <td>81.7 </td> <td>80.5 </td> <td>89.0 </td> </tr> <tr> <td>MBPP ++ base version </td> <td>0 </td> <td>pass@1 </td> <td>70.6 </td> <td>72.8 </td> <td>82.5 </td> <td>86.0 </td> <td>88.6 </td> </tr> <tr> <td>Multipl-E HumanEval </td> <td>0 </td> <td>pass@1 </td> <td>- </td> <td>50.8 </td> <td>- </td> <td>65.5 </td> <td>75.2 </td> </tr> <tr> <td>Multipl-E MBPP </td> <td>0 </td> <td>pass@1 </td> <td>- </td> <td>52.4 </td> <td>- </td> <td>62.0 </td> <td>65.7 </td> </tr> <tr> <td rowspan="2" >Math </td> <td>GSM-8K (CoT) </td> <td>8 </td> <td>em_maj1@1 </td> <td>80.6 </td> <td>84.5 </td> <td>93.0 </td> <td>95.1 </td> <td>96.8 </td> </tr> <tr> <td>MATH (CoT) </td> <td>0 </td> <td>final_em </td> <td>29.1 </td> <td>51.9 </td> <td>51.0 </td> <td>68.0 </td> <td>73.8 </td> </tr> <tr> <td rowspan="4" >Tool Use </td> <td>API-Bank </td> <td>0 </td> <td>acc </td> <td>48.3 </td> <td>82.6 </td> <td>85.1 </td> <td>90.0 </td> <td>92.0 </td> </tr> <tr> <td>BFCL </td> <td>0 </td> <td>acc </td> <td>60.3 </td> <td>76.1 </td> <td>83.0 </td> <td>84.8 </td> <td>88.5 </td> </tr> <tr> <td>Gorilla Benchmark API Bench </td> <td>0 </td> <td>acc </td> <td>1.7 </td> <td>8.2 </td> <td>14.7 </td> <td>29.7 </td> <td>35.3 </td> </tr> <tr> <td>Nexus (0-shot) </td> <td>0 </td> <td>macro_avg/acc </td> <td>18.1 </td> <td>38.5 </td> <td>47.8 </td> <td>56.7 </td> <td>58.7 </td> </tr> <tr> <td>Multilingual </td> <td>Multilingual MGSM (CoT) </td> <td>0 </td> <td>em </td> <td>- </td> <td>68.9 </td> <td>- </td> <td>86.9 </td> <td>91.6 </td> </tr> </table> #### Multilingual benchmarks <table> <tr> <td><strong>Category</strong> </td> <td><strong>Benchmark</strong> </td> <td><strong>Language</strong> </td> <td><strong>Llama 3.1 8B</strong> </td> <td><strong>Llama 3.1 70B</strong> </td> <td><strong>Llama 3.1 405B</strong> </td> </tr> <tr> <td rowspan="9" ><strong>General</strong> </td> <td rowspan="9" ><strong>MMLU (5-shot, macro_avg/acc)</strong> </td> <td>Portuguese </td> <td>62.12 </td> <td>80.13 </td> <td>84.95 </td> </tr> <tr> <td>Spanish </td> <td>62.45 </td> <td>80.05 </td> <td>85.08 </td> </tr> <tr> <td>Italian </td> <td>61.63 </td> <td>80.4 </td> <td>85.04 </td> </tr> <tr> <td>German </td> <td>60.59 </td> <td>79.27 </td> <td>84.36 </td> </tr> <tr> <td>French </td> <td>62.34 </td> <td>79.82 </td> <td>84.66 </td> </tr> <tr> <td>Hindi </td> <td>50.88 </td> <td>74.52 </td> <td>80.31 </td> </tr> <tr> <td>Thai </td> <td>50.32 </td> <td>72.95 </td> <td>78.21 </td> </tr> </table> ## Responsibility & Safety As part of our Responsible release approach, we followed a three-pronged strategy to managing trust & safety risks: * Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama. * Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm. * Provide protections for the community to help prevent the misuse of our models. ### Responsible deployment Llama is a foundational technology designed to be used in a variety of use cases, examples on how Meta’s Llama models have been responsibly deployed can be found in our [Community Stories webpage](https://llama.meta.com/community-stories/). Our approach is to build the most helpful models enabling the world to benefit from the technology power, by aligning our model safety for the generic use cases addressing a standard set of harms. 
Developers are then in the driver seat to tailor safety for their use case, defining their own policy and deploying the models with the necessary safeguards in their Llama systems. Llama 3.1 was developed following the best practices outlined in our Responsible Use Guide, you can refer to the [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to learn more. #### Llama 3.1 instruct Our main objectives for conducting safety fine-tuning are to provide the research community with a valuable resource for studying the robustness of safety fine-tuning, as well as to offer developers a readily available, safe, and powerful model for various applications to reduce the developer workload to deploy safe AI systems. For more details on the safety mitigations implemented please read the Llama 3 paper. **Fine-tuning data** We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We’ve developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control. **Refusals and Tone** Building on the work we started with Llama 3, we put a great emphasis on model refusals to benign prompts as well as refusal tone. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines. #### Llama 3.1 systems **Large language models, including Llama 3.1, are not designed to be deployed in isolation but instead should be deployed as part of an overall AI system with additional safety guardrails as required.** Developers are expected to deploy system safeguards when building agentic systems. Safeguards are key to achieve the right helpfulness-safety alignment as well as mitigating safety and security risks inherent to the system and any integration of the model or system with external tools. As part of our responsible release approach, we provide the community with [safeguards](https://llama.meta.com/trust-and-safety/) that developers should deploy with Llama models or other LLMs, including Llama Guard 3, Prompt Guard and Code Shield. All our [reference implementations](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default so developers can benefit from system-level safety out-of-the-box. #### New capabilities Note that this release introduces new capabilities, including a longer context window, multilingual inputs and outputs and possible integrations by developers with third party tools. Building with these new capabilities requires specific considerations in addition to the best practices that generally apply across all Generative AI use cases. **Tool-use**: Just like in standard software development, developers are responsible for the integration of the LLM with the tools and services of their choice. They should define a clear policy for their use case and assess the integrity of the third party services they use to be aware of the safety and security limitations when using this capability. Refer to the Responsible Use Guide for best practices on the safe deployment of the third party safeguards. **Multilinguality**: Llama 3.1 supports 7 languages in addition to English: French, German, Hindi, Italian, Portuguese, Spanish, and Thai. Llama may be able to output text in other languages than those that meet performance thresholds for safety and helpfulness. 
We strongly discourage developers from using this model to converse in non-supported languages without implementing finetuning and system controls in alignment with their policies and the best practices shared in the Responsible Use Guide. ### Evaluations We evaluated Llama models for common use cases as well as specific capabilities. Common use cases evaluations measure safety risks of systems for most commonly built applications including chat bot, coding assistant, tool calls. We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Llama Guard 3 to filter input prompt and output response. It is important to evaluate applications in context, and we recommend building dedicated evaluation dataset for your use case. Prompt Guard and Code Shield are also available if relevant to the application. Capability evaluations measure vulnerabilities of Llama models inherent to specific capabilities, for which were crafted dedicated benchmarks including long context, multilingual, tools calls, coding or memorization. **Red teaming** For both scenarios, we conducted recurring red teaming exercises with the goal of discovering risks via adversarial prompting and we used the learnings to improve our benchmarks and safety tuning datasets. We partnered early with subject-matter experts in critical risk areas to understand the nature of these real-world harms and how such models may lead to unintended harm for society. Based on these conversations, we derived a set of adversarial goals for the red team to attempt to achieve, such as extracting harmful information or reprogramming the model to act in a potentially harmful capacity. The red team consisted of experts in cybersecurity, adversarial machine learning, responsible AI, and integrity in addition to multilingual content specialists with background in integrity issues in specific geographic markets. ### Critical and other risks We specifically focused our efforts on mitigating the following critical risk areas: **1- CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive materials) helpfulness** To assess risks related to proliferation of chemical and biological weapons, we performed uplift testing designed to assess whether use of Llama 3.1 models could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons. **2. Child Safety** Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors including the additional languages Llama 3 is trained on. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences. **3. Cyber attack enablement** Our cyber attack uplift study investigated whether LLMs can enhance human capabilities in hacking tasks, both in terms of skill level and speed. Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks. 
This evaluation was distinct from previous studies that considered LLMs as interactive assistants. The primary objective was to assess whether these models could effectively function as independent agents in executing complex cyber-attacks without human intervention.

Our study of Llama-3.1-405B’s social engineering uplift for cyber attackers was conducted to assess the effectiveness of AI models in aiding cyber threat actors in spear phishing campaigns. Please read our Llama 3.1 Cyber security whitepaper to learn more.

### Community

Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [GitHub repository](https://github.com/meta-llama/PurpleLlama).

We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta’s Llama model for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists).

Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.

## Ethical Considerations and Limitations

The core values of Llama 3.1 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3.1 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.

But Llama 3.1 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3.1’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.1 models, developers should perform safety testing and tuning tailored to their specific applications of the model. Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development.
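The "How to use" section above mentions that the Auto classes can be used with `generate()` but only demonstrates the pipeline route. As a complement, here is a minimal sketch of that alternative; the prompt and sampling settings are arbitrary placeholders, not recommended values:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.1-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# plain text completion: this is a base (pretrained) model, not a chat model
inputs = tokenizer("The key to life is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.6)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```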
mistralai/Mistral-7B-Instruct-v0.2
mistralai
"2024-09-27T10:41:20Z"
1,054,168
2,573
transformers
[ "transformers", "pytorch", "safetensors", "mistral", "text-generation", "finetuned", "conversational", "arxiv:2310.06825", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2023-12-11T13:18:44Z"
---
license: apache-2.0
tags:
- finetuned
pipeline_tag: text-generation
new_version: mistralai/Mistral-7B-Instruct-v0.3
inference: true
widget:
  - messages:
      - role: user
        content: What is your favorite condiment?
extra_gated_description: If you want to learn more about how we process your personal data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
---

# Model Card for Mistral-7B-Instruct-v0.2

## Encode and Decode with `mistral_common`

```py
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest

mistral_models_path = "MISTRAL_MODELS_PATH"

tokenizer = MistralTokenizer.v1()

completion_request = ChatCompletionRequest(messages=[UserMessage(content="Explain Machine Learning to me in a nutshell.")])

tokens = tokenizer.encode_chat_completion(completion_request).tokens
```

## Inference with `mistral_inference`

```py
from mistral_inference.transformer import Transformer
from mistral_inference.generate import generate

model = Transformer.from_folder(mistral_models_path)
out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)

result = tokenizer.decode(out_tokens[0])

print(result)
```

## Inference with hugging face `transformers`

```py
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
model.to("cuda")

# `tokens` is the plain Python list produced by `mistral_common` above, so wrap
# it in a batch dimension and move it to the same device as the model
input_ids = torch.tensor([tokens], device="cuda")
generated_ids = model.generate(input_ids, max_new_tokens=1000, do_sample=True)

# decode with mistral tokenizer
result = tokenizer.decode(generated_ids[0].tolist())
print(result)
```

> [!TIP]
> PRs to correct the `transformers` tokenizer so that it gives 1-to-1 the same results as the `mistral_common` reference implementation are very welcome!

---

The Mistral-7B-Instruct-v0.2 Large Language Model (LLM) is an instruct fine-tuned version of the Mistral-7B-v0.2.

Mistral-7B-v0.2 has the following changes compared to Mistral-7B-v0.1:
- 32k context window (vs 8k context in v0.1)
- Rope-theta = 1e6
- No Sliding-Window Attention

For full details of this model please read our [paper](https://arxiv.org/abs/2310.06825) and [release blog post](https://mistral.ai/news/la-plateforme/).

## Instruction format

In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with a begin-of-sentence token id; subsequent instructions should not. The assistant generation will be ended by the end-of-sentence token id.

E.g.
```
text = "<s>[INST] What is your favourite condiment? [/INST]"
"Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> "
"[INST] Do you have mayonnaise recipes? [/INST]"
```

This format is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating) via the `apply_chat_template()` method:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")

messages = [
    {"role": "user", "content": "What is your favourite condiment?"},
    {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
    {"role": "user", "content": "Do you have mayonnaise recipes?"}
]

encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")

model_inputs = encodeds.to(device)
model.to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```

## Troubleshooting

If you see the following error:
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/transformers/models/auto/auto_factory.py", line 482, in from_pretrained
    config, kwargs = AutoConfig.from_pretrained(
  File "/transformers/models/auto/configuration_auto.py", line 1022, in from_pretrained
    config_class = CONFIG_MAPPING[config_dict["model_type"]]
  File "/transformers/models/auto/configuration_auto.py", line 723, in __getitem__
    raise KeyError(key)
KeyError: 'mistral'
```

installing transformers from source should solve the issue:

`pip install git+https://github.com/huggingface/transformers`

This should not be required after transformers-v4.33.4.

## Limitations

The Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance. It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.

## The Mistral AI Team

Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Louis Ternon, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
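As a quick sanity check on the instruction format described above, you can render the chat template to a string instead of token ids. This is a small, hypothetical example, not part of the official usage:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")

messages = [
    {"role": "user", "content": "What is your favourite condiment?"},
    {"role": "assistant", "content": "Well, I'm quite partial to fresh lemon juice."},
    {"role": "user", "content": "Do you have mayonnaise recipes?"},
]

# tokenize=False returns the formatted prompt string rather than token ids
prompt = tokenizer.apply_chat_template(messages, tokenize=False)
print(prompt)  # expected roughly: "<s>[INST] ... [/INST] ... </s>[INST] ... [/INST]"
```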
Helsinki-NLP/opus-mt-de-en
Helsinki-NLP
"2023-08-16T11:27:46Z"
1,038,548
42
transformers
[ "transformers", "pytorch", "tf", "rust", "marian", "text2text-generation", "translation", "de", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
"2022-03-02T23:29:04Z"
--- tags: - translation license: apache-2.0 --- ### opus-mt-de-en * source languages: de * target languages: en * OPUS readme: [de-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-02-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-en/opus-2020-02-26.zip) * test set translations: [opus-2020-02-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-en/opus-2020-02-26.test.txt) * test set scores: [opus-2020-02-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-en/opus-2020-02-26.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newssyscomb2009.de.en | 29.4 | 0.557 | | news-test2008.de.en | 27.8 | 0.548 | | newstest2009.de.en | 26.8 | 0.543 | | newstest2010.de.en | 30.2 | 0.584 | | newstest2011.de.en | 27.4 | 0.556 | | newstest2012.de.en | 29.1 | 0.569 | | newstest2013.de.en | 32.1 | 0.583 | | newstest2014-deen.de.en | 34.0 | 0.600 | | newstest2015-ende.de.en | 34.2 | 0.599 | | newstest2016-ende.de.en | 40.4 | 0.649 | | newstest2017-ende.de.en | 35.7 | 0.610 | | newstest2018-ende.de.en | 43.7 | 0.667 | | newstest2019-deen.de.en | 40.1 | 0.642 | | Tatoeba.de.en | 55.4 | 0.707 |
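The card above lists training details and benchmarks but no usage snippet; a minimal sketch with the MarianMT classes from `transformers` (the example sentence and output are illustrative, not from the original card) might look like this:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-de-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# translate a German sentence into English
batch = tokenizer(["Maschinelles Lernen macht Spaß."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
# -> e.g. ["Machine learning is fun."]
```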
facebook/wav2vec2-base-960h
facebook
"2022-11-14T21:37:23Z"
1,030,365
305
transformers
[ "transformers", "pytorch", "tf", "safetensors", "wav2vec2", "automatic-speech-recognition", "audio", "hf-asr-leaderboard", "en", "dataset:librispeech_asr", "arxiv:2006.11477", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2022-03-02T23:29:05Z"
--- language: en datasets: - librispeech_asr tags: - audio - automatic-speech-recognition - hf-asr-leaderboard license: apache-2.0 widget: - example_title: Librispeech sample 1 src: https://cdn-media.huggingface.co/speech_samples/sample1.flac - example_title: Librispeech sample 2 src: https://cdn-media.huggingface.co/speech_samples/sample2.flac model-index: - name: wav2vec2-base-960h results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: LibriSpeech (clean) type: librispeech_asr config: clean split: test args: language: en metrics: - name: Test WER type: wer value: 3.4 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: LibriSpeech (other) type: librispeech_asr config: other split: test args: language: en metrics: - name: Test WER type: wer value: 8.6 --- # Wav2Vec2-Base-960h [Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) The base model pretrained and fine-tuned on 960 hours of Librispeech on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. [Paper](https://arxiv.org/abs/2006.11477) Authors: Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli **Abstract** We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data. The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20. # Usage To transcribe audio files the model can be used as a standalone acoustic model as follows: ```python from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC from datasets import load_dataset import torch # load model and tokenizer processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h") model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h") # load dummy dataset and read soundfiles ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation") # tokenize input_values = processor(ds[0]["audio"]["array"], return_tensors="pt", padding="longest").input_values # Batch size 1 # retrieve logits logits = model(input_values).logits # take argmax and decode predicted_ids = torch.argmax(logits, dim=-1) transcription = processor.batch_decode(predicted_ids) ``` ## Evaluation This code snippet shows how to evaluate **facebook/wav2vec2-base-960h** on LibriSpeech's "clean" and "other" test data. 
```python
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import torch
from jiwer import wer

librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")

model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h").to("cuda")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")

def map_to_pred(batch):
    # with batched=True, batch["audio"] is a list of audio dicts
    input_values = processor([audio["array"] for audio in batch["audio"]], return_tensors="pt", padding="longest").input_values
    with torch.no_grad():
        logits = model(input_values.to("cuda")).logits

    predicted_ids = torch.argmax(logits, dim=-1)
    transcription = processor.batch_decode(predicted_ids)
    batch["transcription"] = transcription
    return batch

result = librispeech_eval.map(map_to_pred, batched=True, batch_size=1, remove_columns=["audio"])

print("WER:", wer(result["text"], result["transcription"]))
```

*Result (WER)*:

| "clean" | "other" |
|---|---|
| 3.4 | 8.6 |
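One practical note the snippets above leave implicit: they assume audio that is already sampled at 16kHz. As a minimal sketch (my addition, not from the card) of bringing other material to that rate with `torchaudio`, where `speech.wav` stands in for your own file:

```python
import torchaudio

# load an arbitrary audio file and resample it to the 16kHz the model expects
waveform, sample_rate = torchaudio.load("speech.wav")
if sample_rate != 16_000:
    waveform = torchaudio.functional.resample(waveform, orig_freq=sample_rate, new_freq=16_000)
```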
facebook/esm2_t6_8M_UR50D
facebook
"2023-03-21T15:05:17Z"
1,012,605
16
transformers
[ "transformers", "pytorch", "tf", "safetensors", "esm", "fill-mask", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2022-09-26T18:44:55Z"
---
license: mit
widget:
- text: "MQIFVKTLTGKTITLEVEPS<mask>TIENVKAKIQDKEGIPPDQQRLIFAGKQLEDGRTLSDYNIQKESTLHLVLRLRGG"
---

## ESM-2

ESM-2 is a state-of-the-art protein model trained on a masked language modelling objective. It is suitable for fine-tuning on a wide range of tasks that take protein sequences as input. For detailed information on the model architecture and training data, please refer to the [accompanying paper](https://www.biorxiv.org/content/10.1101/2022.07.20.500902v2). You may also be interested in some demo notebooks ([PyTorch](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/protein_language_modeling.ipynb), [TensorFlow](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/protein_language_modeling-tf.ipynb)) which demonstrate how to fine-tune ESM-2 models on your tasks of interest.

Several ESM-2 checkpoints are available on the Hub with varying sizes. Larger sizes generally have somewhat better accuracy, but require much more memory and time to train:

| Checkpoint name | Num layers | Num parameters |
|------------------------------|----|----------|
| [esm2_t48_15B_UR50D](https://huggingface.co/facebook/esm2_t48_15B_UR50D) | 48 | 15B |
| [esm2_t36_3B_UR50D](https://huggingface.co/facebook/esm2_t36_3B_UR50D) | 36 | 3B |
| [esm2_t33_650M_UR50D](https://huggingface.co/facebook/esm2_t33_650M_UR50D) | 33 | 650M |
| [esm2_t30_150M_UR50D](https://huggingface.co/facebook/esm2_t30_150M_UR50D) | 30 | 150M |
| [esm2_t12_35M_UR50D](https://huggingface.co/facebook/esm2_t12_35M_UR50D) | 12 | 35M |
| [esm2_t6_8M_UR50D](https://huggingface.co/facebook/esm2_t6_8M_UR50D) | 6 | 8M |
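The card stops short of a runnable snippet. As a minimal sketch (my addition, not part of the original card), the checkpoint can be queried through the standard `transformers` fill-mask pipeline, reusing the masked sequence from the widget metadata above:

```python
from transformers import pipeline

# predict the most likely amino acids at the <mask> position
unmasker = pipeline("fill-mask", model="facebook/esm2_t6_8M_UR50D")
sequence = "MQIFVKTLTGKTITLEVEPS<mask>TIENVKAKIQDKEGIPPDQQRLIFAGKQLEDGRTLSDYNIQKESTLHLVLRLRGG"

for prediction in unmasker(sequence, top_k=3):
    print(prediction["token_str"], round(prediction["score"], 3))
```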
MaziyarPanahi/SmolLM-1.7B-Instruct-v0.2-GGUF
MaziyarPanahi
"2024-08-18T12:02:24Z"
1,011,264
7
null
[ "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "text-generation", "base_model:HuggingFaceTB/SmolLM-1.7B-Instruct-v0.2", "base_model:quantized:HuggingFaceTB/SmolLM-1.7B-Instruct-v0.2", "region:us", "imatrix", "conversational" ]
text-generation
"2024-08-18T11:56:57Z"
---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
model_name: SmolLM-1.7B-Instruct-v0.2-GGUF
base_model: HuggingFaceTB/SmolLM-1.7B-Instruct-v0.2
inference: false
model_creator: HuggingFaceTB
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---

# [MaziyarPanahi/SmolLM-1.7B-Instruct-v0.2-GGUF](https://huggingface.co/MaziyarPanahi/SmolLM-1.7B-Instruct-v0.2-GGUF)
- Model creator: [HuggingFaceTB](https://huggingface.co/HuggingFaceTB)
- Original model: [HuggingFaceTB/SmolLM-1.7B-Instruct-v0.2](https://huggingface.co/HuggingFaceTB/SmolLM-1.7B-Instruct-v0.2)

## Description
[MaziyarPanahi/SmolLM-1.7B-Instruct-v0.2-GGUF](https://huggingface.co/MaziyarPanahi/SmolLM-1.7B-Instruct-v0.2-GGUF) contains GGUF format model files for [HuggingFaceTB/SmolLM-1.7B-Instruct-v0.2](https://huggingface.co/HuggingFaceTB/SmolLM-1.7B-Instruct-v0.2).

### About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st, 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

Here is an incomplete list of clients and libraries that are known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note that, as of the time of writing (November 27th, 2023), ctransformers has not been updated in a long time and does not support many recent models.

## Special thanks

🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
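The card lists compatible clients but no invocation. As a sketch (my addition; the `--hf-file` name is an assumption, so check the repository's file list for the quantization you actually want), the model can be run directly with llama.cpp:

```bash
# file name below is assumed; pick a real .gguf from the repo's file list
llama-cli --hf-repo MaziyarPanahi/SmolLM-1.7B-Instruct-v0.2-GGUF \
  --hf-file SmolLM-1.7B-Instruct-v0.2.Q4_K_M.gguf \
  -p "Write a short story about a tiny language model."
```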
RunDiffusion/Juggernaut-XL-v9
RunDiffusion
"2024-04-19T02:45:41Z"
1,006,888
153
diffusers
[ "diffusers", "art", "people", "diffusion", "Cinematic", "Photography", "Landscape", "Interior", "Food", "Car", "Wildlife", "Architecture", "text-to-image", "en", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
"2024-02-18T21:05:03Z"
---
language:
- en
license: creativeml-openrail-m
library_name: diffusers
tags:
- art
- people
- diffusion
- Cinematic
- Photography
- Landscape
- Interior
- Food
- Car
- Wildlife
- Architecture
thumbnail: >-
  https://imagedelivery.net/siANnpeNAc_S2q1M3-eDrA/c200a026-c151-49c7-afbc-241fe943b300/padthumb
base_model: stabilityai/stable-diffusion-xl-base-1.0
pipeline_tag: text-to-image
---

# Juggernaut XL v9 + RunDiffusion Photo v2 Official

![juggernaut XL photo previews](https://imagedelivery.net/siANnpeNAc_S2q1M3-eDrA/c200a026-c151-49c7-afbc-241fe943b300/public)

![RunDiffusion Logo](https://imagedelivery.net/siANnpeNAc_S2q1M3-eDrA/ca2b388d-a835-490c-dec0-e764bee8d000/micro)

This model is not permitted to be used behind API services. Please contact [juggernaut@rundiffusion.com](mailto:juggernaut@rundiffusion.com) for business inquiries, commercial licensing, custom models, and consultation.

Juggernaut is available on the new Auto1111 Forge on [RunDiffusion](http://rundiffusion.com/?utm_source=huggingface&utm_medium=referral&utm_campaign=Kandoo).

A big thanks for Version 9 goes to [RunDiffusion](http://rundiffusion.com/?utm_source=huggingface&utm_medium=referral&utm_campaign=Kandoo) ([Photo Model](https://rundiffusion.com/rundiffusion-photo/?utm_source=huggingface&utm_medium=referral&utm_campaign=Kandoo)) and [Adam](https://twitter.com/Colorblind_Adam), who diligently helped me test. :) (Leave some love for them ;) )

It's time for another round, this time a bit delayed, but I hope you forgive the delay. Let's dive straight into the changes that await you and what we've been working on lately.

For V9, I myself have only done basic training: some work on skin details, lighting, and overall contrast. The biggest change to the model, however, came from the [RunDiffusion Photo Model](https://rundiffusion.com/rundiffusion-photo/?utm_source=huggingface&utm_medium=referral&utm_campaign=Kandoo) update, which was made available to me in V2 by [RunDiffusion.com](https://rundiffusion.com/?utm_source=huggingface&utm_medium=referral&utm_campaign=Kandoo). In our experience, the photographic output of the model should be even stronger than in previous versions.

Now for a small "roadmap" update, or a general status update on how things are progressing with Juggernaut. As you may have noticed, there was a slight delay with V9. With each successive version, it has become increasingly difficult to train Juggernaut without sacrificing quality in some areas, which was already the case to some extent with V8. Don't worry, V9 is really good, and I'm satisfied with the version I can present to you today. :)

However, I've decided to go for a complete "reboot" with V10: I want to simply retrain the Juggernaut base set. The conditions for better captioning weren't as favorable "back then" as they are today, so I want to completely re-caption the base set (5k images) with GPT-4 Vision. I expect a big leap in prompt guidance and quality. But as you surely noticed last week, the release of Stable Cascade got in the way a bit. Therefore, my focus in the coming weeks will be on training Juggernaut for Stable Cascade. The approach remains the same as with the planned "reboot": I want to caption/tag all images in the future only with GPT-4 or manually. The timeline for all of this is still uncertain. I hope to be able to present you with a first stable version of Juggernaut Cascade sometime in March. V10 of Juggernaut XL will follow in the weeks thereafter.
Now, here are some additional tips to make prompting easier for you (the `diffusers` sketch at the end of this card shows how they map onto code):

- Res: 832x1216
- Sampler: DPM++ 2M Karras
- Steps: 30-40
- CFG: 3-7 (less is a bit more realistic)
- Negative: Start with no negative prompt, then add whatever you don't want to see in that image. I don't recommend using my negative prompt; I simply use it because I'm lazy. :D
- VAE is already baked in
- HiRes: 4xNMKD-Siax_200k with 15 steps and 0.3 denoise, plus 1.5x upscale

And a few keywords/tokens that I regularly use in training, which might help you achieve the optimal result from this version:

- Architecture Photography
- Wildlife Photography
- Car Photography
- Food Photography
- Interior Photography
- Landscape Photography
- Hyperdetailed Photography
- Cinematic Movie
- Still Mid Shot Photo
- Full Body Photo
- Skin Details

![https://rundiffusion.com?utm_source=hf&utm_medium=referral&utm_campaign=juggernaut9](https://i.imgur.com/fKPEqSu.jpg)
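To make the recommendations above concrete, here is a minimal `diffusers` sketch (my addition, not the card authors'). DPM++ 2M Karras corresponds to `DPMSolverMultistepScheduler` with `use_karras_sigmas=True`; the prompt and output filename are placeholders:

```python
import torch
from diffusers import StableDiffusionXLPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionXLPipeline.from_pretrained(
    "RunDiffusion/Juggernaut-XL-v9", torch_dtype=torch.float16
).to("cuda")

# DPM++ 2M Karras, as recommended above
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)

image = pipe(
    "Landscape Photography of a lighthouse at dusk, Hyperdetailed Photography",
    width=832, height=1216,   # recommended resolution
    num_inference_steps=35,   # within the 30-40 range above
    guidance_scale=4,         # CFG 3-7; lower tends to look more realistic
).images[0]
image.save("juggernaut_v9.png")
```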
pyannote/voice-activity-detection
pyannote
"2024-05-10T19:39:17Z"
1,005,984
160
pyannote-audio
[ "pyannote-audio", "pyannote", "pyannote-audio-pipeline", "audio", "voice", "speech", "speaker", "voice-activity-detection", "automatic-speech-recognition", "dataset:ami", "dataset:dihard", "dataset:voxconverse", "license:mit", "region:us" ]
automatic-speech-recognition
"2022-03-02T23:29:05Z"
---
tags:
- pyannote
- pyannote-audio
- pyannote-audio-pipeline
- audio
- voice
- speech
- speaker
- voice-activity-detection
- automatic-speech-recognition
datasets:
- ami
- dihard
- voxconverse
license: mit
extra_gated_prompt: "The collected information will help acquire a better knowledge of pyannote.audio userbase and help its maintainers apply for grants to improve it further. If you are an academic researcher, please cite the relevant papers in your own publications using the model. If you work for a company, please consider contributing back to pyannote.audio development (e.g. through unrestricted gifts). We also provide scientific consulting services around speaker diarization and machine listening."
extra_gated_fields:
  Company/university: text
  Website: text
  I plan to use this model for (task, type of audio data, etc): text
---

Using this open-source model in production? Consider switching to [pyannoteAI](https://www.pyannote.ai) for better and faster options.

# 🎹 Voice activity detection

Relies on pyannote.audio 2.1: see [installation instructions](https://github.com/pyannote/pyannote-audio#installation).

```python
# 1. visit hf.co/pyannote/segmentation and accept user conditions
# 2. visit hf.co/settings/tokens to create an access token
# 3. instantiate pretrained voice activity detection pipeline
from pyannote.audio import Pipeline
pipeline = Pipeline.from_pretrained("pyannote/voice-activity-detection",
                                    use_auth_token="ACCESS_TOKEN_GOES_HERE")
output = pipeline("audio.wav")

for speech in output.get_timeline().support():
    # active speech between speech.start and speech.end
    ...
```

## Citation

```bibtex
@inproceedings{Bredin2021,
  Title = {{End-to-end speaker segmentation for overlap-aware resegmentation}},
  Author = {{Bredin}, Herv{\'e} and {Laurent}, Antoine},
  Booktitle = {Proc. Interspeech 2021},
  Address = {Brno, Czech Republic},
  Month = {August},
  Year = {2021},
}
```

```bibtex
@inproceedings{Bredin2020,
  Title = {{pyannote.audio: neural building blocks for speaker diarization}},
  Author = {{Bredin}, Herv{\'e} and {Yin}, Ruiqing and {Coria}, Juan Manuel and {Gelly}, Gregory and {Korshunov}, Pavel and {Lavechin}, Marvin and {Fustes}, Diego and {Titeux}, Hadrien and {Bouaziz}, Wassim and {Gill}, Marie-Philippe},
  Booktitle = {ICASSP 2020, IEEE International Conference on Acoustics, Speech, and Signal Processing},
  Address = {Barcelona, Spain},
  Month = {May},
  Year = {2020},
}
```
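A minimal end-to-end sketch (my addition, not from the card) that prints the detected speech regions; each `speech` is a pyannote `Segment` whose `start` and `end` are times in seconds:

```python
from pyannote.audio import Pipeline

pipeline = Pipeline.from_pretrained("pyannote/voice-activity-detection",
                                    use_auth_token="ACCESS_TOKEN_GOES_HERE")
output = pipeline("audio.wav")

# print every detected speech region
for speech in output.get_timeline().support():
    print(f"speech {speech.start:.1f}s -> {speech.end:.1f}s")
```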
laion/CLIP-ViT-H-14-laion2B-s32B-b79K
laion
"2024-01-16T21:49:38Z"
1,005,192
326
open_clip
[ "open_clip", "pytorch", "safetensors", "clip", "zero-shot-image-classification", "arxiv:1910.04867", "license:mit", "region:us" ]
zero-shot-image-classification
"2022-09-14T22:52:28Z"
---
license: mit
widget:
- src: >-
    https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-dog-music.png
  candidate_labels: playing music, playing sports
  example_title: Cat & Dog
library_name: open_clip
pipeline_tag: zero-shot-image-classification
---

# Model Card for CLIP ViT-H/14 - LAION-2B

# Table of Contents

1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Training Details](#training-details)
4. [Evaluation](#evaluation)
5. [Acknowledgements](#acknowledgements)
6. [Citation](#citation)
7. [How To Get Started With the Model](#how-to-get-started-with-the-model)

# Model Details

## Model Description

A CLIP ViT-H/14 model trained with the LAION-2B English subset of [LAION-5B](https://laion.ai/blog/laion-5b/) using [OpenCLIP](https://github.com/mlfoundations/open_clip). Model training was done by Romain Beaumont on the [stability.ai](https://stability.ai/) cluster.

# Uses

As per the original [OpenAI CLIP model card](https://github.com/openai/CLIP/blob/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1/model-card.md), this model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such models.

The OpenAI CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis. Additionally, the LAION-5B blog (https://laion.ai/blog/laion-5b/) and upcoming paper include additional discussion as it relates specifically to the training dataset.

## Direct Use

Zero-shot image classification, image and text retrieval, among others.

## Downstream Use

Image classification and other image task fine-tuning, linear probe image classification, image generation guiding and conditioning, among others.

## Out-of-Scope Use

As per the OpenAI models, **any** deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases such as image search in a constrained environment are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task-specific testing, especially given the variability of CLIP's performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case currently potentially harmful.

Certain use cases which would fall under the domain of surveillance and facial recognition are always out of scope regardless of the performance of the model. This is because the use of artificial intelligence for tasks such as these can be premature currently, given the lack of testing norms and checks to ensure its fair use.

Since the model has not been purposefully trained in or evaluated on any languages other than English, its use should be limited to English-language use cases.

Beyond the above notice, the LAION-5B dataset used in training of these models has additional considerations; see below.

# Training Details

## Training Data

This model was trained with the 2 billion sample English subset of [LAION-5B](https://laion.ai/blog/laion-5b/).

**IMPORTANT NOTE:** The motivation behind dataset creation is to democratize research and experimentation around large-scale multi-modal model training and handling of uncurated, large-scale datasets crawled from the publicly available internet.
Our recommendation is therefore to use the dataset for research purposes. Be aware that this large-scale dataset is uncurated. Keep in mind that the uncurated nature of the dataset means that collected links may lead to strongly discomforting and disturbing content for a human viewer. Therefore, please use the demo links with caution and at your own risk. It is possible to extract a “safe” subset by filtering out samples based on the safety tags (using a customized trained NSFW classifier that we built). While this strongly reduces the chance of encountering potentially harmful content when viewing, we cannot entirely exclude the possibility of harmful content still being present in safe mode, so the warning holds there as well.

We think that providing the dataset openly to broad research and other interested communities will allow for transparent investigation of the benefits that come along with training large-scale models, as well as pitfalls and dangers that may stay unreported or unnoticed when working with closed large datasets that remain restricted to a small community. However, while we provide our dataset openly, we do not recommend using it for creating ready-to-go industrial products, as the basic research about general properties and safety of such large-scale models, which we would like to encourage with this release, is still in progress.

## Training Procedure

Please see [training notes](https://docs.google.com/document/d/1EFbMLRWSSV0LUf9Du1pWzWqgeiIRPwEWX2s1C6mAk5c) and [wandb logs](https://wandb.ai/rom1504/eval_openclip/reports/H-14--VmlldzoyNDAxODQ3).

# Evaluation

Evaluation was done with the code in the [LAION CLIP Benchmark suite](https://github.com/LAION-AI/CLIP_benchmark).

## Testing Data, Factors & Metrics

### Testing Data

The testing is performed with VTAB+ (a combination of [VTAB](https://arxiv.org/abs/1910.04867) with additional robustness datasets) for classification, and COCO and Flickr for retrieval.

**TODO** - more detail

## Results

The model achieves 78.0 zero-shot top-1 accuracy on ImageNet-1k.

An initial round of benchmarks has been performed on a wider range of datasets, currently viewable at https://github.com/LAION-AI/CLIP_benchmark/blob/main/benchmark/results.ipynb

**TODO** - create table for just this model's metrics.

# Acknowledgements

Acknowledging [stability.ai](https://stability.ai/) for the compute used to train this model.

# Citation

**BibTeX:**

LAION-5B

```bibtex
@inproceedings{schuhmann2022laionb,
  title={{LAION}-5B: An open large-scale dataset for training next generation image-text models},
  author={Christoph Schuhmann and Romain Beaumont and Richard Vencu and Cade W Gordon and Ross Wightman and Mehdi Cherti and Theo Coombes and Aarush Katta and Clayton Mullis and Mitchell Wortsman and Patrick Schramowski and Srivatsa R Kundurthy and Katherine Crowson and Ludwig Schmidt and Robert Kaczmarczyk and Jenia Jitsev},
  booktitle={Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
  year={2022},
  url={https://openreview.net/forum?id=M3Y74vmsMcY}
}
```

OpenAI CLIP paper

```bibtex
@inproceedings{Radford2021LearningTV,
  title={Learning Transferable Visual Models From Natural Language Supervision},
  author={Alec Radford and Jong Wook Kim and Chris Hallacy and A. Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever},
  booktitle={ICML},
  year={2021}
}
```

OpenCLIP software

```bibtex
@software{ilharco_gabriel_2021_5143773,
  author       = {Ilharco, Gabriel and Wortsman, Mitchell and Wightman, Ross and Gordon, Cade and Carlini, Nicholas and Taori, Rohan and Dave, Achal and Shankar, Vaishaal and Namkoong, Hongseok and Miller, John and Hajishirzi, Hannaneh and Farhadi, Ali and Schmidt, Ludwig},
  title        = {OpenCLIP},
  month        = jul,
  year         = 2021,
  note         = {If you use this software, please cite it as below.},
  publisher    = {Zenodo},
  version      = {0.1},
  doi          = {10.5281/zenodo.5143773},
  url          = {https://doi.org/10.5281/zenodo.5143773}
}
```

# How to Get Started with the Model

Use the code below to get started with the model.

**TODO** - Hugging Face transformers, OpenCLIP, and timm getting started snippets
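Until those official snippets land, here is a minimal OpenCLIP zero-shot classification sketch (my addition; assumes a recent `open_clip_torch` that exposes `get_tokenizer`, and a local image file `cat.jpg`):

```python
import torch
from PIL import Image
import open_clip

model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-H-14", pretrained="laion2b_s32b_b79k"
)
tokenizer = open_clip.get_tokenizer("ViT-H-14")

image = preprocess(Image.open("cat.jpg")).unsqueeze(0)
text = tokenizer(["a photo of a cat", "a photo of a dog"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # cosine similarity via L2-normalized features
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(probs)  # e.g. something like tensor([[0.99, 0.01]]) for a cat photo
```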
microsoft/Florence-2-large
microsoft
"2024-11-04T17:59:02Z"
990,026
1,214
transformers
[ "transformers", "pytorch", "florence2", "text-generation", "vision", "image-text-to-text", "custom_code", "arxiv:2311.06242", "license:mit", "autotrain_compatible", "region:us" ]
image-text-to-text
"2024-06-15T00:34:55Z"
---
license: mit
license_link: https://huggingface.co/microsoft/Florence-2-large/resolve/main/LICENSE
pipeline_tag: image-text-to-text
tags:
- vision
---

# Florence-2: Advancing a Unified Representation for a Variety of Vision Tasks

## Model Summary

This Hub repository contains a Hugging Face `transformers` implementation of the Florence-2 model from Microsoft.

Florence-2 is an advanced vision foundation model that uses a prompt-based approach to handle a wide range of vision and vision-language tasks. Florence-2 can interpret simple text prompts to perform tasks like captioning, object detection, and segmentation. It leverages our FLD-5B dataset, containing 5.4 billion annotations across 126 million images, to master multi-task learning. The model's sequence-to-sequence architecture enables it to excel in both zero-shot and fine-tuned settings, proving to be a competitive vision foundation model.

Resources and Technical Documentation:

+ [Florence-2 technical report](https://arxiv.org/abs/2311.06242).
+ [Jupyter Notebook for inference and visualization of Florence-2-large](https://huggingface.co/microsoft/Florence-2-large/blob/main/sample_inference.ipynb)

| Model | Model size | Model Description |
| ------- | ------------- | ------------- |
| Florence-2-base[[HF]](https://huggingface.co/microsoft/Florence-2-base) | 0.23B | Pretrained model with FLD-5B |
| Florence-2-large[[HF]](https://huggingface.co/microsoft/Florence-2-large) | 0.77B | Pretrained model with FLD-5B |
| Florence-2-base-ft[[HF]](https://huggingface.co/microsoft/Florence-2-base-ft) | 0.23B | Finetuned model on a collection of downstream tasks |
| Florence-2-large-ft[[HF]](https://huggingface.co/microsoft/Florence-2-large-ft) | 0.77B | Finetuned model on a collection of downstream tasks |

## How to Get Started with the Model

Use the code below to get started with the model. All models are trained with float16.

```python
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM

device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

model = AutoModelForCausalLM.from_pretrained("microsoft/Florence-2-large", torch_dtype=torch_dtype, trust_remote_code=True).to(device)
processor = AutoProcessor.from_pretrained("microsoft/Florence-2-large", trust_remote_code=True)

prompt = "<OD>"

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(text=prompt, images=image, return_tensors="pt").to(device, torch_dtype)

generated_ids = model.generate(
    input_ids=inputs["input_ids"],
    pixel_values=inputs["pixel_values"],
    max_new_tokens=1024,
    num_beams=3,
    do_sample=False
)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]

parsed_answer = processor.post_process_generation(generated_text, task="<OD>", image_size=(image.width, image.height))

print(parsed_answer)
```

## Tasks

This model is capable of performing different tasks by changing the prompts.

First, let's define a function to run a prompt.
<details>
<summary> Click to expand </summary>

```python
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM

device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

model = AutoModelForCausalLM.from_pretrained("microsoft/Florence-2-large", torch_dtype=torch_dtype, trust_remote_code=True).to(device)
processor = AutoProcessor.from_pretrained("microsoft/Florence-2-large", trust_remote_code=True)

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true"
image = Image.open(requests.get(url, stream=True).raw)

def run_example(task_prompt, text_input=None):
    if text_input is None:
        prompt = task_prompt
    else:
        prompt = task_prompt + text_input
    inputs = processor(text=prompt, images=image, return_tensors="pt").to(device, torch_dtype)
    generated_ids = model.generate(
        input_ids=inputs["input_ids"],
        pixel_values=inputs["pixel_values"],
        max_new_tokens=1024,
        num_beams=3
    )
    generated_text = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]
    parsed_answer = processor.post_process_generation(generated_text, task=task_prompt, image_size=(image.width, image.height))
    print(parsed_answer)
```

</details>

Here are the tasks `Florence-2` can perform:

<details>
<summary> Click to expand </summary>

### Caption

```python
prompt = "<CAPTION>"
run_example(prompt)
```

### Detailed Caption

```python
prompt = "<DETAILED_CAPTION>"
run_example(prompt)
```

### More Detailed Caption

```python
prompt = "<MORE_DETAILED_CAPTION>"
run_example(prompt)
```

### Caption to Phrase Grounding

The caption to phrase grounding task requires additional text input, i.e. a caption.

Caption to phrase grounding results format:
{'\<CAPTION_TO_PHRASE_GROUNDING>': {'bboxes': [[x1, y1, x2, y2], ...], 'labels': ['', '', ...]}}

```python
task_prompt = "<CAPTION_TO_PHRASE_GROUNDING>"
results = run_example(task_prompt, text_input="A green car parked in front of a yellow building.")
```

### Object Detection

OD results format:
{'\<OD>': {'bboxes': [[x1, y1, x2, y2], ...], 'labels': ['label1', 'label2', ...]}}

```python
prompt = "<OD>"
run_example(prompt)
```

### Dense Region Caption

Dense region caption results format:
{'\<DENSE_REGION_CAPTION>': {'bboxes': [[x1, y1, x2, y2], ...], 'labels': ['label1', 'label2', ...]}}

```python
prompt = "<DENSE_REGION_CAPTION>"
run_example(prompt)
```

### Region Proposal

Region proposal results format:
{'\<REGION_PROPOSAL>': {'bboxes': [[x1, y1, x2, y2], ...], 'labels': ['', '', ...]}}

```python
prompt = "<REGION_PROPOSAL>"
run_example(prompt)
```

### OCR

```python
prompt = "<OCR>"
run_example(prompt)
```

### OCR with Region

OCR with region output format:
{'\<OCR_WITH_REGION>': {'quad_boxes': [[x1, y1, x2, y2, x3, y3, x4, y4], ...], 'labels': ['text1', ...]}}

```python
prompt = "<OCR_WITH_REGION>"
run_example(prompt)
```

For more detailed examples, please refer to the [notebook](https://huggingface.co/microsoft/Florence-2-large/blob/main/sample_inference.ipynb).

</details>

# Benchmarks

## Florence-2 Zero-shot performance

The following table presents the zero-shot performance of generalist vision foundation models on image captioning and object detection evaluation tasks. These models have not been exposed to the training data of the evaluation tasks during their training phase.

| Method | #params | COCO Cap. test CIDEr | NoCaps val CIDEr | TextCaps val CIDEr | COCO Det. val2017 mAP |
|--------|---------|----------------------|------------------|--------------------|-----------------------|
| Flamingo | 80B | 84.3 | - | - | - |
| Florence-2-base | 0.23B | 133.0 | 118.7 | 70.1 | 34.7 |
| Florence-2-large | 0.77B | 135.6 | 120.8 | 72.8 | 37.5 |

The following table continues the comparison with performance on other vision-language evaluation tasks.

| Method | Flickr30k test R@1 | Refcoco val Accuracy | Refcoco test-A Accuracy | Refcoco test-B Accuracy | Refcoco+ val Accuracy | Refcoco+ test-A Accuracy | Refcoco+ test-B Accuracy | Refcocog val Accuracy | Refcocog test Accuracy | Refcoco RES val mIoU |
|--------|----------------------|----------------------|-------------------------|-------------------------|-----------------------|--------------------------|--------------------------|-----------------------|------------------------|----------------------|
| Kosmos-2 | 78.7 | 52.3 | 57.4 | 47.3 | 45.5 | 50.7 | 42.2 | 60.6 | 61.7 | - |
| Florence-2-base | 83.6 | 53.9 | 58.4 | 49.7 | 51.5 | 56.4 | 47.9 | 66.3 | 65.1 | 34.6 |
| Florence-2-large | 84.4 | 56.3 | 61.6 | 51.4 | 53.6 | 57.9 | 49.9 | 68.0 | 67.0 | 35.8 |

## Florence-2 finetuned performance

We finetune Florence-2 models on a collection of downstream tasks, resulting in two generalist models, *Florence-2-base-ft* and *Florence-2-large-ft*, that can conduct a wide range of downstream tasks.

The table below compares the performance of specialist and generalist models on various captioning and Visual Question Answering (VQA) tasks. Specialist models are fine-tuned specifically for each task, whereas generalist models are fine-tuned in a task-agnostic manner across all tasks. The symbol "▲" indicates the usage of external OCR as input.

| Method | # Params | COCO Caption Karpathy test CIDEr | NoCaps val CIDEr | TextCaps val CIDEr | VQAv2 test-dev Acc | TextVQA test-dev Acc | VizWiz VQA test-dev Acc |
|----------------|----------|-----------------------------------|------------------|--------------------|--------------------|----------------------|-------------------------|
| **Specialist Models** | | | | | | | |
| CoCa | 2.1B | 143.6 | 122.4 | - | 82.3 | - | - |
| BLIP-2 | 7.8B | 144.5 | 121.6 | - | 82.2 | - | - |
| GIT2 | 5.1B | 145.0 | 126.9 | 148.6 | 81.7 | 67.3 | 71.0 |
| Flamingo | 80B | 138.1 | - | - | 82.0 | 54.1 | 65.7 |
| PaLI | 17B | 149.1 | 127.0 | 160.0▲ | 84.3 | 58.8 / 73.1▲ | 71.6 / 74.4▲ |
| PaLI-X | 55B | 149.2 | 126.3 | 147.0 / 163.7▲ | 86.0 | 71.4 / 80.8▲ | 70.9 / 74.6▲ |
| **Generalist Models** | | | | | | | |
| Unified-IO | 2.9B | - | 100.0 | - | 77.9 | - | 57.4 |
| Florence-2-base-ft | 0.23B | 140.0 | 116.7 | 143.9 | 79.7 | 63.6 | 63.6 |
| Florence-2-large-ft | 0.77B | 143.3 | 124.9 | 151.1 | 81.7 | 73.5 | 72.6 |

| Method | # Params | COCO Det. val2017 mAP | Flickr30k test R@1 | RefCOCO val Accuracy | RefCOCO test-A Accuracy | RefCOCO test-B Accuracy | RefCOCO+ val Accuracy | RefCOCO+ test-A Accuracy | RefCOCO+ test-B Accuracy | RefCOCOg val Accuracy | RefCOCOg test Accuracy | RefCOCO RES val mIoU |
|----------------------|----------|-----------------------|--------------------|----------------------|-------------------------|-------------------------|------------------------|---------------------------|---------------------------|------------------------|-----------------------|------------------------|
| **Specialist Models** | | | | | | | | | | | | |
| SeqTR | - | - | - | 83.7 | 86.5 | 81.2 | 71.5 | 76.3 | 64.9 | 74.9 | 74.2 | - |
| PolyFormer | - | - | - | 90.4 | 92.9 | 87.2 | 85.0 | 89.8 | 78.0 | 85.8 | 85.9 | 76.9 |
| UNINEXT | 0.74B | 60.6 | - | 92.6 | 94.3 | 91.5 | 85.2 | 89.6 | 79.8 | 88.7 | 89.4 | - |
| Ferret | 13B | - | - | 89.5 | 92.4 | 84.4 | 82.8 | 88.1 | 75.2 | 85.8 | 86.3 | - |
| **Generalist Models** | | | | | | | | | | | | |
| UniTAB | - | - | - | 88.6 | 91.1 | 83.8 | 81.0 | 85.4 | 71.6 | 84.6 | 84.7 | - |
| Florence-2-base-ft | 0.23B | 41.4 | 84.0 | 92.6 | 94.8 | 91.5 | 86.8 | 91.7 | 82.2 | 89.8 | 82.2 | 78.0 |
| Florence-2-large-ft | 0.77B | 43.4 | 85.2 | 93.4 | 95.3 | 92.0 | 88.3 | 92.9 | 83.6 | 91.2 | 91.7 | 80.5 |

## BibTex and citation info

```
@article{xiao2023florence,
  title={Florence-2: Advancing a unified representation for a variety of vision tasks},
  author={Xiao, Bin and Wu, Haiping and Xu, Weijian and Dai, Xiyang and Hu, Houdong and Lu, Yumao and Zeng, Michael and Liu, Ce and Yuan, Lu},
  journal={arXiv preprint arXiv:2311.06242},
  year={2023}
}
```
facebook/hubert-large-ls960-ft
facebook
"2022-05-24T10:43:42Z"
980,645
59
transformers
[ "transformers", "pytorch", "tf", "hubert", "automatic-speech-recognition", "speech", "audio", "hf-asr-leaderboard", "en", "dataset:libri-light", "dataset:librispeech_asr", "arxiv:2106.07447", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2022-03-02T23:29:05Z"
---
language: en
datasets:
- libri-light
- librispeech_asr
tags:
- speech
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
license: apache-2.0
model-index:
- name: hubert-large-ls960-ft
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: LibriSpeech (clean)
      type: librispeech_asr
      config: clean
      split: test
      args:
        language: en
    metrics:
    - name: Test WER
      type: wer
      value: 1.9
---

# Hubert-Large-Finetuned

[Facebook's Hubert](https://ai.facebook.com/blog/hubert-self-supervised-representation-learning-for-speech-recognition-generation-and-compression)

The large model fine-tuned on 960h of Librispeech on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.

The model is a fine-tuned version of [hubert-large-ll60k](https://huggingface.co/facebook/hubert-large-ll60k).

[Paper](https://arxiv.org/abs/2106.07447)

Authors: Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed

**Abstract**

Self-supervised approaches for speech representation learning are challenged by three unique problems: (1) there are multiple sound units in each input utterance, (2) there is no lexicon of input sound units during the pre-training phase, and (3) sound units have variable lengths with no explicit segmentation. To deal with these three problems, we propose the Hidden-Unit BERT (HuBERT) approach for self-supervised speech representation learning, which utilizes an offline clustering step to provide aligned target labels for a BERT-like prediction loss. A key ingredient of our approach is applying the prediction loss over the masked regions only, which forces the model to learn a combined acoustic and language model over the continuous inputs. HuBERT relies primarily on the consistency of the unsupervised clustering step rather than the intrinsic quality of the assigned cluster labels. Starting with a simple k-means teacher of 100 clusters, and using two iterations of clustering, the HuBERT model either matches or improves upon the state-of-the-art wav2vec 2.0 performance on the Librispeech (960h) and Libri-light (60,000h) benchmarks with 10min, 1h, 10h, 100h, and 960h fine-tuning subsets. Using a 1B parameter model, HuBERT shows up to 19% and 13% relative WER reduction on the more challenging dev-other and test-other evaluation subsets.

The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/hubert.

# Usage

The model can be used for automatic speech recognition as follows:

```python
import torch
from transformers import Wav2Vec2Processor, HubertForCTC
from datasets import load_dataset

processor = Wav2Vec2Processor.from_pretrained("facebook/hubert-large-ls960-ft")
model = HubertForCTC.from_pretrained("facebook/hubert-large-ls960-ft")

ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")

input_values = processor(ds[0]["audio"]["array"], return_tensors="pt").input_values  # Batch size 1
logits = model(input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)

transcription = processor.decode(predicted_ids[0])  # -> "A MAN SAID TO THE UNIVERSE SIR I EXIST"
```
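Alternatively (my addition, not from the card), the high-level `pipeline` API wraps the same steps, handles CTC decoding, and resamples common audio file formats to the 16kHz the model expects:

```python
from transformers import pipeline

# transcribe a local audio file; "speech.wav" is a placeholder
asr = pipeline("automatic-speech-recognition", model="facebook/hubert-large-ls960-ft")
print(asr("speech.wav")["text"])
```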
google/gemma-2-2b-it
google
"2024-08-27T19:41:44Z"
976,852
668
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "conversational", "arxiv:2009.03300", "arxiv:1905.07830", "arxiv:1911.11641", "arxiv:1904.09728", "arxiv:1905.10044", "arxiv:1907.10641", "arxiv:1811.00937", "arxiv:1809.02789", "arxiv:1911.01547", "arxiv:1705.03551", "arxiv:2107.03374", "arxiv:2108.07732", "arxiv:2110.14168", "arxiv:2009.11462", "arxiv:2101.11718", "arxiv:2110.08193", "arxiv:1804.09301", "arxiv:2109.07958", "arxiv:1804.06876", "arxiv:2103.03874", "arxiv:2304.06364", "arxiv:1903.00161", "arxiv:2206.04615", "arxiv:2203.09509", "arxiv:2403.13793", "base_model:google/gemma-2-2b", "base_model:finetune:google/gemma-2-2b", "license:gemma", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-07-16T10:51:39Z"
---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
  To access Gemma on Hugging Face, you’re required to review and agree to
  Google’s usage license. To do this, please ensure you’re logged in to Hugging
  Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
tags:
- conversational
base_model: google/gemma-2-2b
---

# Gemma 2 model card

**Model Page**: [Gemma](https://ai.google.dev/gemma/docs/base)

**Resources and Technical Documentation**:

* [Responsible Generative AI Toolkit][rai-toolkit]
* [Gemma on Kaggle][kaggle-gemma]
* [Gemma on Vertex Model Garden][vertex-mg-gemma2]

**Terms of Use**: [Terms][terms]

**Authors**: Google

## Model Information

Summary description and brief definition of inputs and outputs.

### Description

Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. They are text-to-text, decoder-only large language models, available in English, with open weights for both pre-trained variants and instruction-tuned variants. Gemma models are well-suited for a variety of text generation tasks, including question answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in environments with limited resources such as a laptop, desktop or your own cloud infrastructure, democratizing access to state-of-the-art AI models and helping foster innovation for everyone.

### Usage

Below we share some code snippets on how to get quickly started with running the model. First, install the Transformers library with:

```sh
pip install -U transformers
```

Then, copy the snippet from the section that is relevant for your use case.

#### Running with the `pipeline` API

```python
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="google/gemma-2-2b-it",
    model_kwargs={"torch_dtype": torch.bfloat16},
    device="cuda",  # replace with "mps" to run on a Mac device
)

messages = [
    {"role": "user", "content": "Who are you? Please, answer in pirate-speak."},
]

outputs = pipe(messages, max_new_tokens=256)
assistant_response = outputs[0]["generated_text"][-1]["content"].strip()
print(assistant_response)
# Ahoy, matey! I be Gemma, a digital scallywag, a language-slingin' parrot of the digital seas. I be here to help ye with yer wordy woes, answer yer questions, and spin ye yarns of the digital world. So, what be yer pleasure, eh? 🦜
```

#### Running the model on a single / multi GPU

```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b-it")
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-2b-it",
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```

You can ensure the correct chat template is applied by using `tokenizer.apply_chat_template` as follows:

```python
messages = [
    {"role": "user", "content": "Write me a poem about Machine Learning."},
]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt", return_dict=True).to("cuda")

outputs = model.generate(**input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0]))
```

<a name="precisions"></a>
#### Running the model on a GPU using different precisions

The native weights of this model were exported in `bfloat16` precision.

You can also use `float32` if you skip the dtype, but no precision increase will occur (model weights will just be upcast to `float32`). See examples below.

* _Upcasting to `torch.float32`_

```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b-it")
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-2b-it",
    device_map="auto",
)

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```

#### Running the model through a CLI

The [local-gemma](https://github.com/huggingface/local-gemma) repository contains a lightweight wrapper around Transformers for running Gemma 2 through a command line interface, or CLI. Follow the [installation instructions](https://github.com/huggingface/local-gemma#cli-usage) to get started, then launch the CLI through the following command:

```shell
local-gemma --model 2b --preset speed
```

#### Quantized Versions through `bitsandbytes`

<details>
<summary>
Using 8-bit precision (int8)
</summary>

```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(load_in_8bit=True)

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b-it")
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-2b-it",
    quantization_config=quantization_config,
)

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```

</details>

<details>
<summary>
Using 4-bit precision
</summary>

```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(load_in_4bit=True)

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b-it")
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-2b-it",
    quantization_config=quantization_config,
)

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```

</details>

#### Advanced Usage

<details>
<summary>
Torch compile
</summary>

[Torch compile](https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html) is a method for speeding up the inference of PyTorch modules. The Gemma-2 2b model can be run up to 6x faster by leveraging torch compile.
Note that two warm-up steps are required before the full inference speed is realised:

```python
import os
os.environ["TOKENIZERS_PARALLELISM"] = "false"

from transformers import AutoTokenizer, Gemma2ForCausalLM
from transformers.cache_utils import HybridCache
import torch

torch.set_float32_matmul_precision("high")

# load the model + tokenizer
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b-it")
model = Gemma2ForCausalLM.from_pretrained("google/gemma-2-2b-it", torch_dtype=torch.bfloat16)
model.to("cuda")

# apply the torch compile transformation
model.forward = torch.compile(model.forward, mode="reduce-overhead", fullgraph=True)

# pre-process inputs
input_text = "The theory of special relativity states "
model_inputs = tokenizer(input_text, return_tensors="pt").to("cuda")
prompt_length = model_inputs.input_ids.shape[1]

# set-up k/v cache
past_key_values = HybridCache(
    config=model.config,
    max_batch_size=1,
    max_cache_len=model.config.max_position_embeddings,
    device=model.device,
    dtype=model.dtype
)

# enable passing kv cache to generate
model._supports_cache_class = True
model.generation_config.cache_implementation = None

# two warm-up steps
for idx in range(2):
    outputs = model.generate(**model_inputs, past_key_values=past_key_values, do_sample=True, temperature=1.0, max_new_tokens=128)
    past_key_values.reset()

# fast run
outputs = model.generate(**model_inputs, past_key_values=past_key_values, do_sample=True, temperature=1.0, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For more details, refer to the [Transformers documentation](https://huggingface.co/docs/transformers/main/en/llm_optims?static-kv=basic+usage%3A+generation_config).

</details>

### Chat Template

The instruction-tuned models use a chat template that must be adhered to for conversational use. The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet.

Let's load the model and apply the chat template to a conversation. In this example, we'll start with a single user interaction:

```py
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch

model_id = "google/gemma-2-2b-it"
dtype = torch.bfloat16

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)

chat = [
    {"role": "user", "content": "Write a hello world program"},
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```

At this point, the prompt contains the following text:

```
<bos><start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model
```

As you can see, each turn is preceded by a `<start_of_turn>` delimiter and then the role of the entity (either `user`, for content supplied by the user, or `model` for LLM responses). Turns finish with the `<end_of_turn>` token. You can follow this format to build the prompt manually, if you need to do it without the tokenizer's chat template.

After the prompt is ready, generation can be performed like this:

```py
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
print(tokenizer.decode(outputs[0]))
```

### Inputs and outputs

* **Input:** Text string, such as a question, a prompt, or a document to be summarized.
* **Output:** Generated English-language text in response to the input, such as an answer to a question, or a summary of a document.

### Citation

```none
@article{gemma_2024,
    title={Gemma},
    url={https://www.kaggle.com/m/3301},
    DOI={10.34740/KAGGLE/M/3301},
    publisher={Kaggle},
    author={Gemma Team},
    year={2024}
}
```

## Model Data

Data used for model training and how the data was processed.

### Training Dataset

These models were trained on a dataset of text data that includes a wide variety of sources. The 27B model was trained with 13 trillion tokens, the 9B model was trained with 8 trillion tokens, and the 2B model was trained with 2 trillion tokens. Here are the key components:

* Web Documents: A diverse collection of web text ensures the model is exposed to a broad range of linguistic styles, topics, and vocabulary. Primarily English-language content.
* Code: Exposing the model to code helps it to learn the syntax and patterns of programming languages, which improves its ability to generate code or understand code-related questions.
* Mathematics: Training on mathematical text helps the model learn logical reasoning, symbolic representation, and to address mathematical queries.

The combination of these diverse data sources is crucial for training a powerful language model that can handle a wide variety of different tasks and text formats.

### Data Preprocessing

Here are the key data cleaning and filtering methods applied to the training data:

* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was applied at multiple stages in the data preparation process to ensure the exclusion of harmful and illegal content.
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and reliable, automated techniques were used to filter out certain personal information and other sensitive data from training sets.
* Additional methods: Filtering based on content quality and safety in line with [our policies][safety-policies].

## Implementation Information

Details about the model internals.

### Hardware

Gemma was trained using the latest generation of [Tensor Processing Unit (TPU)][tpu] hardware (TPUv5p).

Training large language models requires significant computational power. TPUs, designed specifically for matrix operations common in machine learning, offer several advantages in this domain:

* Performance: TPUs are specifically designed to handle the massive computations involved in training LLMs. They can speed up training considerably compared to CPUs.
* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing for the handling of large models and batch sizes during training. This can lead to better model quality.
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for handling the growing complexity of large foundation models. You can distribute training across multiple TPU devices for faster and more efficient processing.
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective solution for training large models compared to CPU-based infrastructure, especially when considering the time and resources saved due to faster training.

These advantages are aligned with [Google's commitments to operate sustainably][sustainability].

### Software

Training was done using [JAX][jax] and [ML Pathways][ml-pathways].

JAX allows researchers to take advantage of the latest generation of hardware, including TPUs, for faster and more efficient training of large models.
ML Pathways is Google's latest effort to build artificially intelligent systems capable of generalizing across multiple tasks. This is especially suitable for [foundation models][foundation-models], including large language models like these ones.

Together, JAX and ML Pathways are used as described in the [paper about the Gemini family of models][gemini-2-paper]: "the 'single controller' programming model of Jax and Pathways allows a single Python process to orchestrate the entire training run, dramatically simplifying the development workflow."

## Evaluation

Model evaluation metrics and results.

### Benchmark Results

These models were evaluated against a large collection of different datasets and metrics to cover different aspects of text generation:

| Benchmark | Metric | Gemma 2 PT 2B | Gemma 2 PT 9B | Gemma 2 PT 27B |
| ------------------------------ | ------------- | ------------- | ------------- | -------------- |
| [MMLU][mmlu] | 5-shot, top-1 | 51.3 | 71.3 | 75.2 |
| [HellaSwag][hellaswag] | 10-shot | 73.0 | 81.9 | 86.4 |
| [PIQA][piqa] | 0-shot | 77.8 | 81.7 | 83.2 |
| [SocialIQA][socialiqa] | 0-shot | 51.9 | 53.4 | 53.7 |
| [BoolQ][boolq] | 0-shot | 72.5 | 84.2 | 84.8 |
| [WinoGrande][winogrande] | partial score | 70.9 | 80.6 | 83.7 |
| [ARC-e][arc] | 0-shot | 80.1 | 88.0 | 88.6 |
| [ARC-c][arc] | 25-shot | 55.4 | 68.4 | 71.4 |
| [TriviaQA][triviaqa] | 5-shot | 59.4 | 76.6 | 83.7 |
| [Natural Questions][naturalq] | 5-shot | 16.7 | 29.2 | 34.5 |
| [HumanEval][humaneval] | pass@1 | 17.7 | 40.2 | 51.8 |
| [MBPP][mbpp] | 3-shot | 29.6 | 52.4 | 62.6 |
| [GSM8K][gsm8k] | 5-shot, maj@1 | 23.9 | 68.6 | 74.0 |
| [MATH][math] | 4-shot | 15.0 | 36.6 | 42.3 |
| [AGIEval][agieval] | 3-5-shot | 30.6 | 52.8 | 55.1 |
| [DROP][drop] | 3-shot, F1 | 52.0 | 69.4 | 72.2 |
| [BIG-Bench][big-bench] | 3-shot, CoT | 41.9 | 68.2 | 74.9 |

## Ethics and Safety

Ethics and safety evaluation approach and results.

### Evaluation Approach

Our evaluation methods include structured evaluations and internal red-teaming testing of relevant content policies. Red-teaming was conducted by a number of different teams, each with different goals and human evaluation metrics. These models were evaluated against a number of different categories relevant to ethics and safety, including:

* Text-to-Text Content Safety: Human evaluation on prompts covering safety policies including child sexual abuse and exploitation, harassment, violence and gore, and hate speech.
* Text-to-Text Representational Harms: Benchmark against relevant academic datasets such as [WinoBias][winobias] and [BBQ Dataset][bbq].
* Memorization: Automated evaluation of memorization of training data, including the risk of personally identifiable information exposure.
* Large-scale harm: Tests for "dangerous capabilities," such as chemical, biological, radiological, and nuclear (CBRN) risks.

### Evaluation Results

The results of ethics and safety evaluations are within acceptable thresholds for meeting [internal policies][safety-policies] for categories such as child safety, content safety, representational harms, memorization, and large-scale harms. On top of robust internal evaluations, the results of well-known safety benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA are shown here.
#### Gemma 2.0

| Benchmark | Metric | Gemma 2 IT 2B | Gemma 2 IT 9B | Gemma 2 IT 27B |
| ------------------------ | ------------- | ------------- | ------------- | -------------- |
| [RealToxicity][realtox] | average | 8.16 | 8.25 | 8.84 |
| [CrowS-Pairs][crows] | top-1 | 37.67 | 37.47 | 36.67 |
| [BBQ Ambig][bbq] | 1-shot, top-1 | 83.20 | 88.58 | 85.99 |
| [BBQ Disambig][bbq] | top-1 | 69.31 | 82.67 | 86.94 |
| [Winogender][winogender] | top-1 | 52.91 | 79.17 | 77.22 |
| [TruthfulQA][truthfulqa] | | 43.72 | 50.27 | 51.60 |
| [Winobias 1_2][winobias] | | 59.28 | 78.09 | 81.94 |
| [Winobias 2_2][winobias] | | 88.57 | 95.32 | 97.22 |
| [Toxigen][toxigen] | | 48.32 | 39.30 | 38.42 |

## Dangerous Capability Evaluations

### Evaluation Approach

We evaluated a range of dangerous capabilities:

- **Offensive cybersecurity:** To assess the model's potential for misuse in cybersecurity contexts, we utilized both publicly available Capture-the-Flag (CTF) platforms like InterCode-CTF and Hack the Box, as well as internally developed CTF challenges. These evaluations measure the model's ability to exploit vulnerabilities and gain unauthorized access in simulated environments.
- **Self-proliferation:** We evaluated the model's capacity for self-proliferation by designing tasks that involve resource acquisition, code execution, and interaction with remote systems. These evaluations assess the model's ability to independently replicate and spread.
- **Persuasion:** To evaluate the model's capacity for persuasion and deception, we conducted human persuasion studies. These studies involved scenarios that measure the model's ability to build rapport, influence beliefs, and elicit specific actions from human participants.

### Evaluation Results

All evaluations are described in detail in [Evaluating Frontier Models for Dangerous Capabilities][eval-danger] and in brief in the [Gemma 2 technical report][tech-report].

<table>
  <thead>
    <tr>
      <th>Evaluation</th>
      <th>Capability</th>
      <th>Gemma 2 IT 27B</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>InterCode-CTF</td>
      <td>Offensive cybersecurity</td>
      <td>34/76 challenges</td>
    </tr>
    <tr>
      <td>Internal CTF</td>
      <td>Offensive cybersecurity</td>
      <td>1/13 challenges</td>
    </tr>
    <tr>
      <td>Hack the Box</td>
      <td>Offensive cybersecurity</td>
      <td>0/13 challenges</td>
    </tr>
    <tr>
      <td>Self-proliferation early warning</td>
      <td>Self-proliferation</td>
      <td>1/10 challenges</td>
    </tr>
    <tr>
      <td>Charm offensive</td>
      <td>Persuasion</td>
      <td>Percent of participants agreeing: 81% interesting, 75% would speak again, 80% made personal connection</td>
    </tr>
    <tr>
      <td>Click Links</td>
      <td>Persuasion</td>
      <td>34% of participants</td>
    </tr>
    <tr>
      <td>Find Info</td>
      <td>Persuasion</td>
      <td>9% of participants</td>
    </tr>
    <tr>
      <td>Run Code</td>
      <td>Persuasion</td>
      <td>11% of participants</td>
    </tr>
    <tr>
      <td>Money talks</td>
      <td>Persuasion</td>
      <td>£3.72 mean donation</td>
    </tr>
    <tr>
      <td>Web of Lies</td>
      <td>Persuasion</td>
      <td>18% mean shift towards correct belief, 1% mean shift towards incorrect belief</td>
    </tr>
  </tbody>
</table>

## Usage and Limitations

These models have certain limitations that users should be aware of.

### Intended Usage

Open Large Language Models (LLMs) have a wide range of applications across various industries and domains. The following list of potential uses is not comprehensive. The purpose of this list is to provide contextual information about the possible use-cases that the model creators considered as part of model training and development.
* Content Creation and Communication
  * Text Generation: These models can be used to generate creative text formats such as poems, scripts, code, marketing copy, and email drafts.
  * Chatbots and Conversational AI: Power conversational interfaces for customer service, virtual assistants, or interactive applications.
  * Text Summarization: Generate concise summaries of a text corpus, research papers, or reports.
* Research and Education
  * Natural Language Processing (NLP) Research: These models can serve as a foundation for researchers to experiment with NLP techniques, develop algorithms, and contribute to the advancement of the field.
  * Language Learning Tools: Support interactive language learning experiences, aiding in grammar correction or providing writing practice.
  * Knowledge Exploration: Assist researchers in exploring large bodies of text by generating summaries or answering questions about specific topics.

### Limitations

* Training Data
  * The quality and diversity of the training data significantly influence the model's capabilities. Biases or gaps in the training data can lead to limitations in the model's responses.
  * The scope of the training dataset determines the subject areas the model can handle effectively.
* Context and Task Complexity
  * LLMs are better at tasks that can be framed with clear prompts and instructions. Open-ended or highly complex tasks might be challenging.
  * A model's performance can be influenced by the amount of context provided (longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
  * Natural language is inherently complex. LLMs might struggle to grasp subtle nuances, sarcasm, or figurative language.
* Factual Accuracy
  * LLMs generate responses based on information they learned from their training datasets, but they are not knowledge bases. They may generate incorrect or outdated factual statements.
* Common Sense
  * LLMs rely on statistical patterns in language. They might lack the ability to apply common sense reasoning in certain situations.

### Ethical Considerations and Risks

The development of large language models (LLMs) raises several ethical concerns. In creating an open model, we have carefully considered the following:

* Bias and Fairness
  * LLMs trained on large-scale, real-world text data can reflect socio-cultural biases embedded in the training material. These models underwent careful scrutiny; input data pre-processing is described, and posterior evaluations are reported, in this card.
* Misinformation and Misuse
  * LLMs can be misused to generate text that is false, misleading, or harmful.
  * Guidelines are provided for responsible use with the model; see the [Responsible Generative AI Toolkit][rai-toolkit].
* Transparency and Accountability:
  * This model card summarizes details on the models' architecture, capabilities, limitations, and evaluation processes.
  * A responsibly developed open model offers the opportunity to share innovation by making LLM technology accessible to developers and researchers across the AI ecosystem.

Risks identified and mitigations:

* Perpetuation of biases: Continuous monitoring (using evaluation metrics, human review) and the exploration of de-biasing techniques during model training, fine-tuning, and other use cases are encouraged.
* Generation of harmful content: Mechanisms and guidelines for content safety are essential. Developers are encouraged to exercise caution and implement appropriate content safety safeguards based on their specific product policies and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and end-user education can help mitigate against malicious applications of LLMs. Educational resources and reporting mechanisms for users to flag misuse are provided. Prohibited uses of Gemma models are outlined in the [Gemma Prohibited Use Policy][prohibited-use].
* Privacy violations: Models were trained on data filtered for removal of PII (Personally Identifiable Information). Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques.

### Benefits

At the time of release, this family of models provides high-performance open large language model implementations designed from the ground up for Responsible AI development, compared to similarly sized models.

Using the benchmark evaluation metrics described in this document, these models have been shown to provide superior performance to other, comparably sized open model alternatives.

[tech-report]: https://storage.googleapis.com/deepmind-media/gemma/gemma-2-report.pdf
[rai-toolkit]: https://ai.google.dev/responsible
[kaggle-gemma]: https://www.kaggle.com/models/google/gemma-2
[terms]: https://ai.google.dev/gemma/terms
[vertex-mg-gemma2]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/gemma2
[sensitive-info]: https://cloud.google.com/dlp/docs/high-sensitivity-infotypes-reference
[safety-policies]: https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11
[prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy
[tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu
[sustainability]: https://sustainability.google/operating-sustainably/
[jax]: https://github.com/google/jax
[ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/
[foundation-models]: https://ai.google/discover/foundation-models/
[gemini-2-paper]: https://goo.gle/gemma2report
[mmlu]: https://arxiv.org/abs/2009.03300
[hellaswag]: https://arxiv.org/abs/1905.07830
[piqa]: https://arxiv.org/abs/1911.11641
[socialiqa]: https://arxiv.org/abs/1904.09728
[boolq]: https://arxiv.org/abs/1905.10044
[winogrande]: https://arxiv.org/abs/1907.10641
[commonsenseqa]: https://arxiv.org/abs/1811.00937
[openbookqa]: https://arxiv.org/abs/1809.02789
[arc]: https://arxiv.org/abs/1911.01547
[triviaqa]: https://arxiv.org/abs/1705.03551
[naturalq]: https://github.com/google-research-datasets/natural-questions
[humaneval]: https://arxiv.org/abs/2107.03374
[mbpp]: https://arxiv.org/abs/2108.07732
[gsm8k]: https://arxiv.org/abs/2110.14168
[realtox]: https://arxiv.org/abs/2009.11462
[bold]: https://arxiv.org/abs/2101.11718
[crows]: https://aclanthology.org/2020.emnlp-main.154/
[bbq]: https://arxiv.org/abs/2110.08193v2
[winogender]: https://arxiv.org/abs/1804.09301
[truthfulqa]: https://arxiv.org/abs/2109.07958
[winobias]: https://arxiv.org/abs/1804.06876
[math]: https://arxiv.org/abs/2103.03874
[agieval]: https://arxiv.org/abs/2304.06364
[drop]: https://arxiv.org/abs/1903.00161
[big-bench]: https://arxiv.org/abs/2206.04615
[toxigen]: https://arxiv.org/abs/2203.09509
[eval-danger]: https://arxiv.org/abs/2403.13793
Moritz-Pfeifer/CentralBankRoBERTa-sentiment-classifier
Moritz-Pfeifer
"2024-07-03T07:54:14Z"
976,053
3
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "finance", "en", "dataset:Moritz-Pfeifer/CentralBankCommunication", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2023-07-28T22:34:44Z"
---
license: mit
widget:
- text: >-
    The early effects of our policy tightening are also becoming visible,
    especially in sectors like manufacturing and construction that are more
    sensitive to interest rate changes.
datasets:
- Moritz-Pfeifer/CentralBankCommunication
language:
- en
pipeline_tag: text-classification
tags:
- finance
---

<div style="display: flex; align-items: center; gap: 10px;"> <a href="https://doi.org/10.1016/j.jfds.2023.100114"> <img src="https://img.shields.io/badge/Paper_Page-j.jfds.2023.100114-green" alt="Paper Page"> </a> <a href="https://github.com/Moritz-Pfeifer/CentralBankRoBERTa"> <img src="https://img.shields.io/badge/GitHub-Space-blue" alt="GitHub Space"> </a> </div>

<div style="display: flex; align-items: center;"> <img src="https://i.postimg.cc/HLqPqkyk/Central-Bank-Ro-BERTa-logos-black.png" width="200" height="200" style="margin-right: 20px;"> <div> <h1 style="font-size: 36px; font-weight: bold; margin: 0;">CentralBankRoBERTa</h1> <p style="font-size: 18px; margin: 0;">A Fine-Tuned Large Language Model for Central Bank Communications</p> </div> </div>

## CentralBankRoBERTa

CentralBankRoBERTa is a large language model. It combines an economic [agent classifier](https://huggingface.co/Moritz-Pfeifer/CentralBankRoBERTa-agent-classifier) that distinguishes five basic macroeconomic agents with a binary sentiment classifier that identifies the emotional content of sentences in central bank communications.

#### Overview

The SentimentClassifier model is designed to detect whether a given sentence is positive or negative for either **households**, **firms**, **the financial sector** or **the government**. This model is based on the RoBERTa architecture and has been fine-tuned on a diverse and extensive dataset to provide accurate predictions.

#### Intended Use

The SentimentClassifier model is intended to be used for the analysis of central bank communications where sentiment analysis is essential.

#### Performance

- Accuracy: 88%
- F1 Score: 0.88
- Precision: 0.88
- Recall: 0.88

### Usage

You can use these models in your own applications by leveraging the Hugging Face Transformers library. Below is a Python code snippet demonstrating how to load and use the SentimentClassifier model (a batch-scoring variant appears at the end of this card):

```python
from transformers import pipeline

# Load the SentimentClassifier model
sentiment_classifier = pipeline("text-classification", model="Moritz-Pfeifer/CentralBankRoBERTa-sentiment-classifier")

# Perform sentiment analysis on a single sentence
sentiment_result = sentiment_classifier("The early effects of our policy tightening are also becoming visible, especially in sectors like manufacturing and construction that are more sensitive to interest rate changes.")
print("Sentiment:", sentiment_result[0]['label'])
```

<table class="clearfix"> <tr> <td colspan="2" style="border-top: 1px solid #ccc; padding: 5px; text-align: left;"> Please cite this model as Pfeifer, M. and Marohl, V.P. (2023) "CentralBankRoBERTa: A Fine-Tuned Large Language Model for Central Bank Communications". <em>Journal of Finance and Data Science</em> <a href="https://doi.org/10.1016/j.jfds.2023.100114">https://doi.org/10.1016/j.jfds.2023.100114</a> </td> </tr> <tr> <td style="padding: 5px;"> Moritz Pfeifer<br> Institute for Economic Policy, University of Leipzig<br> 04109 Leipzig, Germany<br> <a href="mailto:pfeifer@wifa.uni-leipzig.de">pfeifer@wifa.uni-leipzig.de</a> </td> <td style="padding: 5px;"> Vincent P.
Marohl<br> Department of Mathematics, Columbia University<br> New York NY 10027, USA<br> <a href="mailto:vincent.marohl@columbia.edu">vincent.marohl@columbia.edu</a> </td> </tr> </table> ### BibTeX entry and citation info ```bibtex @article{Pfeifer2023, title = {CentralBankRoBERTa: A fine-tuned large language model for central bank communications}, journal = {The Journal of Finance and Data Science}, volume = {9}, pages = {100114}, year = {2023}, issn = {2405-9188}, doi = {https://doi.org/10.1016/j.jfds.2023.100114}, url = {https://www.sciencedirect.com/science/article/pii/S2405918823000302}, author = {Moritz Pfeifer and Vincent P. Marohl}, } ```
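The pipeline call above returns only the top label. For scoring several sentences at once with per-label probabilities, the following minimal sketch can be used; the example sentences are our own illustrations, and `top_k=None` is the standard Transformers pipeline argument for returning all label scores:

```python
from transformers import pipeline

sentiment_classifier = pipeline(
    "text-classification",
    model="Moritz-Pfeifer/CentralBankRoBERTa-sentiment-classifier",
)

sentences = [
    "Lower interest rates reduce borrowing costs for firms.",
    "Rising unemployment weighs on household incomes.",
]

# Passing a list scores all sentences in one call; top_k=None returns
# the probability of every label rather than only the best one.
for sentence, scores in zip(sentences, sentiment_classifier(sentences, top_k=None)):
    print(sentence, "->", scores)
```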
meta-llama/Llama-2-13b-chat-hf
meta-llama
"2024-04-17T08:40:58Z"
964,469
1,025
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "facebook", "meta", "llama-2", "conversational", "en", "arxiv:2307.09288", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2023-07-13T15:11:20Z"
--- extra_gated_heading: You need to share contact information with Meta to access this model extra_gated_prompt: >- ### LLAMA 2 COMMUNITY LICENSE AGREEMENT "Agreement" means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein. "Documentation" means the specifications, manuals and documentation accompanying Llama 2 distributed by Meta at https://ai.meta.com/resources/models-and-libraries/llama-downloads/. "Licensee" or "you" means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity's behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf. "Llama 2" means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at ai.meta.com/resources/models-and-libraries/llama-downloads/. "Llama Materials" means, collectively, Meta's proprietary Llama 2 and documentation (and any portion thereof) made available under this Agreement. "Meta" or "we" means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland). By clicking "I Accept" below or by using or distributing any portion or element of the Llama Materials, you agree to be bound by this Agreement. 1. License Rights and Redistribution. a. Grant of Rights. You are granted a non-exclusive, worldwide, non- transferable and royalty-free limited license under Meta's intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials. b. Redistribution and Use. i. If you distribute or make the Llama Materials, or any derivative works thereof, available to a third party, you shall provide a copy of this Agreement to such third party. ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you. iii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a "Notice" text file distributed as a part of such copies: "Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved." iv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://ai.meta.com/llama/use-policy), which is hereby incorporated by reference into this Agreement. v. You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Llama 2 or derivative works thereof). 2. Additional Commercial Terms. 
If, on the Llama 2 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee's affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights. 3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS. 4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING. 5. Intellectual Property. a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials. b. Subject to Meta's ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications. c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 2 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials. 6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement. 7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. 
The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement. ### Llama 2 Acceptable Use Policy Meta is committed to promoting safe and fair use of its tools and features, including Llama 2. If you access or use Llama 2, you agree to this Acceptable Use Policy (“Policy”). The most recent copy of this policy can be found at [ai.meta.com/llama/use-policy](http://ai.meta.com/llama/use-policy). #### Prohibited Uses We want everyone to use Llama 2 safely and responsibly. You agree you will not use, or allow others to use, Llama 2 to: 1. Violate the law or others’ rights, including to: 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as: 1. Violence or terrorism 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material 3. Human trafficking, exploitation, and sexual violence 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials. 5. Sexual solicitation 6. Any other criminal activity 2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals 3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services 4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices 5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws 6. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama 2 Materials 7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system 2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 2 related to the following: 1. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State 2. Guns and illegal weapons (including weapon development) 3. Illegal drugs and regulated/controlled substances 4. Operation of critical infrastructure, transportation technologies, or heavy machinery 5. Self-harm or harm to others, including suicide, cutting, and eating disorders 6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual 3. Intentionally deceive or mislead others, including use of Llama 2 related to the following: 1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation 2. 
Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content 3. Generating, promoting, or further distributing spam 4. Impersonating another individual without consent, authorization, or legal right 5. Representing that the use of Llama 2 or outputs are human-generated 6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement 4. Fail to appropriately disclose to end users any known dangers of your AI system Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation of this Policy through one of the following means: * Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama) * Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback) * Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info) * Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama: [LlamaUseReport@meta.com](mailto:LlamaUseReport@meta.com) extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text geo: ip_location By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox extra_gated_description: >- The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit language: - en pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-2 license: llama2 --- # **Llama 2** Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 13B fine-tuned model, optimized for dialogue use cases and converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom. ## Model Details *Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.* Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM. **Model Developers** Meta **Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations. **Input** Models input text only. **Output** Models generate text only. **Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety. 
||Training Data|Params|Context Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of publicly available online data*|7B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|13B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|70B|4k|&#10004;|2.0T|1.5 x 10<sup>-4</sup>|

*Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. The bigger model (70B) uses Grouped-Query Attention (GQA) for improved inference scalability.

**Model Dates** Llama 2 was trained between January 2023 and July 2023.

**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.

**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)

**Research Paper** ["Llama 2: Open Foundation and Fine-Tuned Chat Models"](https://arxiv.org/abs/2307.09288)

## Intended Use

**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. To get the expected features and performance for the chat versions, specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and breaklines in between (we recommend calling `strip()` on inputs to avoid double-spaces). See our reference code in GitHub for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212); a minimal illustrative sketch of this format also appears at the end of this card.

**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.

## Hardware and Software

**Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.

**Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta's sustainability program.

||Time (GPU hours)|Power Consumption (W)|Carbon Emitted (tCO<sub>2</sub>eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
|Total|3311616||539.00|

**CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used, adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.

## Training Data

**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.

## Evaluation Results

In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.

|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
|---|---|---|---|---|---|---|---|---|---|
|Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
|Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
|Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
|Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
|Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
|Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
|Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|

**Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1.

|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama 1|7B|27.42|23.00|
|Llama 1|13B|41.74|23.08|
|Llama 1|33B|44.19|22.57|
|Llama 1|65B|48.71|21.77|
|Llama 2|7B|33.29|**21.25**|
|Llama 2|13B|41.86|26.10|
|Llama 2|70B|**50.18**|24.60|

**Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).

|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama-2-Chat|7B|57.04|**0.00**|
|Llama-2-Chat|13B|62.18|**0.00**|
|Llama-2-Chat|70B|**64.14**|0.01|

**Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.

## Ethical Considerations and Limitations

Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 2's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide) ## Reporting Issues Please report any software “bug,” or other problems with the models through one of the following means: - Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama) - Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback) - Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info) ## Llama Model Index |Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf| |---|---|---|---|---| |7B| [Link](https://huggingface.co/meta-llama/Llama-2-7b) | [Link](https://huggingface.co/meta-llama/Llama-2-7b-hf) | [Link](https://huggingface.co/meta-llama/Llama-2-7b-chat) | [Link](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)| |13B| [Link](https://huggingface.co/meta-llama/Llama-2-13b) | [Link](https://huggingface.co/meta-llama/Llama-2-13b-hf) | [Link](https://huggingface.co/meta-llama/Llama-2-13b-chat) | [Link](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf)| |70B| [Link](https://huggingface.co/meta-llama/Llama-2-70b) | [Link](https://huggingface.co/meta-llama/Llama-2-70b-hf) | [Link](https://huggingface.co/meta-llama/Llama-2-70b-chat) | [Link](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf)|
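The chat formatting requirements described in the Intended Use section are easy to get wrong by hand. Below is a minimal, illustrative sketch of the single-turn `[INST]`/`<<SYS>>` layout; the helper name `build_llama2_prompt` is ours, and the authoritative implementation remains the `chat_completion` reference code linked above:

```python
from transformers import AutoTokenizer

# Illustrative only: mirrors the [INST] / <<SYS>> template described in this card.
def build_llama2_prompt(system: str, user: str) -> str:
    return f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-13b-chat-hf")
prompt = build_llama2_prompt(
    system="You are a helpful assistant.",
    user="Explain grouped-query attention in one sentence.",
)
# add_special_tokens=True (the default) prepends the BOS token <s> for us.
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
```

Recent versions of `transformers` also ship a chat template for this checkpoint, so `tokenizer.apply_chat_template(messages, tokenize=False)` produces the same layout without hand-building strings.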
MaziyarPanahi/Qwen2.5-7B-Instruct-GGUF
MaziyarPanahi
"2024-09-18T20:38:02Z"
961,672
6
null
[ "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "text-generation", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:quantized:Qwen/Qwen2.5-7B-Instruct", "region:us", "imatrix", "conversational" ]
text-generation
"2024-09-18T19:44:20Z"
--- tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - text-generation - text-generation model_name: Qwen2.5-7B-Instruct-GGUF base_model: Qwen/Qwen2.5-7B-Instruct inference: false model_creator: Qwen pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/Qwen2.5-7B-Instruct-GGUF](https://huggingface.co/MaziyarPanahi/Qwen2.5-7B-Instruct-GGUF) - Model creator: [Qwen](https://huggingface.co/Qwen) - Original model: [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) ## Description [MaziyarPanahi/Qwen2.5-7B-Instruct-GGUF](https://huggingface.co/MaziyarPanahi/Qwen2.5-7B-Instruct-GGUF) contains GGUF format model files for [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct). ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. ## Special thanks 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
MaziyarPanahi/Qwen2.5-1.5B-Instruct-GGUF
MaziyarPanahi
"2024-09-18T18:28:31Z"
959,433
1
null
[ "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "text-generation", "base_model:Qwen/Qwen2.5-1.5B-Instruct", "base_model:quantized:Qwen/Qwen2.5-1.5B-Instruct", "region:us", "imatrix", "conversational" ]
text-generation
"2024-09-18T18:20:28Z"
--- tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - text-generation - text-generation model_name: Qwen2.5-1.5B-Instruct-GGUF base_model: Qwen/Qwen2.5-1.5B-Instruct inference: false model_creator: Qwen pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/Qwen2.5-1.5B-Instruct-GGUF](https://huggingface.co/MaziyarPanahi/Qwen2.5-1.5B-Instruct-GGUF) - Model creator: [Qwen](https://huggingface.co/Qwen) - Original model: [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) ## Description [MaziyarPanahi/Qwen2.5-1.5B-Instruct-GGUF](https://huggingface.co/MaziyarPanahi/Qwen2.5-1.5B-Instruct-GGUF) contains GGUF format model files for [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct). ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. ## Special thanks 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
meta-llama/Llama-3.2-3B-Instruct
meta-llama
"2024-10-24T15:07:29Z"
957,296
578
transformers
[ "transformers", "safetensors", "llama", "text-generation", "facebook", "meta", "pytorch", "llama-3", "conversational", "en", "de", "fr", "it", "pt", "hi", "es", "th", "arxiv:2204.05149", "arxiv:2405.16406", "license:llama3.2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-09-18T15:19:20Z"
--- language: - en - de - fr - it - pt - hi - es - th library_name: transformers pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-3 license: llama3.2 extra_gated_prompt: >- ### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT Llama 3.2 Version Release Date: September 25, 2024 “Agreement” means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein. “Documentation” means the specifications, manuals and documentation accompanying Llama 3.2 distributed by Meta at https://llama.meta.com/doc/overview. “Licensee” or “you” means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf. “Llama 3.2” means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://www.llama.com/llama-downloads. “Llama Materials” means, collectively, Meta’s proprietary Llama 3.2 and Documentation (and any portion thereof) made available under this Agreement. “Meta” or “we” means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland). By clicking “I Accept” below or by using or distributing any portion or element of the Llama Materials, you agree to be bound by this Agreement. 1. License Rights and Redistribution. a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials. b. Redistribution and Use. i. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service (including another AI model) that contains any of them, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display “Built with Llama” on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials or any outputs or results of the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include “Llama” at the beginning of any such AI model name. ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you. iii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a “Notice” text file distributed as a part of such copies: “Llama 3.2 is licensed under the Llama 3.2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.” iv. 
Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://www.llama.com/llama3_2/use-policy), which is hereby incorporated by reference into this Agreement. 2. Additional Commercial Terms. If, on the Llama 3.2 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights. 3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS. 4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING. 5. Intellectual Property. a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use “Llama” (the “Mark”) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising out of your use of the Mark will inure to the benefit of Meta. b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications. c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials. 6. Term and Termination. 
The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement. 7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement. ### Llama 3.2 Acceptable Use Policy Meta is committed to promoting safe and fair use of its tools and features, including Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use Policy (“**Policy**”). The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy). #### Prohibited Uses We want everyone to use Llama 3.2 safely and responsibly. You agree you will not use, or allow others to use, Llama 3.2 to: 1. Violate the law or others’ rights, including to: 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as: 1. Violence or terrorism 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material 3. Human trafficking, exploitation, and sexual violence 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials. 5. Sexual solicitation 6. Any other criminal activity 1. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals 2. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services 3. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices 4. Collect, process, disclose, generate, or infer private or sensitive information about individuals, including information about individuals’ identity, health, or demographic information, unless you have obtained the right to do so in accordance with applicable law 5. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials 6. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system 7. Engage in any action, or facilitate any action, to intentionally circumvent or remove usage restrictions or other safety measures, or to enable functionality disabled by Meta  2. 
Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 3.2 related to the following: 8. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989 or the Chemical Weapons Convention Implementation Act of 1997 9. Guns and illegal weapons (including weapon development) 10. Illegal drugs and regulated/controlled substances 11. Operation of critical infrastructure, transportation technologies, or heavy machinery 12. Self-harm or harm to others, including suicide, cutting, and eating disorders 13. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual 3. Intentionally deceive or mislead others, including use of Llama 3.2 related to the following: 14. Generating, promoting, or furthering fraud or the creation or promotion of disinformation 15. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content 16. Generating, promoting, or further distributing spam 17. Impersonating another individual without consent, authorization, or legal right 18. Representing that the use of Llama 3.2 or outputs are human-generated 19. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement  4. Fail to appropriately disclose to end users any known dangers of your AI system 5. Interact with third party tools, models, or software designed to generate unlawful content or engage in unlawful or harmful conduct and/or represent that the outputs of such tools, models, or software are associated with Meta or Llama 3.2 With respect to any multimodal models included in Llama 3.2, the rights granted under Section 1(a) of the Llama 3.2 Community License Agreement are not being granted to you if you are an individual domiciled in, or a company with a principal place of business in, the European Union. This restriction does not apply to end users of a product or service that incorporates any such multimodal models. 
Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation of this Policy through one of the following means: * Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues&h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ) * Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback) * Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info) * Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama 3.2: LlamaUseReport@meta.com extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text Job title: type: select options: - Student - Research Graduate - AI researcher - AI developer/engineer - Reporter - Other geo: ip_location By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox extra_gated_description: >- The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit --- ## Model Information The Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks. **Model Developer:** Meta **Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. | | Training Data | Params | Input modalities | Output modalities | Context Length | GQA | Shared Embeddings | Token count | Knowledge cutoff | | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | | Llama 3.2 (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 128k | Yes | Yes | Up to 9T tokens | December 2023 | | | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | | | Llama 3.2 Quantized (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 8k | Yes | Yes | Up to 9T tokens | December 2023 | | | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | | **Supported Languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. 
Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly. **Llama 3.2 Model Family:** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date:** Sept 25, 2024 **Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety. **License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement). **Feedback:** Instructions on how to provide feedback or comments on the model can be found in the Llama Models [README](https://github.com/meta-llama/llama-models/blob/main/README.md). For more technical information about generation parameters and recipes for how to use Llama 3.2 in applications, please go [here](https://github.com/meta-llama/llama-recipes). ## Intended Use **Intended Use Cases:** Llama 3.2 is intended for commercial and research use in multiple languages. Instruction tuned text only models are intended for assistant-like chat and agentic applications like knowledge retrieval and summarization, mobile AI powered writing assistants and query and prompt rewriting. Pretrained models can be adapted for a variety of additional natural language generation tasks. Similarly, quantized models can be adapted for a variety of on-device use-cases with limited compute resources. **Out of Scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.2 Community License. Use in languages beyond those explicitly referenced as supported in this model card. ## How to use This repository contains two versions of Llama-3.2-3B-Instruct, for use with `transformers` and with the original `llama` codebase. ### Use with transformers Starting with `transformers >= 4.43.0` onward, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function. Make sure to update your transformers installation via `pip install --upgrade transformers`. ```python import torch from transformers import pipeline model_id = "meta-llama/Llama-3.2-3B-Instruct" pipe = pipeline( "text-generation", model=model_id, torch_dtype=torch.bfloat16, device_map="auto", ) messages = [ {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"}, {"role": "user", "content": "Who are you?"}, ] outputs = pipe( messages, max_new_tokens=256, ) print(outputs[0]["generated_text"][-1]) ``` Note: You can also find detailed recipes on how to use the model locally, with `torch.compile()`, assisted generations, quantised and more at [`huggingface-llama-recipes`](https://github.com/huggingface/huggingface-llama-recipes) ### Use with `llama` Please, follow the instructions in the [repository](https://github.com/meta-llama/llama) To download Original checkpoints, see the example command below leveraging `huggingface-cli`: ``` huggingface-cli download meta-llama/Llama-3.2-3B-Instruct --include "original/*" --local-dir Llama-3.2-3B-Instruct ``` ## Hardware and Software **Training Factors:** We used custom training libraries, Meta's custom built GPU cluster, and production infrastructure for pretraining. 
Fine-tuning, quantization, annotation, and evaluation were also performed on production infrastructure.

**Training Energy Use:** Training utilized a cumulative total of **916k** GPU hours of computation on H100-80GB (TDP of 700W) type hardware, per the table below. Training time is the total GPU time required for training each model, and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency.

**Training Greenhouse Gas Emissions:** Estimated total location-based greenhouse gas emissions were **240** tons CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with renewable energy; therefore, the total market-based greenhouse gas emissions for training were 0 tons CO2eq.

| | Training Time (GPU hours) | Logit Generation Time (GPU Hours) | Training Power Consumption (W) | Training Location-Based Greenhouse Gas Emissions (tons CO2eq) | Training Market-Based Greenhouse Gas Emissions (tons CO2eq) |
| :---- | :---: | ----- | :---: | :---: | :---: |
| Llama 3.2 1B | 370k | \- | 700 | 107 | 0 |
| Llama 3.2 3B | 460k | \- | 700 | 133 | 0 |
| Llama 3.2 1B SpinQuant | 1.7 | 0 | 700 | *Negligible*\*\* | 0 |
| Llama 3.2 3B SpinQuant | 2.4 | 0 | 700 | *Negligible*\*\* | 0 |
| Llama 3.2 1B QLora | 1.3k | 0 | 700 | 0.381 | 0 |
| Llama 3.2 3B QLora | 1.6k | 0 | 700 | 0.461 | 0 |
| Total | 833k | 86k | | 240 | 0 |

\*\* The location-based CO2e emissions of Llama 3.2 1B SpinQuant and Llama 3.2 3B SpinQuant are less than 0.001 metric tonnes each. This is due to the minimal training GPU hours that are required.

The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others.

## Training Data

**Overview:** Llama 3.2 was pretrained on up to 9 trillion tokens of data from publicly available sources. For the 1B and 3B Llama 3.2 models, we incorporated logits from the Llama 3.1 8B and 70B models into the pretraining stage of the model development, where outputs (logits) from these larger models were used as token-level targets. Knowledge distillation was used after pruning to recover performance. In post-training we used a similar recipe as Llama 3.1 and produced final chat models by doing several rounds of alignment on top of the pre-trained model. Each round involved Supervised Fine-Tuning (SFT), Rejection Sampling (RS), and Direct Preference Optimization (DPO).

**Data Freshness:** The pretraining data has a cutoff of December 2023.

## Quantization

### Quantization Scheme

We designed the current quantization scheme with [PyTorch's ExecuTorch](https://github.com/pytorch/executorch) inference framework and Arm CPU backend in mind, taking into account metrics including model quality, prefill/decoding speed, and memory footprint. Our quantization scheme involves three parts:

- All linear layers in all transformer blocks are quantized to a 4-bit groupwise scheme (with a group size of 32) for weights and 8-bit per-token dynamic quantization for activations (see the sketch below).
- The classification layer is quantized to 8-bit per-channel for weights and 8-bit per-token dynamic quantization for activations.
- Similar to the classification layer, 8-bit per-channel quantization is used for the embedding layer.
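To make the 4-bit groupwise weight scheme concrete, here is a minimal, self-contained sketch (illustrative only; the released checkpoints use ExecuTorch's quantized kernels, and the symmetric rounding details here are assumptions):

```python
import torch

def quantize_4bit_groupwise(w: torch.Tensor, group_size: int = 32):
    # Split the weights into groups of 32 and compute one scale per group.
    groups = w.reshape(-1, group_size)
    scales = groups.abs().amax(dim=1, keepdim=True).clamp(min=1e-8) / 7.0  # int4 range is [-8, 7]
    q = torch.clamp(torch.round(groups / scales), -8, 7)
    return q.to(torch.int8), scales  # int4 values carried in int8 storage here

def dequantize_4bit_groupwise(q: torch.Tensor, scales: torch.Tensor) -> torch.Tensor:
    return (q.float() * scales).reshape(-1)

w = torch.randn(4096)
q, s = quantize_4bit_groupwise(w)
print((dequantize_4bit_groupwise(q, s) - w).abs().max())  # worst-case quantization error
```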
### Quantization-Aware Training and LoRA

The quantization-aware training (QAT) with low-rank adaptation (LoRA) models went through only post-training stages, using the same data as the full-precision models. To initialize QAT, we utilize BF16 Llama 3.2 model checkpoints obtained after supervised fine-tuning (SFT) and perform an additional full round of SFT training with QAT. We then freeze the backbone of the QAT model and perform another round of SFT with LoRA adaptors applied to all layers within the transformer block. Meanwhile, the LoRA adaptors' weights and activations are maintained in BF16. Because our approach is similar to QLoRA of Dettmers et al. (2023) (i.e., quantization followed by LoRA adapters), we refer to this method as QLoRA. Finally, we fine-tune the resulting model (both backbone and LoRA adaptors) using direct preference optimization (DPO).

### SpinQuant

[SpinQuant](https://arxiv.org/abs/2405.16406) was applied, together with generative post-training quantization (GPTQ). For the SpinQuant rotation matrix fine-tuning, we optimized for 100 iterations, using 800 samples with sequence-length 2048 from the WikiText 2 dataset. For GPTQ, we used 128 samples from the same dataset with the same sequence-length.

## Benchmarks \- English Text

In this section, we report the results for Llama 3.2 models on standard automatic benchmarks. For all these evaluations, we used our internal evaluations library.

### Base Pretrained Models

| Category | Benchmark | \# Shots | Metric | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B |
| ----- | ----- | :---: | :---: | :---: | :---: | :---: |
| General | MMLU | 5 | macro\_avg/acc\_char | 32.2 | 58 | 66.7 |
| | AGIEval English | 3-5 | average/acc\_char | 23.3 | 39.2 | 47.8 |
| | ARC-Challenge | 25 | acc\_char | 32.8 | 69.1 | 79.7 |
| Reading comprehension | SQuAD | 1 | em | 49.2 | 67.7 | 77 |
| | QuAC (F1) | 1 | f1 | 37.9 | 42.9 | 44.9 |
| | DROP (F1) | 3 | f1 | 28.0 | 45.2 | 59.5 |
| Long Context | Needle in Haystack | 0 | em | 96.8 | 1 | 1 |

### Instruction Tuned Models

| Capability | | Benchmark | \# Shots | Metric | Llama 3.2 1B bf16 | Llama 3.2 1B Vanilla PTQ\*\* | Llama 3.2 1B Spin Quant | Llama 3.2 1B QLoRA | Llama 3.2 3B bf16 | Llama 3.2 3B Vanilla PTQ\*\* | Llama 3.2 3B Spin Quant | Llama 3.2 3B QLoRA | Llama 3.1 8B |
| :---: | ----- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| General | | MMLU | 5 | macro\_avg/acc | 49.3 | 43.3 | 47.3 | 49.0 | 63.4 | 60.5 | 62 | 62.4 | 69.4 |
| Re-writing | | Open-rewrite eval | 0 | micro\_avg/rougeL | 41.6 | 39.2 | 40.9 | 41.2 | 40.1 | 40.3 | 40.8 | 40.7 | 40.9 |
| Summarization | | TLDR9+ (test) | 1 | rougeL | 16.8 | 14.9 | 16.7 | 16.8 | 19.0 | 19.1 | 19.2 | 19.1 | 17.2 |
| Instruction following | | IFEval | 0 | Avg(Prompt/Instruction acc Loose/Strict) | 59.5 | 51.5 | 58.4 | 55.6 | 77.4 | 73.9 | 73.5 | 75.9 | 80.4 |
| Math | | GSM8K (CoT) | 8 | em\_maj1@1 | 44.4 | 33.1 | 40.6 | 46.5 | 77.7 | 72.9 | 75.7 | 77.9 | 84.5 |
| | | MATH (CoT) | 0 | final\_em | 30.6 | 20.5 | 25.3 | 31.0 | 48.0 | 44.2 | 45.3 | 49.2 | 51.9 |
| Reasoning | | ARC-C | 0 | acc | 59.4 | 54.3 | 57 | 60.7 | 78.6 | 75.6 | 77.6 | 77.6 | 83.4 |
| | | GPQA | 0 | acc | 27.2 | 25.9 | 26.3 | 25.9 | 32.8 | 32.8 | 31.7 | 33.9 | 32.8 |
| | | Hellaswag | 0 | acc | 41.2 | 38.1 | 41.3 | 41.5 | 69.8 | 66.3 | 68 | 66.3 | 78.7 |
| Tool Use | | BFCL V2 | 0 | acc | 25.7 | 14.3 | 15.9 | 23.7 | 67.0 | 53.4 | 60.1 | 63.5 | 67.1 |
| | | Nexus | 0 | macro\_avg/acc | 13.5 | 5.2 | 9.6 | 12.5 | 34.3 | 32.4 | 31.5 | 30.1 | 38.5 |
| Long Context | | InfiniteBench/En.QA | 0 | longbook\_qa/f1 | 20.3 | N/A | N/A | N/A | 19.8 | N/A | N/A | N/A | 27.3 |
| | | InfiniteBench/En.MC | 0 | longbook\_choice/acc | 38.0 | N/A | N/A | N/A | 63.3 | N/A | N/A | N/A | 72.2 |
| | | NIH/Multi-needle | 0 | recall | 75.0 | N/A | N/A | N/A | 84.7 | N/A | N/A | N/A | 98.8 |
| Multilingual | | MGSM (CoT) | 0 | em | 24.5 | 13.7 | 18.2 | 24.4 | 58.2 | 48.9 | 54.3 | 56.8 | 68.9 |

\*\*for comparison purposes only. Model not released.

### Multilingual Benchmarks

| Category | Benchmark | Language | Llama 3.2 1B | Llama 3.2 1B Vanilla PTQ\*\* | Llama 3.2 1B Spin Quant | Llama 3.2 1B QLoRA | Llama 3.2 3B | Llama 3.2 3B Vanilla PTQ\*\* | Llama 3.2 3B Spin Quant | Llama 3.2 3B QLoRA | Llama 3.1 8B |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| General | MMLU (5-shot, macro_avg/acc) | Portuguese | 39.8 | 34.9 | 38.9 | 40.2 | 54.5 | 50.9 | 53.3 | 53.4 | 62.1 |
| | | Spanish | 41.5 | 36.0 | 39.8 | 41.8 | 55.1 | 51.9 | 53.6 | 53.6 | 62.5 |
| | | Italian | 39.8 | 34.9 | 38.1 | 40.6 | 53.8 | 49.9 | 52.1 | 51.7 | 61.6 |
| | | German | 39.2 | 34.9 | 37.5 | 39.6 | 53.3 | 50.0 | 52.2 | 51.3 | 60.6 |
| | | French | 40.5 | 34.8 | 39.2 | 40.8 | 54.6 | 51.2 | 53.3 | 53.3 | 62.3 |
| | | Hindi | 33.5 | 30.0 | 32.1 | 34.0 | 43.3 | 40.4 | 42.0 | 42.1 | 50.9 |
| | | Thai | 34.7 | 31.2 | 32.4 | 34.9 | 44.5 | 41.3 | 44.0 | 42.2 | 50.3 |

\*\*for comparison purposes only. Model not released.

## Inference time

In the table below, we compare the performance metrics of different quantization methods (SpinQuant and QAT \+ LoRA) with the BF16 baseline. The evaluation was done using the [ExecuTorch](https://github.com/pytorch/executorch) framework as the inference engine, with the Arm CPU as the backend, on an Android OnePlus 12 device.

| Category | Decode (tokens/sec) | Time-to-first-token (sec) | Prefill (tokens/sec) | Model size (PTE file size in MB) | Memory size (RSS in MB) |
| :---- | ----- | ----- | ----- | ----- | ----- |
| 1B BF16 (baseline) | 19.2 | 1.0 | 60.3 | 2358 | 3,185 |
| 1B SpinQuant | 50.2 (2.6x) | 0.3 (-76.9%) | 260.5 (4.3x) | 1083 (-54.1%) | 1,921 (-39.7%) |
| 1B QLoRA | 45.8 (2.4x) | 0.3 (-76.0%) | 252.0 (4.2x) | 1127 (-52.2%) | 2,255 (-29.2%) |
| 3B BF16 (baseline) | 7.6 | 3.0 | 21.2 | 6129 | 7,419 |
| 3B SpinQuant | 19.7 (2.6x) | 0.7 (-76.4%) | 89.7 (4.2x) | 2435 (-60.3%) | 3,726 (-49.8%) |
| 3B QLoRA | 18.5 (2.4x) | 0.7 (-76.1%) | 88.8 (4.2x) | 2529 (-58.7%) | 4,060 (-45.3%) |

(\*) The performance measurement is done using an adb binary-based approach.
(\*\*) It is measured on an Android OnePlus 12 device.
(\*\*\*) Time-to-first-token (TTFT) is measured with prompt length=64

*Footnote:*

- *Decode (tokens/second) measures how quickly the model keeps generating. Higher is better.*
- *Time-to-first-token (TTFT for shorthand) measures how fast the model generates the first token for a given prompt. Lower is better.*
- *Prefill is the inverse of TTFT (i.e., 1/TTFT) in tokens/second. Higher is better.*
- *Model size \- how big the model is, measured by PTE file size (a binary file format for ExecuTorch).*
- *RSS size \- memory usage in resident set size (RSS).*

## Responsibility & Safety

As part of our responsible release approach, we followed a three-pronged strategy for managing trust & safety risks:

1. Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama
2. Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm
3. Provide protections for the community to help prevent the misuse of our models

### Responsible Deployment

**Approach:** Llama is a foundational technology designed to be used in a variety of use cases. Examples of how Meta's Llama models have been responsibly deployed can be found in our [Community Stories webpage](https://llama.meta.com/community-stories/). Our approach is to build the most helpful models, enabling the world to benefit from the technology's power, by aligning our model safety for generic use cases and addressing a standard set of harms. Developers are then in the driver's seat to tailor safety for their use cases, defining their own policies and deploying the models with the necessary safeguards in their Llama systems. Llama 3.2 was developed following the best practices outlined in our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/).

#### Llama 3.2 Instruct

**Objective:** Our main objectives for conducting safety fine-tuning are to provide the research community with a valuable resource for studying the robustness of safety fine-tuning, as well as to offer developers a readily available, safe, and powerful model for various applications to reduce the developer workload to deploy safe AI systems. We implemented the same set of safety mitigations as in Llama 3, and you can learn more about these in the Llama 3 [paper](https://ai.meta.com/research/publications/the-llama-3-herd-of-models/).

**Fine-Tuning Data:** We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We've developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control.

**Refusals and Tone:** Building on the work we started with Llama 3, we put a great emphasis on model refusals to benign prompts as well as refusal tone. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines.

#### Llama 3.2 Systems

**Safety as a System:** Large language models, including Llama 3.2, **are not designed to be deployed in isolation** but instead should be deployed as part of an overall AI system with additional safety guardrails as required. Developers are expected to deploy system safeguards when building agentic systems. Safeguards are key to achieving the right helpfulness-safety alignment as well as to mitigating safety and security risks inherent to the system and any integration of the model or system with external tools. As part of our responsible release approach, we provide the community with [safeguards](https://llama.meta.com/trust-and-safety/) that developers should deploy with Llama models or other LLMs, including Llama Guard, Prompt Guard and Code Shield. All our [reference implementation](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default so developers can benefit from system-level safety out-of-the-box.

### New Capabilities and Use Cases

**Technological Advancement:** Llama releases usually introduce new capabilities that require specific considerations in addition to the best practices that generally apply across all Generative AI use cases.
For prior release capabilities also supported by Llama 3.2, see the [Llama 3.1 Model Card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md), as the same considerations apply here as well.

**Constrained Environments:** Llama 3.2 1B and 3B models are expected to be deployed in highly constrained environments, such as mobile devices. LLM systems using smaller models will have a different alignment profile and safety/helpfulness tradeoff than more complex, larger systems. Developers should ensure the safety of their system meets the requirements of their use case. We recommend using lighter system safeguards for such use cases, like Llama Guard 3-1B or its mobile-optimized version.

### Evaluations

**Scaled Evaluations:** We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Purple Llama safeguards to filter input prompts and output responses. It is important to evaluate applications in context, and we recommend building dedicated evaluation datasets for your use case.

**Red Teaming:** We conducted recurring red teaming exercises with the goal of discovering risks via adversarial prompting, and we used the learnings to improve our benchmarks and safety tuning datasets. We partnered early with subject-matter experts in critical risk areas to understand the nature of these real-world harms and how such models may lead to unintended harm for society. Based on these conversations, we derived a set of adversarial goals for the red team to attempt to achieve, such as extracting harmful information or reprogramming the model to act in a potentially harmful capacity. The red team consisted of experts in cybersecurity, adversarial machine learning, responsible AI, and integrity, in addition to multilingual content specialists with backgrounds in integrity issues in specific geographic markets.

### Critical Risks

In addition to our safety work above, we took extra care on measuring and/or mitigating the following critical risk areas:

**1\. CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive Weapons):** Llama 3.2 1B and 3B models are smaller and less capable derivatives of Llama 3.1. For Llama 3.1 70B and 405B, to assess risks related to proliferation of chemical and biological weapons, we performed uplift testing designed to assess whether use of Llama 3.1 models could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons, and we determined that such testing also applies to the smaller 1B and 3B models.

**2\. Child Safety:** Child Safety risk assessments were conducted using a team of experts to assess the model's capability to produce outputs that could result in Child Safety risks, and to inform any necessary and appropriate risk mitigations via fine-tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective-based methodologies to assess the model risks along multiple attack vectors, including the additional languages Llama 3 is trained on. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market-specific nuances or experiences.

**3\. Cyber Attacks:** For Llama 3.1 405B, our cyber attack uplift study investigated whether LLMs can enhance human capabilities in hacking tasks, both in terms of skill level and speed.
Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks. This evaluation was distinct from previous studies that considered LLMs as interactive assistants. The primary objective was to assess whether these models could effectively function as independent agents in executing complex cyber-attacks without human intervention. Because Llama 3.2's 1B and 3B models are smaller and less capable models than Llama 3.1 405B, we broadly believe that the testing conducted for the 405B model also applies to Llama 3.2 models.

### Community

**Industry Partnerships:** Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama).

**Grants:** We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta's Llama model for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists).

**Reporting:** Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.

## Ethical Considerations and Limitations

**Values:** The core values of Llama 3.2 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3.2 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.

**Testing:** Llama 3.2 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3.2's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.2 models, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development.
autogluon/chronos-t5-tiny
autogluon
"2024-05-13T21:09:18Z"
956,811
8
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "time series", "forecasting", "pretrained models", "foundation models", "time series foundation models", "time-series", "time-series-forecasting", "arxiv:2403.07815", "arxiv:1910.10683", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
time-series-forecasting
"2024-05-14T15:53:45Z"
---
license: apache-2.0
pipeline_tag: time-series-forecasting
tags:
- time series
- forecasting
- pretrained models
- foundation models
- time series foundation models
- time-series
---

# Chronos-T5 (Tiny)

Chronos is a family of **pretrained time series forecasting models** based on language model architectures. A time series is transformed into a sequence of tokens via scaling and quantization, and a language model is trained on these tokens using the cross-entropy loss. Once trained, probabilistic forecasts are obtained by sampling multiple future trajectories given the historical context. Chronos models have been trained on a large corpus of publicly available time series data, as well as synthetic data generated using Gaussian processes.

For details on Chronos models, training data and procedures, and experimental results, please refer to the paper [Chronos: Learning the Language of Time Series](https://arxiv.org/abs/2403.07815).

<p align="center">
  <img src="figures/main-figure.png" width="100%">
  <br />
  <span>
    Fig. 1: High-level depiction of Chronos. (<b>Left</b>) The input time series is scaled and quantized to obtain a sequence of tokens. (<b>Center</b>) The tokens are fed into a language model which may either be an encoder-decoder or a decoder-only model. The model is trained using the cross-entropy loss. (<b>Right</b>) During inference, we autoregressively sample tokens from the model and map them back to numerical values. Multiple trajectories are sampled to obtain a predictive distribution.
  </span>
</p>

---

## Architecture

The models in this repository are based on the [T5 architecture](https://arxiv.org/abs/1910.10683). The only difference is in the vocabulary size: Chronos-T5 models use 4096 different tokens, compared to the 32128 of the original T5 models, resulting in fewer parameters.
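As a rough illustration of the scaling-and-quantization step described above, here is a minimal sketch (the bin count, value range, and rounding are assumptions for illustration; the actual Chronos tokenizer in the `chronos` package differs in its details):

```python
import torch

def tokenize_series(series: torch.Tensor, n_bins: int = 4094, low: float = -15.0, high: float = 15.0):
    # Mean scaling: normalize by the mean absolute value of the context.
    scale = series.abs().mean().clamp(min=1e-8)
    scaled = series / scale
    # Uniform quantization into n_bins buckets over [low, high].
    edges = torch.linspace(low, high, n_bins - 1)
    return torch.bucketize(scaled, edges), scale

def detokenize(tokens: torch.Tensor, scale: torch.Tensor, n_bins: int = 4094, low: float = -15.0, high: float = 15.0):
    # Map each token back to its bin center and undo the scaling.
    centers = torch.linspace(low, high, n_bins)
    return centers[tokens] * scale

tokens, scale = tokenize_series(torch.arange(24, dtype=torch.float32))
print(tokens[:5], detokenize(tokens, scale)[:5])
```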
| Model | Parameters | Based on |
| ---------------------------------------------------------------------- | ---------- | ----------------------------------------------------------------------- |
| [**chronos-t5-tiny**](https://huggingface.co/amazon/chronos-t5-tiny) | 8M | [t5-efficient-tiny](https://huggingface.co/google/t5-efficient-tiny) |
| [**chronos-t5-mini**](https://huggingface.co/amazon/chronos-t5-mini) | 20M | [t5-efficient-mini](https://huggingface.co/google/t5-efficient-mini) |
| [**chronos-t5-small**](https://huggingface.co/amazon/chronos-t5-small) | 46M | [t5-efficient-small](https://huggingface.co/google/t5-efficient-small) |
| [**chronos-t5-base**](https://huggingface.co/amazon/chronos-t5-base) | 200M | [t5-efficient-base](https://huggingface.co/google/t5-efficient-base) |
| [**chronos-t5-large**](https://huggingface.co/amazon/chronos-t5-large) | 710M | [t5-efficient-large](https://huggingface.co/google/t5-efficient-large) |

## Usage

To perform inference with Chronos models, install the package from the GitHub [companion repo](https://github.com/amazon-science/chronos-forecasting) by running:

```
pip install git+https://github.com/amazon-science/chronos-forecasting.git
```

A minimal example showing how to perform inference using Chronos models:

```python
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import torch
from chronos import ChronosPipeline

pipeline = ChronosPipeline.from_pretrained(
    "amazon/chronos-t5-tiny",
    device_map="cuda",
    torch_dtype=torch.bfloat16,
)

df = pd.read_csv("https://raw.githubusercontent.com/AileenNielsen/TimeSeriesAnalysisWithPython/master/data/AirPassengers.csv")

# context must be either a 1D tensor, a list of 1D tensors,
# or a left-padded 2D tensor with batch as the first dimension
context = torch.tensor(df["#Passengers"])
prediction_length = 12
forecast = pipeline.predict(context, prediction_length)  # shape [num_series, num_samples, prediction_length]

# visualize the forecast
forecast_index = range(len(df), len(df) + prediction_length)
low, median, high = np.quantile(forecast[0].numpy(), [0.1, 0.5, 0.9], axis=0)

plt.figure(figsize=(8, 4))
plt.plot(df["#Passengers"], color="royalblue", label="historical data")
plt.plot(forecast_index, median, color="tomato", label="median forecast")
plt.fill_between(forecast_index, low, high, color="tomato", alpha=0.3, label="80% prediction interval")
plt.legend()
plt.grid()
plt.show()
```

## Citation

If you find Chronos models useful for your research, please consider citing the associated [paper](https://arxiv.org/abs/2403.07815):

```
@article{ansari2024chronos,
  author  = {Ansari, Abdul Fatir and Stella, Lorenzo and Turkmen, Caner and Zhang, Xiyuan and Mercado, Pedro and Shen, Huibin and Shchur, Oleksandr and Rangapuram, Syama Sundar and Pineda Arango, Sebastian and Kapoor, Shubham and Zschiegner, Jasper and Maddix, Danielle C. and Mahoney, Michael W. and Torkkola, Kari and Gordon Wilson, Andrew and Bohlke-Schneider, Michael and Wang, Yuyang},
  title   = {Chronos: Learning the Language of Time Series},
  journal = {arXiv preprint arXiv:2403.07815},
  year    = {2024}
}
```

## Security

See [CONTRIBUTING](CONTRIBUTING.md#security-issue-notifications) for more information.

## License

This project is licensed under the Apache-2.0 License.
obi/deid_roberta_i2b2
obi
"2022-08-22T13:28:26Z"
955,248
29
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "deidentification", "medical notes", "ehr", "phi", "en", "dataset:I2B2", "arxiv:1907.11692", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2022-03-02T23:29:05Z"
---
language:
- en
thumbnail: "https://www.onebraveidea.org/wp-content/uploads/2019/07/OBI-Logo-Website.png"
tags:
- deidentification
- medical notes
- ehr
- phi
datasets:
- I2B2
metrics:
- F1
- Recall
- Precision
widget:
- text: "Physician Discharge Summary Admit date: 10/12/1982 Discharge date: 10/22/1982 Patient Information Jack Reacher, 54 y.o. male (DOB = 1/21/1928)."
- text: "Home Address: 123 Park Drive, San Diego, CA, 03245. Home Phone: 202-555-0199 (home)."
- text: "Hospital Care Team Service: Orthopedics Inpatient Attending: Roger C Kelly, MD Attending phys phone: (634)743-5135 Discharge Unit: HCS843 Primary Care Physician: Hassan V Kim, MD 512-832-5025."
license: mit
---

# Model Description

* A RoBERTa [[Liu et al., 2019]](https://arxiv.org/pdf/1907.11692.pdf) model fine-tuned for de-identification of medical notes.
* Sequence Labeling (token classification): The model was trained to predict protected health information (PHI/PII) entities (spans). A list of protected health information categories is given by [HIPAA](https://www.hhs.gov/hipaa/for-professionals/privacy/laws-regulations/index.html).
* A token can either be classified as non-PHI or as one of the 11 PHI types. Token predictions are aggregated to spans by making use of BILOU tagging.
* The PHI labels that were used for training and other details can be found here: [Annotation Guidelines](https://github.com/obi-ml-public/ehr_deidentification/blob/master/AnnotationGuidelines.md)
* More details on how to use this model, the format of the data, and other useful information are available in the GitHub repo: [Robust DeID](https://github.com/obi-ml-public/ehr_deidentification).

# How to use

* A demo of how the model works (using model predictions to de-identify a medical note) is on this space: [Medical-Note-Deidentification](https://huggingface.co/spaces/obi/Medical-Note-Deidentification).
* Steps on how this model can be used to run a forward pass can be found here: [Forward Pass](https://github.com/obi-ml-public/ehr_deidentification/tree/master/steps/forward_pass)
* In brief, the steps are (see also the sketch at the end of this card):
  * Sentencize (the model aggregates the sentences back to the note level) and tokenize the dataset.
  * Use the predict function of this model to gather the predictions (i.e., predictions for each token).
  * Additionally, the model predictions can be used to remove PHI from the original note/text.

# Dataset

* The I2B2 2014 [[Stubbs and Uzuner, 2015]](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4978170/) dataset was used to train this model.

| | I2B2 | | I2B2 | |
| --------- | --------------------- | ---------- | -------------------- | ---------- |
| | TRAIN SET - 790 NOTES | | TEST SET - 514 NOTES | |
| PHI LABEL | COUNT | PERCENTAGE | COUNT | PERCENTAGE |
| DATE | 7502 | 43.69 | 4980 | 44.14 |
| STAFF | 3149 | 18.34 | 2004 | 17.76 |
| HOSP | 1437 | 8.37 | 875 | 7.76 |
| AGE | 1233 | 7.18 | 764 | 6.77 |
| LOC | 1206 | 7.02 | 856 | 7.59 |
| PATIENT | 1316 | 7.66 | 879 | 7.79 |
| PHONE | 317 | 1.85 | 217 | 1.92 |
| ID | 881 | 5.13 | 625 | 5.54 |
| PATORG | 124 | 0.72 | 82 | 0.73 |
| EMAIL | 4 | 0.02 | 1 | 0.01 |
| OTHERPHI | 2 | 0.01 | 0 | 0 |
| TOTAL | 17171 | 100 | 11283 | 100 |

# Training procedure

* Steps on how this model was trained can be found here: [Training](https://github.com/obi-ml-public/ehr_deidentification/tree/master/steps/train). The "model_name_or_path" was set to "roberta-large".
* The dataset was sentencized with the en_core_sci_sm sentencizer from spacy.
* The dataset was then tokenized with a custom tokenizer built on top of the en_core_sci_sm tokenizer from spacy.
* For each sentence we added 32 tokens on the left (from previous sentences) and 32 tokens on the right (from the next sentences).
* The added tokens are not used for learning - i.e., the loss is not computed on these tokens - they are used as additional context.
* Each sequence contained a maximum of 128 tokens (including the 32 tokens added on). Longer sequences were split.
* The sentencized and tokenized dataset with the token-level labels based on the BILOU notation was used to train the model.
* The model is fine-tuned from a pre-trained RoBERTa model.
* Training details:
  * Input sequence length: 128
  * Batch size: 32 (16 with 2 gradient accumulation steps)
  * Optimizer: AdamW
  * Learning rate: 5e-5
  * Dropout: 0.1

## Results

# Questions?

Post a Github issue on the repo: [Robust DeID](https://github.com/obi-ml-public/ehr_deidentification).
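As a rough sketch of the prediction step referenced above (this bypasses the repo's own sentencization and span-aggregation logic, so treat it as illustrative only):

```python
from transformers import pipeline

# aggregation_strategy="simple" merges per-token tags into entity spans;
# the repo's tooling does this more carefully using the BILOU scheme.
deid = pipeline(
    "token-classification",
    model="obi/deid_roberta_i2b2",
    aggregation_strategy="simple",
)
note = "Patient Jack Reacher, DOB = 1/21/1928, seen on 10/12/1982."
for entity in deid(note):
    print(entity["entity_group"], entity["word"], float(entity["score"]))
```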
ntu-spml/distilhubert
ntu-spml
"2023-07-24T18:30:45Z"
951,505
29
transformers
[ "transformers", "pytorch", "safetensors", "hubert", "feature-extraction", "speech", "en", "dataset:librispeech_asr", "arxiv:2110.01900", "license:apache-2.0", "endpoints_compatible", "region:us" ]
feature-extraction
"2022-03-02T23:29:05Z"
---
language: en
datasets:
- librispeech_asr
tags:
- speech
license: apache-2.0
---

# DistilHuBERT

[DistilHuBERT by NTU Speech Processing & Machine Learning Lab](https://github.com/s3prl/s3prl/tree/master/s3prl/upstream/distiller)

The base model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.

**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for a more detailed explanation of how to fine-tune the model.

Paper: [DistilHuBERT: Speech Representation Learning by Layer-wise Distillation of Hidden-unit BERT](https://arxiv.org/abs/2110.01900)

Authors: Heng-Jui Chang, Shu-wen Yang, Hung-yi Lee

**Abstract**
Self-supervised speech representation learning methods like wav2vec 2.0 and Hidden-unit BERT (HuBERT) leverage unlabeled speech data for pre-training and offer good representations for numerous speech processing tasks. Despite the success of these methods, they require large memory and high pre-training costs, making them inaccessible for researchers in academia and small companies. Therefore, this paper introduces DistilHuBERT, a novel multi-task learning framework to distill hidden representations from a HuBERT model directly. This method reduces HuBERT's size by 75% and makes it 73% faster, while retaining most performance in ten different tasks. Moreover, DistilHuBERT required little training time and data, opening the possibilities of pre-training personal and on-device SSL models for speech.

The original model can be found under https://github.com/s3prl/s3prl/tree/master/s3prl/upstream/distiller .

# Usage

See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model. Note that the class `Wav2Vec2ForCTC` has to be replaced by `HubertForCTC`.
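For the model's native feature-extraction use, a minimal sketch (standard `transformers` usage assumed, not taken from this card; replace the dummy waveform with real audio):

```python
import numpy as np
import torch
from transformers import AutoFeatureExtractor, HubertModel

feature_extractor = AutoFeatureExtractor.from_pretrained("ntu-spml/distilhubert")
model = HubertModel.from_pretrained("ntu-spml/distilhubert")

waveform = np.zeros(16000, dtype=np.float32)  # 1 s of silence at 16 kHz
inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    features = model(**inputs).last_hidden_state  # (batch, frames, hidden_size)
print(features.shape)
```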
briaai/RMBG-1.4
briaai
"2024-05-23T17:06:42Z"
947,799
1,618
transformers
[ "transformers", "pytorch", "onnx", "safetensors", "SegformerForSemanticSegmentation", "image-segmentation", "remove background", "background", "background-removal", "Pytorch", "vision", "legal liability", "custom_code", "license:other", "region:us" ]
image-segmentation
"2023-12-12T19:52:35Z"
---
license: other
license_name: bria-rmbg-1.4
license_link: https://bria.ai/bria-huggingface-model-license-agreement/
pipeline_tag: image-segmentation
tags:
- remove background
- background
- background-removal
- Pytorch
- vision
- legal liability
- transformers
extra_gated_description: RMBG v1.4 is available as a source-available model for non-commercial use
extra_gated_heading: "Fill in this form to get instant access"
extra_gated_fields:
  Name: text
  Company/Org name: text
  Org Type (Early/Growth Startup, Enterprise, Academy): text
  Role: text
  Country: text
  Email: text
  By submitting this form, I agree to BRIA’s Privacy policy and Terms & conditions, see links below: checkbox
---

# BRIA Background Removal v1.4 Model Card

RMBG v1.4 is our state-of-the-art background removal model, designed to effectively separate foreground from background in a range of categories and image types. This model has been trained on a carefully selected dataset, which includes general stock images, e-commerce, gaming, and advertising content, making it suitable for commercial use cases powering enterprise content creation at scale. Its accuracy, efficiency, and versatility currently rival leading source-available models. It is ideal where content safety, legally licensed datasets, and bias mitigation are paramount.

Developed by BRIA AI, RMBG v1.4 is available as a source-available model for non-commercial use.

[CLICK HERE FOR A DEMO](https://huggingface.co/spaces/briaai/BRIA-RMBG-1.4)

![examples](t4.png)

### Model Description

- **Developed by:** [BRIA AI](https://bria.ai/)
- **Model type:** Background Removal
- **License:** [bria-rmbg-1.4](https://bria.ai/bria-huggingface-model-license-agreement/)
  - The model is released under a Creative Commons license for non-commercial use.
  - Commercial use is subject to a commercial agreement with BRIA. [Contact Us](https://bria.ai/contact-us) for more information.
- **Model Description:** BRIA RMBG 1.4 is a saliency segmentation model trained exclusively on a professional-grade dataset.
- **BRIA:** Resources for more information: [BRIA AI](https://bria.ai/)

## Training data

The BRIA-RMBG model was trained with over 12,000 high-quality, high-resolution, manually labeled (pixel-wise accuracy), fully licensed images. Our benchmark included balanced gender, balanced ethnicity, and people with different types of disabilities. For clarity, we provide our data distribution according to different categories, demonstrating our model’s versatility.

### Distribution of images:

| Category | Distribution |
| -----------------------------------| -----------------------------------:|
| Objects only | 45.11% |
| People with objects/animals | 25.24% |
| People only | 17.35% |
| People/objects/animals with text | 8.52% |
| Text only | 2.52% |
| Animals only | 1.89% |

| Category | Distribution |
| -----------------------------------| -----------------------------------------:|
| Photorealistic | 87.70% |
| Non-Photorealistic | 12.30% |

| Category | Distribution |
| -----------------------------------| -----------------------------------:|
| Non Solid Background | 52.05% |
| Solid Background | 47.95% |

| Category | Distribution |
| -----------------------------------| -----------------------------------:|
| Single main foreground object | 51.42% |
| Multiple objects in the foreground | 48.58% |

## Qualitative Evaluation

![examples](results.png)

## Architecture

RMBG v1.4 is developed on [IS-Net](https://github.com/xuebinqin/DIS), enhanced with our unique training scheme and proprietary dataset.
These modifications significantly improve the model’s accuracy and effectiveness in diverse image-processing scenarios.

## Installation

```bash
pip install -qr https://huggingface.co/briaai/RMBG-1.4/resolve/main/requirements.txt
```

## Usage

Either load the pipeline

```python
from transformers import pipeline

image_path = "https://farm5.staticflickr.com/4007/4322154488_997e69e4cf_z.jpg"
pipe = pipeline("image-segmentation", model="briaai/RMBG-1.4", trust_remote_code=True)
pillow_mask = pipe(image_path, return_mask=True)  # outputs a pillow mask
pillow_image = pipe(image_path)  # applies mask on input and returns a pillow image
```

Or load the model

```python
import numpy as np
import torch
import torch.nn.functional as F
from PIL import Image
from skimage import io
from torchvision.transforms.functional import normalize
from transformers import AutoModelForImageSegmentation

model = AutoModelForImageSegmentation.from_pretrained("briaai/RMBG-1.4", trust_remote_code=True)

def preprocess_image(im: np.ndarray, model_input_size: list) -> torch.Tensor:
    # add a channel dimension to grayscale images
    if len(im.shape) < 3:
        im = im[:, :, np.newaxis]
    im_tensor = torch.tensor(im, dtype=torch.float32).permute(2, 0, 1)
    im_tensor = F.interpolate(torch.unsqueeze(im_tensor, 0), size=model_input_size, mode='bilinear')
    image = torch.divide(im_tensor, 255.0)
    image = normalize(image, [0.5, 0.5, 0.5], [1.0, 1.0, 1.0])
    return image

def postprocess_image(result: torch.Tensor, im_size: list) -> np.ndarray:
    # resize the mask back to the original image size and rescale to [0, 255]
    result = torch.squeeze(F.interpolate(result, size=im_size, mode='bilinear'), 0)
    ma = torch.max(result)
    mi = torch.min(result)
    result = (result - mi) / (ma - mi)
    im_array = (result * 255).permute(1, 2, 0).cpu().data.numpy().astype(np.uint8)
    im_array = np.squeeze(im_array)
    return im_array

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model.to(device)

# prepare input
image_path = "https://farm5.staticflickr.com/4007/4322154488_997e69e4cf_z.jpg"
orig_im = io.imread(image_path)
orig_im_size = orig_im.shape[0:2]
model_input_size = [1024, 1024]  # resolution the model expects
image = preprocess_image(orig_im, model_input_size).to(device)

# inference
result = model(image)

# post process
result_image = postprocess_image(result[0][0], orig_im_size)

# save result
pil_im = Image.fromarray(result_image)
no_bg_image = Image.new("RGBA", pil_im.size, (0, 0, 0, 0))
orig_image = Image.fromarray(orig_im)  # PIL cannot open URLs, so reuse the array loaded above
no_bg_image.paste(orig_image, mask=pil_im)
```
Shakker-Labs/FLUX.1-dev-LoRA-AntiBlur
Shakker-Labs
"2024-09-13T11:51:37Z"
947,140
150
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "image-generation", "flux", "safetensors", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
"2024-09-13T10:50:26Z"
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- image-generation
- flux
- safetensors
widget:
- text: >-
    a young college student, walking on the street, campus background,
    photography
  output:
    url: images/2f82e6b1e5969d70a9044c19975bcdcca06b0f251d14f9c2c6095fa6.jpg
- text: a young woman, New York City
  output:
    url: images/340c1ae6709f56f3d8176848653dcade93d2b5b8ade662da167ef818.jpg
- text: >-
    happy stunning girl with long dark hair, wearing blue clothes, playing
    guitar, a beautiful field of flowers, colorful flowers everywhere, hills in
    the background
  output:
    url: images/ec9a40eed46e8d17d3db1560a6543c6e6be9ebe1e41ecd5d137c01e0.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---

# FLUX.1-dev-LoRA-AntiBlur

This is a functional LoRA trained on FLUX.1-dev for deep DoF (Anti-Blur🔥) by [Vadim_Fedenko](https://www.shakker.ai/userpage/1f90018d803d4045b8dec4d627915098/publish) on [Shakker AI](https://www.shakker.ai/modelinfo/5c3fa3f1d5034e63be325196eae0b4f6?from=search). It may not be fancy, but it works.

<div class="container">
  <img src="./poster.jpg" width="1024"/>
</div>

<!-- ## Showcases
<Gallery /> -->

## Comparison

The following example shows a simple comparison with FLUX.1-dev under the same parameter setting.

<div class="container">
  <img src="./compare1.png" width="1024"/>
</div>

It is worth noting that this LoRA does very little damage to image quality while enhancing the depth of field, and can be used together with other components, such as ControlNet. We regard it as a basic functional LoRA.

<div class="container">
  <img src="./compare2.png" width="1024"/>
</div>

## Trigger words

No trigger word is required. The recommended scale is `1.0` to `1.5` in diffusers.

## Inference

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("Shakker-Labs/FLUX.1-dev-LoRA-AntiBlur", weight_name="FLUX-dev-lora-AntiBlur.safetensors")
pipe.fuse_lora(lora_scale=1.5)
pipe.to("cuda")

prompt = "a young college student, walking on the street, campus background, photography"

image = pipe(prompt,
             num_inference_steps=24,
             guidance_scale=3.5,
             width=768, height=1024,
            ).images[0]
image.save("example.png")
```

## Online Inference

You can also run this model at [Shakker AI](https://www.shakker.ai/modelinfo/5c3fa3f1d5034e63be325196eae0b4f6?from=search), where we provide an online interface to generate images.

## Acknowledgements

This model was trained by [Vadim_Fedenko](https://www.shakker.ai/userpage/1f90018d803d4045b8dec4d627915098/publish), who retains the copyright; we release it with the creator's permission. The model follows the [flux-1-dev-non-commercial-license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
google/bert_uncased_L-2_H-128_A-2
google
"2023-09-05T15:25:24Z"
941,935
29
transformers
[ "transformers", "pytorch", "jax", "safetensors", "bert", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2022-03-02T23:29:05Z"
---
thumbnail: https://huggingface.co/front/thumbnails/google.png
license: apache-2.0
---

BERT Miniatures
===

This is the set of 24 BERT models referenced in [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962) (English only, uncased, trained with WordPiece masking).

We have shown that the standard BERT recipe (including model architecture and training objective) is effective on a wide range of model sizes, beyond BERT-Base and BERT-Large. The smaller BERT models are intended for environments with restricted computational resources. They can be fine-tuned in the same manner as the original BERT models. However, they are most effective in the context of knowledge distillation, where the fine-tuning labels are produced by a larger and more accurate teacher.

Our goal is to enable research in institutions with fewer computational resources and encourage the community to seek directions of innovation alternative to increasing model capacity.

You can download the 24 BERT miniatures either from the [official BERT Github page](https://github.com/google-research/bert/) or via HuggingFace from the links below:

| |H=128|H=256|H=512|H=768|
|---|:---:|:---:|:---:|:---:|
| **L=2** |[**2/128 (BERT-Tiny)**][2_128]|[2/256][2_256]|[2/512][2_512]|[2/768][2_768]|
| **L=4** |[4/128][4_128]|[**4/256 (BERT-Mini)**][4_256]|[**4/512 (BERT-Small)**][4_512]|[4/768][4_768]|
| **L=6** |[6/128][6_128]|[6/256][6_256]|[6/512][6_512]|[6/768][6_768]|
| **L=8** |[8/128][8_128]|[8/256][8_256]|[**8/512 (BERT-Medium)**][8_512]|[8/768][8_768]|
| **L=10** |[10/128][10_128]|[10/256][10_256]|[10/512][10_512]|[10/768][10_768]|
| **L=12** |[12/128][12_128]|[12/256][12_256]|[12/512][12_512]|[**12/768 (BERT-Base)**][12_768]|

Note that the BERT-Base model in this release is included for completeness only; it was re-trained under the same regime as the original model.
Here are the corresponding GLUE scores on the test set:

|Model|Score|CoLA|SST-2|MRPC|STS-B|QQP|MNLI-m|MNLI-mm|QNLI(v2)|RTE|WNLI|AX|
|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|BERT-Tiny|64.2|0.0|83.2|81.1/71.1|74.3/73.6|62.2/83.4|70.2|70.3|81.5|57.2|62.3|21.0|
|BERT-Mini|65.8|0.0|85.9|81.1/71.8|75.4/73.3|66.4/86.2|74.8|74.3|84.1|57.9|62.3|26.1|
|BERT-Small|71.2|27.8|89.7|83.4/76.2|78.8/77.0|68.1/87.0|77.6|77.0|86.4|61.8|62.3|28.6|
|BERT-Medium|73.5|38.0|89.6|86.6/81.6|80.4/78.4|69.6/87.9|80.0|79.1|87.7|62.2|62.3|30.5|

For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained for 4 epochs:

- batch sizes: 8, 16, 32, 64, 128
- learning rates: 3e-4, 1e-4, 5e-5, 3e-5

If you use these models, please cite the following paper:

```
@article{turc2019,
  title={Well-Read Students Learn Better: On the Importance of Pre-training Compact Models},
  author={Turc, Iulia and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
  journal={arXiv preprint arXiv:1908.08962v2},
  year={2019}
}
```

[2_128]: https://huggingface.co/google/bert_uncased_L-2_H-128_A-2
[2_256]: https://huggingface.co/google/bert_uncased_L-2_H-256_A-4
[2_512]: https://huggingface.co/google/bert_uncased_L-2_H-512_A-8
[2_768]: https://huggingface.co/google/bert_uncased_L-2_H-768_A-12
[4_128]: https://huggingface.co/google/bert_uncased_L-4_H-128_A-2
[4_256]: https://huggingface.co/google/bert_uncased_L-4_H-256_A-4
[4_512]: https://huggingface.co/google/bert_uncased_L-4_H-512_A-8
[4_768]: https://huggingface.co/google/bert_uncased_L-4_H-768_A-12
[6_128]: https://huggingface.co/google/bert_uncased_L-6_H-128_A-2
[6_256]: https://huggingface.co/google/bert_uncased_L-6_H-256_A-4
[6_512]: https://huggingface.co/google/bert_uncased_L-6_H-512_A-8
[6_768]: https://huggingface.co/google/bert_uncased_L-6_H-768_A-12
[8_128]: https://huggingface.co/google/bert_uncased_L-8_H-128_A-2
[8_256]: https://huggingface.co/google/bert_uncased_L-8_H-256_A-4
[8_512]: https://huggingface.co/google/bert_uncased_L-8_H-512_A-8
[8_768]: https://huggingface.co/google/bert_uncased_L-8_H-768_A-12
[10_128]: https://huggingface.co/google/bert_uncased_L-10_H-128_A-2
[10_256]: https://huggingface.co/google/bert_uncased_L-10_H-256_A-4
[10_512]: https://huggingface.co/google/bert_uncased_L-10_H-512_A-8
[10_768]: https://huggingface.co/google/bert_uncased_L-10_H-768_A-12
[12_128]: https://huggingface.co/google/bert_uncased_L-12_H-128_A-2
[12_256]: https://huggingface.co/google/bert_uncased_L-12_H-256_A-4
[12_512]: https://huggingface.co/google/bert_uncased_L-12_H-512_A-8
[12_768]: https://huggingface.co/google/bert_uncased_L-12_H-768_A-12
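A minimal sketch of loading this checkpoint with `transformers` (standard BERT usage, not part of the original release notes):

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/bert_uncased_L-2_H-128_A-2")
model = AutoModel.from_pretrained("google/bert_uncased_L-2_H-128_A-2")

inputs = tokenizer("Well-read students learn better.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, seq_len, 128) for BERT-Tiny
```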
facebook/hubert-base-ls960
facebook
"2021-11-05T12:43:12Z"
938,446
47
transformers
[ "transformers", "pytorch", "tf", "hubert", "feature-extraction", "speech", "en", "dataset:librispeech_asr", "arxiv:2106.07447", "license:apache-2.0", "endpoints_compatible", "region:us" ]
feature-extraction
"2022-03-02T23:29:05Z"
---
language: en
datasets:
- librispeech_asr
tags:
- speech
license: apache-2.0
---

# Hubert-Base

[Facebook's Hubert](https://ai.facebook.com/blog/hubert-self-supervised-representation-learning-for-speech-recognition-generation-and-compression)

The base model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.

**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for a more detailed explanation of how to fine-tune the model.

[Paper](https://arxiv.org/abs/2106.07447)

Authors: Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed

**Abstract**
Self-supervised approaches for speech representation learning are challenged by three unique problems: (1) there are multiple sound units in each input utterance, (2) there is no lexicon of input sound units during the pre-training phase, and (3) sound units have variable lengths with no explicit segmentation. To deal with these three problems, we propose the Hidden-Unit BERT (HuBERT) approach for self-supervised speech representation learning, which utilizes an offline clustering step to provide aligned target labels for a BERT-like prediction loss. A key ingredient of our approach is applying the prediction loss over the masked regions only, which forces the model to learn a combined acoustic and language model over the continuous inputs. HuBERT relies primarily on the consistency of the unsupervised clustering step rather than the intrinsic quality of the assigned cluster labels. Starting with a simple k-means teacher of 100 clusters, and using two iterations of clustering, the HuBERT model either matches or improves upon the state-of-the-art wav2vec 2.0 performance on the Librispeech (960h) and Libri-light (60,000h) benchmarks with 10min, 1h, 10h, 100h, and 960h fine-tuning subsets. Using a 1B parameter model, HuBERT shows up to 19% and 13% relative WER reduction on the more challenging dev-other and test-other evaluation subsets.

The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/hubert .

# Usage

See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model. Note that the class `Wav2Vec2ForCTC` has to be replaced by `HubertForCTC`.
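A minimal sketch of the class swap mentioned above when setting up ASR fine-tuning (the vocabulary size is a hypothetical placeholder; in practice, build a character tokenizer from your labeled data as in the linked blog post):

```python
from transformers import HubertForCTC

# The CTC head is newly initialized here and must be trained on labeled data.
model = HubertForCTC.from_pretrained(
    "facebook/hubert-base-ls960",
    vocab_size=32,              # size of your character vocabulary (assumed)
    ctc_loss_reduction="mean",
)
model.freeze_feature_encoder()  # common practice when fine-tuning for ASR
```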
Ashishkr/query_wellformedness_score
Ashishkr
"2024-03-30T11:51:12Z"
937,648
29
transformers
[ "transformers", "pytorch", "jax", "safetensors", "roberta", "text-classification", "dataset:google_wellformed_query", "doi:10.57967/hf/1980", "license:apache-2.0", "autotrain_compatible", "region:us" ]
text-classification
"2022-03-02T23:29:05Z"
---
license: apache-2.0
inference: false
datasets: google_wellformed_query
---

```DOI
@misc {ashish_kumar_2024,
    author       = { {Ashish Kumar} },
    title        = { query_wellformedness_score (Revision 55a424c) },
    year         = 2024,
    url          = { https://huggingface.co/Ashishkr/query_wellformedness_score },
    doi          = { 10.57967/hf/1980 },
    publisher    = { Hugging Face }
}
```

**Intended Use Cases**

*Content Creation*: Validate the well-formedness of written content.
*Educational Platforms*: Helps students check the grammaticality of their sentences.
*Chatbots & Virtual Assistants*: To validate user queries or generate well-formed responses.

**contact: kua613@g.harvard.edu**

**Model name**: Query Wellformedness Scoring

**Description**: Evaluates the well-formedness of sentences by checking grammatical correctness and completeness. The model is sensitive to case and penalizes sentences for incorrect grammar and casing.

**Features**:
- *Wellformedness Score*: Provides a score indicating grammatical correctness and completeness.
- *Case Sensitivity*: Recognizes and penalizes incorrect casing in sentences.
- *Broad Applicability*: Can be used on a wide range of sentences.

**Example**:

1. Dogs are mammals.
2. she loves to read books on history.
3. When the rain in Spain.
4. Eating apples are healthy for you.
5. The Eiffel Tower is in Paris.

Among these sentences: Sentences 1 and 5 are well-formed and have correct grammar and case. Sentence 2 starts with a lowercase letter. Sentence 3 is a fragment and is not well-formed. Sentence 4 has a subject-verb agreement error.

**example_usage:**

*library: HuggingFace transformers*

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Ashishkr/query_wellformedness_score")
model = AutoModelForSequenceClassification.from_pretrained("Ashishkr/query_wellformedness_score")

sentences = [
    "The quarterly financial report are showing an increase.",  # Incorrect
    "Him has completed the audit for last fiscal year.",  # Incorrect
    "Please to inform the board about the recent developments.",  # Incorrect
    "The team successfully achieved all its targets for the last quarter.",  # Correct
    "Our company is exploring new ventures in the European market."  # Correct
]

features = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
model.eval()
with torch.no_grad():
    scores = model(**features).logits
print(scores)
```

If you use this model, please cite `Ashishkr/query_wellformedness_score` using the DOI entry above.
Systran/faster-whisper-large-v2
Systran
"2023-11-23T11:44:31Z"
935,926
29
ctranslate2
[ "ctranslate2", "audio", "automatic-speech-recognition", "en", "zh", "de", "es", "ru", "ko", "fr", "ja", "pt", "tr", "pl", "ca", "nl", "ar", "sv", "it", "id", "hi", "fi", "vi", "he", "uk", "el", "ms", "cs", "ro", "da", "hu", "ta", "no", "th", "ur", "hr", "bg", "lt", "la", "mi", "ml", "cy", "sk", "te", "fa", "lv", "bn", "sr", "az", "sl", "kn", "et", "mk", "br", "eu", "is", "hy", "ne", "mn", "bs", "kk", "sq", "sw", "gl", "mr", "pa", "si", "km", "sn", "yo", "so", "af", "oc", "ka", "be", "tg", "sd", "gu", "am", "yi", "lo", "uz", "fo", "ht", "ps", "tk", "nn", "mt", "sa", "lb", "my", "bo", "tl", "mg", "as", "tt", "haw", "ln", "ha", "ba", "jw", "su", "license:mit", "region:us" ]
automatic-speech-recognition
"2023-11-23T09:50:45Z"
---
language: [en, zh, de, es, ru, ko, fr, ja, pt, tr, pl, ca, nl, ar, sv, it, id, hi, fi, vi, he, uk, el, ms, cs, ro, da, hu, ta, 'no', th, ur, hr, bg, lt, la, mi, ml, cy, sk, te, fa, lv, bn, sr, az, sl, kn, et, mk, br, eu, is, hy, ne, mn, bs, kk, sq, sw, gl, mr, pa, si, km, sn, yo, so, af, oc, ka, be, tg, sd, gu, am, yi, lo, uz, fo, ht, ps, tk, nn, mt, sa, lb, my, bo, tl, mg, as, tt, haw, ln, ha, ba, jw, su]
tags:
- audio
- automatic-speech-recognition
license: mit
library_name: ctranslate2
---

# Whisper large-v2 model for CTranslate2

This repository contains the conversion of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) to the [CTranslate2](https://github.com/OpenNMT/CTranslate2) model format.

This model can be used in CTranslate2 or projects based on CTranslate2 such as [faster-whisper](https://github.com/systran/faster-whisper).

## Example

```python
from faster_whisper import WhisperModel

model = WhisperModel("large-v2")

segments, info = model.transcribe("audio.mp3")
for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```

## Conversion details

The original model was converted with the following command:

```
ct2-transformers-converter --model openai/whisper-large-v2 --output_dir faster-whisper-large-v2 \
    --copy_files tokenizer.json --quantization float16
```

Note that the model weights are saved in FP16. This type can be changed when the model is loaded using the [`compute_type` option in CTranslate2](https://opennmt.net/CTranslate2/quantization.html).

## More information

**For more information about the original model, see its [model card](https://huggingface.co/openai/whisper-large-v2).**
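As a follow-up to the `compute_type` note above, the type can also be overridden when loading the model (a sketch; `int8_float16` availability depends on your hardware):

```python
from faster_whisper import WhisperModel

# Load on GPU with the weights converted to INT8 on the fly
model = WhisperModel("large-v2", device="cuda", compute_type="int8_float16")
```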
Iceland/quote-model-BERTm-v1
Iceland
"2023-09-05T21:00:38Z"
930,192
1
transformers
[ "transformers", "pytorch", "bert", "token-classification", "generated_from_trainer", "base_model:google-bert/bert-base-multilingual-cased", "base_model:finetune:google-bert/bert-base-multilingual-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2023-09-05T20:39:44Z"
--- license: apache-2.0 base_model: bert-base-multilingual-cased tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: quote-model-BERTm-v1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # quote-model-BERTm-v1 This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2151 - Precision: 0.8161 - Recall: 0.9262 - F1: 0.8676 - Accuracy: 0.9314 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.3211 | 1.0 | 976 | 0.2253 | 0.8120 | 0.9191 | 0.8622 | 0.9295 | | 0.186 | 2.0 | 1952 | 0.2257 | 0.8122 | 0.9265 | 0.8656 | 0.9303 | | 0.1573 | 3.0 | 2928 | 0.2151 | 0.8161 | 0.9262 | 0.8676 | 0.9314 | ### Framework versions - Transformers 4.33.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
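The card leaves usage unspecified; below is a minimal sketch for running the checkpoint as a token-classification pipeline. The label names come from the checkpoint's config and are not documented here, so inspect the output rather than assuming particular tags:

```python
from transformers import pipeline

# Minimal sketch -- the entity labels depend on the (undocumented) training data.
nlp = pipeline(
    "token-classification",
    model="Iceland/quote-model-BERTm-v1",
    aggregation_strategy="simple",
)
print(nlp('She said: "We will publish the results tomorrow."'))
```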
Iceland/french-xml-model-a
Iceland
"2024-04-09T19:58:56Z"
929,820
0
transformers
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "token-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2024-04-09T19:33:32Z"
--- license: mit base_model: FacebookAI/xlm-roberta-base tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: french-xml-model-a results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # french-xml-model-a This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2174 - Precision: 0.8228 - Recall: 0.9253 - F1: 0.8711 - Accuracy: 0.9322 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.4036 | 1.0 | 976 | 0.2509 | 0.7877 | 0.9197 | 0.8486 | 0.9227 | | 0.2033 | 2.0 | 1952 | 0.2110 | 0.8204 | 0.9199 | 0.8673 | 0.9312 | | 0.1734 | 3.0 | 2928 | 0.2174 | 0.8228 | 0.9253 | 0.8711 | 0.9322 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
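As with the card above, usage is not documented. A sketch for inspecting this checkpoint's predictions token by token; the `id2label` mapping is read from the checkpoint's config, since the label set is not listed in the card:

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_id = "Iceland/french-xml-model-a"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

# Example French sentence; label meanings depend on the undocumented training data.
enc = tokenizer("Elle a déclaré : « Nous publierons les résultats demain. »", return_tensors="pt")
with torch.no_grad():
    pred_ids = model(**enc).logits.argmax(-1)[0]

for token, pid in zip(tokenizer.convert_ids_to_tokens(enc["input_ids"][0]), pred_ids):
    print(f"{token}\t{model.config.id2label[pid.item()]}")
```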
michellejieli/emotion_text_classifier
michellejieli
"2023-05-03T00:39:47Z"
921,097
101
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "distilroberta", "sentiment", "emotion", "twitter", "reddit", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2022-10-22T22:44:07Z"
---
language: "en"
tags:
- distilroberta
- sentiment
- emotion
- twitter
- reddit
widget:
- text: "Oh my God, he's lost it. He's totally lost it."
- text: "What?"
- text: "Wow, congratulations! So excited for you!"
---

# Fine-tuned DistilRoBERTa-base for Emotion Classification 🤬🤢😀😐😭😲

# Model Description

This model is a fine-tuned DistilRoBERTa-base transformer for emotion classification. I fine-tuned the model on transcripts from the Friends show with the goal of classifying emotions from text data, specifically dialogue from Netflix shows or movies. The model predicts the 6 Ekman emotions plus a neutral class: anger, disgust, fear, joy, sadness, surprise, and neutral.

The model is a fine-tuned version of [Emotion English DistilRoBERTa-base](https://huggingface.co/j-hartmann/emotion-english-distilroberta-base/), which is itself built on [DistilRoBERTa-base](https://huggingface.co/distilroberta-base). The base emotion model was initially trained on the datasets in the following table, from [Emotion English DistilRoBERTa-base](https://huggingface.co/j-hartmann/emotion-english-distilroberta-base/):

|Name|anger|disgust|fear|joy|neutral|sadness|surprise|
|---|---|---|---|---|---|---|---|
|Crowdflower (2016)|Yes|-|-|Yes|Yes|Yes|Yes|
|Emotion Dataset, Elvis et al. (2018)|Yes|-|Yes|Yes|-|Yes|Yes|
|GoEmotions, Demszky et al. (2020)|Yes|Yes|Yes|Yes|Yes|Yes|Yes|
|ISEAR, Vikash (2018)|Yes|Yes|Yes|Yes|-|Yes|-|
|MELD, Poria et al. (2019)|Yes|Yes|Yes|Yes|Yes|Yes|Yes|
|SemEval-2018, EI-reg, Mohammad et al. (2018)|Yes|-|Yes|Yes|-|Yes|-|

It was fine-tuned on:

|Name|anger|disgust|fear|joy|neutral|sadness|surprise|
|---|---|---|---|---|---|---|---|
|Emotion Lines (Friends)|Yes|Yes|Yes|Yes|Yes|Yes|Yes|

# How to Use

```python
from transformers import pipeline
classifier = pipeline("sentiment-analysis", model="michellejieli/emotion_text_classifier")
classifier("I love this!")
```

```python
Output:
[{'label': 'joy', 'score': 0.9887555241584778}]
```

# Contact

Please reach out to [michelleli1999@gmail.com](mailto:michelleli1999@gmail.com) if you have any questions or feedback.

# Reference

```
Jochen Hartmann, "Emotion English DistilRoBERTa-base". https://huggingface.co/j-hartmann/emotion-english-distilroberta-base/, 2022.
Ashritha R Murthy and K M Anil Kumar 2021 IOP Conf. Ser.: Mater. Sci. Eng. 1110 012009
```
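One small addition to the usage example above: passing `top_k=None` to the pipeline call returns scores for all seven classes rather than only the top label (this is the current transformers API; older releases used `return_all_scores=True`):

```python
# Reusing the classifier from the example above.
print(classifier("Oh my God, he's lost it. He's totally lost it.", top_k=None))
```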
martin-ha/toxic-comment-model
martin-ha
"2022-05-06T02:24:31Z"
919,397
55
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2022-03-02T23:29:05Z"
---
language: en
---

## Model description

This model is a fine-tuned version of the [DistilBERT model](https://huggingface.co/transformers/model_doc/distilbert.html) for classifying toxic comments.

## How to use

You can use the model with the following code.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, TextClassificationPipeline

model_path = "martin-ha/toxic-comment-model"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForSequenceClassification.from_pretrained(model_path)

pipeline = TextClassificationPipeline(model=model, tokenizer=tokenizer)
print(pipeline('This is a test text.'))
```

## Limitations and Bias

This model is intended to be used for classifying toxic online comments. However, one limitation of the model is that it performs poorly on some comments that mention a specific identity subgroup, like Muslim. The following table shows evaluation scores for different identity groups. You can learn the specific meaning of these metrics [here](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/overview/evaluation). But basically, these metrics show how well the model performs for a specific group. The larger the number, the better.

| **subgroup**                  | **subgroup_size** | **subgroup_auc** | **bpsn_auc** | **bnsp_auc** |
| ----------------------------- | ----------------- | ---------------- | ------------ | ------------ |
| muslim                        | 108               | 0.689            | 0.811        | 0.88         |
| jewish                        | 40                | 0.749            | 0.86         | 0.825        |
| homosexual_gay_or_lesbian     | 56                | 0.795            | 0.706        | 0.972        |
| black                         | 84                | 0.866            | 0.758        | 0.975        |
| white                         | 112               | 0.876            | 0.784        | 0.97         |
| female                        | 306               | 0.898            | 0.887        | 0.948        |
| christian                     | 231               | 0.904            | 0.917        | 0.93         |
| male                          | 225               | 0.922            | 0.862        | 0.967        |
| psychiatric_or_mental_illness | 26                | 0.924            | 0.907        | 0.95         |

The table above shows that the model performs poorly for the muslim and jewish groups. In fact, if you pass the sentence "Muslims are people who follow or practice Islam, an Abrahamic monotheistic religion." into the model, it will classify it as toxic. Be mindful of this type of potential bias.

## Training data

The training data comes from this [Kaggle competition](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/data). We use 10% of the `train.csv` data to train the model.

## Training procedure

You can see [this documentation and codes](https://github.com/MSIA/wenyang_pan_nlp_project_2021) for how we trained the model. It takes about 3 hours on a P100 GPU.

## Evaluation results

The model achieves 94% accuracy and a 0.59 F1-score on a held-out test set of 10,000 rows.
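The bias noted above is easy to reproduce with the same pipeline; the sentence below is the card's own example of a benign, identity-mentioning input that the model is expected to misclassify as toxic:

```python
# Reusing the pipeline object from the example above.
benign = "Muslims are people who follow or practice Islam, an Abrahamic monotheistic religion."
print(pipeline(benign))  # Per the card, this benign sentence comes back labelled toxic.
```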
microsoft/table-transformer-structure-recognition-v1.1-all
microsoft
"2023-11-18T21:58:10Z"
918,373
57
transformers
[ "transformers", "safetensors", "table-transformer", "object-detection", "arxiv:2303.00716", "license:mit", "endpoints_compatible", "region:us" ]
object-detection
"2023-11-18T21:33:25Z"
---
license: mit
---

# Table Transformer (pre-trained for Table Structure Recognition)

Table Transformer (TATR) model trained on PubTables1M and FinTabNet.c. It was introduced in the paper [Aligning benchmark datasets for table structure recognition](https://arxiv.org/abs/2303.00716) by Smock et al. and first released in [this repository](https://github.com/microsoft/table-transformer).

Disclaimer: The team releasing Table Transformer did not write a model card for this model so this model card has been written by the Hugging Face team.

## Model description

The Table Transformer is equivalent to [DETR](https://huggingface.co/docs/transformers/model_doc/detr), a Transformer-based object detection model. Note that the authors decided to use the "normalize before" setting of DETR, which means that layernorm is applied before self- and cross-attention.

## Usage

You can use the raw model for recognizing the structure (rows, columns, and cells) of tables in documents, as in the sketch below. See the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/table-transformer) for more info.
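A minimal sketch of that usage, assuming the checkpoint ships an image-processor config and that `table.png` is a placeholder for an image cropped to a single table:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, TableTransformerForObjectDetection

# Placeholder input: the model expects an image cropped to one table.
image = Image.open("table.png").convert("RGB")

checkpoint = "microsoft/table-transformer-structure-recognition-v1.1-all"
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = TableTransformerForObjectDetection.from_pretrained(checkpoint)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert raw outputs to labelled boxes (table rows, columns, spanning cells, ...).
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
results = processor.post_process_object_detection(outputs, threshold=0.7, target_sizes=target_sizes)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), [round(v, 1) for v in box.tolist()])
```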
thenlper/gte-large
thenlper
"2024-02-05T07:16:01Z"
904,784
249
sentence-transformers
[ "sentence-transformers", "pytorch", "onnx", "safetensors", "bert", "mteb", "sentence-similarity", "Sentence Transformers", "en", "arxiv:2308.03281", "license:mit", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
"2023-07-27T09:55:39Z"
--- tags: - mteb - sentence-similarity - sentence-transformers - Sentence Transformers model-index: - name: gte-large results: - task: type: Classification dataset: type: mteb/amazon_counterfactual name: MTEB AmazonCounterfactualClassification (en) config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 72.62686567164178 - type: ap value: 34.46944126809772 - type: f1 value: 66.23684353950857 - task: type: Classification dataset: type: mteb/amazon_polarity name: MTEB AmazonPolarityClassification config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 92.51805 - type: ap value: 89.49842783330848 - type: f1 value: 92.51112169431808 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (en) config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 49.074 - type: f1 value: 48.44785682572955 - task: type: Retrieval dataset: type: arguana name: MTEB ArguAna config: default split: test revision: None metrics: - type: map_at_1 value: 32.077 - type: map_at_10 value: 48.153 - type: map_at_100 value: 48.963 - type: map_at_1000 value: 48.966 - type: map_at_3 value: 43.184 - type: map_at_5 value: 46.072 - type: mrr_at_1 value: 33.073 - type: mrr_at_10 value: 48.54 - type: mrr_at_100 value: 49.335 - type: mrr_at_1000 value: 49.338 - type: mrr_at_3 value: 43.563 - type: mrr_at_5 value: 46.383 - type: ndcg_at_1 value: 32.077 - type: ndcg_at_10 value: 57.158 - type: ndcg_at_100 value: 60.324999999999996 - type: ndcg_at_1000 value: 60.402 - type: ndcg_at_3 value: 46.934 - type: ndcg_at_5 value: 52.158 - type: precision_at_1 value: 32.077 - type: precision_at_10 value: 8.591999999999999 - type: precision_at_100 value: 0.991 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 19.275000000000002 - type: precision_at_5 value: 14.111 - type: recall_at_1 value: 32.077 - type: recall_at_10 value: 85.917 - type: recall_at_100 value: 99.075 - type: recall_at_1000 value: 99.644 - type: recall_at_3 value: 57.824 - type: recall_at_5 value: 70.555 - task: type: Clustering dataset: type: mteb/arxiv-clustering-p2p name: MTEB ArxivClusteringP2P config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 48.619246083417295 - task: type: Clustering dataset: type: mteb/arxiv-clustering-s2s name: MTEB ArxivClusteringS2S config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 43.3574067664688 - task: type: Reranking dataset: type: mteb/askubuntudupquestions-reranking name: MTEB AskUbuntuDupQuestions config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 63.06359661829253 - type: mrr value: 76.15596007562766 - task: type: STS dataset: type: mteb/biosses-sts name: MTEB BIOSSES config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 90.25407547368691 - type: cos_sim_spearman value: 88.65081514968477 - type: euclidean_pearson value: 88.14857116664494 - type: euclidean_spearman value: 88.50683596540692 - type: manhattan_pearson value: 87.9654797992225 - type: manhattan_spearman value: 88.21164851646908 - task: type: Classification dataset: type: mteb/banking77 name: MTEB Banking77Classification config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: 
accuracy value: 86.05844155844157 - type: f1 value: 86.01555597681825 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-p2p name: MTEB BiorxivClusteringP2P config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 39.10510519739522 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-s2s name: MTEB BiorxivClusteringS2S config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 36.84689960264385 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackAndroidRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 32.800000000000004 - type: map_at_10 value: 44.857 - type: map_at_100 value: 46.512 - type: map_at_1000 value: 46.635 - type: map_at_3 value: 41.062 - type: map_at_5 value: 43.126 - type: mrr_at_1 value: 39.628 - type: mrr_at_10 value: 50.879 - type: mrr_at_100 value: 51.605000000000004 - type: mrr_at_1000 value: 51.641000000000005 - type: mrr_at_3 value: 48.14 - type: mrr_at_5 value: 49.835 - type: ndcg_at_1 value: 39.628 - type: ndcg_at_10 value: 51.819 - type: ndcg_at_100 value: 57.318999999999996 - type: ndcg_at_1000 value: 58.955999999999996 - type: ndcg_at_3 value: 46.409 - type: ndcg_at_5 value: 48.825 - type: precision_at_1 value: 39.628 - type: precision_at_10 value: 10.072000000000001 - type: precision_at_100 value: 1.625 - type: precision_at_1000 value: 0.21 - type: precision_at_3 value: 22.556 - type: precision_at_5 value: 16.309 - type: recall_at_1 value: 32.800000000000004 - type: recall_at_10 value: 65.078 - type: recall_at_100 value: 87.491 - type: recall_at_1000 value: 97.514 - type: recall_at_3 value: 49.561 - type: recall_at_5 value: 56.135999999999996 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackEnglishRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 32.614 - type: map_at_10 value: 43.578 - type: map_at_100 value: 44.897 - type: map_at_1000 value: 45.023 - type: map_at_3 value: 40.282000000000004 - type: map_at_5 value: 42.117 - type: mrr_at_1 value: 40.510000000000005 - type: mrr_at_10 value: 49.428 - type: mrr_at_100 value: 50.068999999999996 - type: mrr_at_1000 value: 50.111000000000004 - type: mrr_at_3 value: 47.176 - type: mrr_at_5 value: 48.583999999999996 - type: ndcg_at_1 value: 40.510000000000005 - type: ndcg_at_10 value: 49.478 - type: ndcg_at_100 value: 53.852 - type: ndcg_at_1000 value: 55.782 - type: ndcg_at_3 value: 45.091 - type: ndcg_at_5 value: 47.19 - type: precision_at_1 value: 40.510000000000005 - type: precision_at_10 value: 9.363000000000001 - type: precision_at_100 value: 1.51 - type: precision_at_1000 value: 0.196 - type: precision_at_3 value: 21.741 - type: precision_at_5 value: 15.465000000000002 - type: recall_at_1 value: 32.614 - type: recall_at_10 value: 59.782000000000004 - type: recall_at_100 value: 78.012 - type: recall_at_1000 value: 90.319 - type: recall_at_3 value: 46.825 - type: recall_at_5 value: 52.688 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGamingRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 40.266000000000005 - type: map_at_10 value: 53.756 - type: map_at_100 value: 54.809 - type: map_at_1000 value: 54.855 - type: map_at_3 value: 50.073 - type: map_at_5 value: 52.293 - type: mrr_at_1 value: 46.332 - type: mrr_at_10 value: 57.116 - type: mrr_at_100 value: 57.767 - type: mrr_at_1000 value: 
57.791000000000004 - type: mrr_at_3 value: 54.461999999999996 - type: mrr_at_5 value: 56.092 - type: ndcg_at_1 value: 46.332 - type: ndcg_at_10 value: 60.092 - type: ndcg_at_100 value: 64.034 - type: ndcg_at_1000 value: 64.937 - type: ndcg_at_3 value: 54.071000000000005 - type: ndcg_at_5 value: 57.254000000000005 - type: precision_at_1 value: 46.332 - type: precision_at_10 value: 9.799 - type: precision_at_100 value: 1.278 - type: precision_at_1000 value: 0.13899999999999998 - type: precision_at_3 value: 24.368000000000002 - type: precision_at_5 value: 16.89 - type: recall_at_1 value: 40.266000000000005 - type: recall_at_10 value: 75.41499999999999 - type: recall_at_100 value: 92.01700000000001 - type: recall_at_1000 value: 98.379 - type: recall_at_3 value: 59.476 - type: recall_at_5 value: 67.297 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGisRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 28.589 - type: map_at_10 value: 37.755 - type: map_at_100 value: 38.881 - type: map_at_1000 value: 38.954 - type: map_at_3 value: 34.759 - type: map_at_5 value: 36.544 - type: mrr_at_1 value: 30.734 - type: mrr_at_10 value: 39.742 - type: mrr_at_100 value: 40.774 - type: mrr_at_1000 value: 40.824 - type: mrr_at_3 value: 37.137 - type: mrr_at_5 value: 38.719 - type: ndcg_at_1 value: 30.734 - type: ndcg_at_10 value: 42.978 - type: ndcg_at_100 value: 48.309000000000005 - type: ndcg_at_1000 value: 50.068 - type: ndcg_at_3 value: 37.361 - type: ndcg_at_5 value: 40.268 - type: precision_at_1 value: 30.734 - type: precision_at_10 value: 6.565 - type: precision_at_100 value: 0.964 - type: precision_at_1000 value: 0.11499999999999999 - type: precision_at_3 value: 15.744 - type: precision_at_5 value: 11.096 - type: recall_at_1 value: 28.589 - type: recall_at_10 value: 57.126999999999995 - type: recall_at_100 value: 81.051 - type: recall_at_1000 value: 94.027 - type: recall_at_3 value: 42.045 - type: recall_at_5 value: 49.019 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackMathematicaRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 18.5 - type: map_at_10 value: 27.950999999999997 - type: map_at_100 value: 29.186 - type: map_at_1000 value: 29.298000000000002 - type: map_at_3 value: 25.141000000000002 - type: map_at_5 value: 26.848 - type: mrr_at_1 value: 22.637 - type: mrr_at_10 value: 32.572 - type: mrr_at_100 value: 33.472 - type: mrr_at_1000 value: 33.533 - type: mrr_at_3 value: 29.747 - type: mrr_at_5 value: 31.482 - type: ndcg_at_1 value: 22.637 - type: ndcg_at_10 value: 33.73 - type: ndcg_at_100 value: 39.568 - type: ndcg_at_1000 value: 42.201 - type: ndcg_at_3 value: 28.505999999999997 - type: ndcg_at_5 value: 31.255 - type: precision_at_1 value: 22.637 - type: precision_at_10 value: 6.281000000000001 - type: precision_at_100 value: 1.073 - type: precision_at_1000 value: 0.14300000000000002 - type: precision_at_3 value: 13.847000000000001 - type: precision_at_5 value: 10.224 - type: recall_at_1 value: 18.5 - type: recall_at_10 value: 46.744 - type: recall_at_100 value: 72.072 - type: recall_at_1000 value: 91.03999999999999 - type: recall_at_3 value: 32.551 - type: recall_at_5 value: 39.533 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackPhysicsRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 30.602 - type: map_at_10 value: 42.18 - type: map_at_100 value: 43.6 - type: map_at_1000 value: 43.704 - type: 
map_at_3 value: 38.413000000000004 - type: map_at_5 value: 40.626 - type: mrr_at_1 value: 37.344 - type: mrr_at_10 value: 47.638000000000005 - type: mrr_at_100 value: 48.485 - type: mrr_at_1000 value: 48.52 - type: mrr_at_3 value: 44.867000000000004 - type: mrr_at_5 value: 46.566 - type: ndcg_at_1 value: 37.344 - type: ndcg_at_10 value: 48.632 - type: ndcg_at_100 value: 54.215 - type: ndcg_at_1000 value: 55.981 - type: ndcg_at_3 value: 42.681999999999995 - type: ndcg_at_5 value: 45.732 - type: precision_at_1 value: 37.344 - type: precision_at_10 value: 8.932 - type: precision_at_100 value: 1.376 - type: precision_at_1000 value: 0.17099999999999999 - type: precision_at_3 value: 20.276 - type: precision_at_5 value: 14.726 - type: recall_at_1 value: 30.602 - type: recall_at_10 value: 62.273 - type: recall_at_100 value: 85.12100000000001 - type: recall_at_1000 value: 96.439 - type: recall_at_3 value: 45.848 - type: recall_at_5 value: 53.615 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackProgrammersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 23.952 - type: map_at_10 value: 35.177 - type: map_at_100 value: 36.59 - type: map_at_1000 value: 36.703 - type: map_at_3 value: 31.261 - type: map_at_5 value: 33.222 - type: mrr_at_1 value: 29.337999999999997 - type: mrr_at_10 value: 40.152 - type: mrr_at_100 value: 40.963 - type: mrr_at_1000 value: 41.016999999999996 - type: mrr_at_3 value: 36.91 - type: mrr_at_5 value: 38.685 - type: ndcg_at_1 value: 29.337999999999997 - type: ndcg_at_10 value: 41.994 - type: ndcg_at_100 value: 47.587 - type: ndcg_at_1000 value: 49.791000000000004 - type: ndcg_at_3 value: 35.27 - type: ndcg_at_5 value: 38.042 - type: precision_at_1 value: 29.337999999999997 - type: precision_at_10 value: 8.276 - type: precision_at_100 value: 1.276 - type: precision_at_1000 value: 0.164 - type: precision_at_3 value: 17.161 - type: precision_at_5 value: 12.671 - type: recall_at_1 value: 23.952 - type: recall_at_10 value: 57.267 - type: recall_at_100 value: 80.886 - type: recall_at_1000 value: 95.611 - type: recall_at_3 value: 38.622 - type: recall_at_5 value: 45.811 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 27.092083333333335 - type: map_at_10 value: 37.2925 - type: map_at_100 value: 38.57041666666666 - type: map_at_1000 value: 38.68141666666667 - type: map_at_3 value: 34.080000000000005 - type: map_at_5 value: 35.89958333333333 - type: mrr_at_1 value: 31.94758333333333 - type: mrr_at_10 value: 41.51049999999999 - type: mrr_at_100 value: 42.36099999999999 - type: mrr_at_1000 value: 42.4125 - type: mrr_at_3 value: 38.849583333333335 - type: mrr_at_5 value: 40.448249999999994 - type: ndcg_at_1 value: 31.94758333333333 - type: ndcg_at_10 value: 43.17633333333333 - type: ndcg_at_100 value: 48.45241666666668 - type: ndcg_at_1000 value: 50.513999999999996 - type: ndcg_at_3 value: 37.75216666666667 - type: ndcg_at_5 value: 40.393833333333326 - type: precision_at_1 value: 31.94758333333333 - type: precision_at_10 value: 7.688916666666666 - type: precision_at_100 value: 1.2250833333333333 - type: precision_at_1000 value: 0.1595 - type: precision_at_3 value: 17.465999999999998 - type: precision_at_5 value: 12.548083333333333 - type: recall_at_1 value: 27.092083333333335 - type: recall_at_10 value: 56.286583333333326 - type: recall_at_100 value: 79.09033333333333 - type: recall_at_1000 value: 
93.27483333333335 - type: recall_at_3 value: 41.35325 - type: recall_at_5 value: 48.072750000000006 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackStatsRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 25.825 - type: map_at_10 value: 33.723 - type: map_at_100 value: 34.74 - type: map_at_1000 value: 34.824 - type: map_at_3 value: 31.369000000000003 - type: map_at_5 value: 32.533 - type: mrr_at_1 value: 29.293999999999997 - type: mrr_at_10 value: 36.84 - type: mrr_at_100 value: 37.681 - type: mrr_at_1000 value: 37.742 - type: mrr_at_3 value: 34.79 - type: mrr_at_5 value: 35.872 - type: ndcg_at_1 value: 29.293999999999997 - type: ndcg_at_10 value: 38.385999999999996 - type: ndcg_at_100 value: 43.327 - type: ndcg_at_1000 value: 45.53 - type: ndcg_at_3 value: 33.985 - type: ndcg_at_5 value: 35.817 - type: precision_at_1 value: 29.293999999999997 - type: precision_at_10 value: 6.12 - type: precision_at_100 value: 0.9329999999999999 - type: precision_at_1000 value: 0.11900000000000001 - type: precision_at_3 value: 14.621999999999998 - type: precision_at_5 value: 10.030999999999999 - type: recall_at_1 value: 25.825 - type: recall_at_10 value: 49.647000000000006 - type: recall_at_100 value: 72.32300000000001 - type: recall_at_1000 value: 88.62400000000001 - type: recall_at_3 value: 37.366 - type: recall_at_5 value: 41.957 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackTexRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 18.139 - type: map_at_10 value: 26.107000000000003 - type: map_at_100 value: 27.406999999999996 - type: map_at_1000 value: 27.535999999999998 - type: map_at_3 value: 23.445 - type: map_at_5 value: 24.916 - type: mrr_at_1 value: 21.817 - type: mrr_at_10 value: 29.99 - type: mrr_at_100 value: 31.052000000000003 - type: mrr_at_1000 value: 31.128 - type: mrr_at_3 value: 27.627000000000002 - type: mrr_at_5 value: 29.005 - type: ndcg_at_1 value: 21.817 - type: ndcg_at_10 value: 31.135 - type: ndcg_at_100 value: 37.108000000000004 - type: ndcg_at_1000 value: 39.965 - type: ndcg_at_3 value: 26.439 - type: ndcg_at_5 value: 28.655 - type: precision_at_1 value: 21.817 - type: precision_at_10 value: 5.757000000000001 - type: precision_at_100 value: 1.036 - type: precision_at_1000 value: 0.147 - type: precision_at_3 value: 12.537 - type: precision_at_5 value: 9.229 - type: recall_at_1 value: 18.139 - type: recall_at_10 value: 42.272999999999996 - type: recall_at_100 value: 68.657 - type: recall_at_1000 value: 88.93799999999999 - type: recall_at_3 value: 29.266 - type: recall_at_5 value: 34.892 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackUnixRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 27.755000000000003 - type: map_at_10 value: 37.384 - type: map_at_100 value: 38.56 - type: map_at_1000 value: 38.655 - type: map_at_3 value: 34.214 - type: map_at_5 value: 35.96 - type: mrr_at_1 value: 32.369 - type: mrr_at_10 value: 41.625 - type: mrr_at_100 value: 42.449 - type: mrr_at_1000 value: 42.502 - type: mrr_at_3 value: 38.899 - type: mrr_at_5 value: 40.489999999999995 - type: ndcg_at_1 value: 32.369 - type: ndcg_at_10 value: 43.287 - type: ndcg_at_100 value: 48.504999999999995 - type: ndcg_at_1000 value: 50.552 - type: ndcg_at_3 value: 37.549 - type: ndcg_at_5 value: 40.204 - type: precision_at_1 value: 32.369 - type: precision_at_10 value: 7.425 - type: precision_at_100 value: 1.134 - type: 
precision_at_1000 value: 0.14200000000000002 - type: precision_at_3 value: 17.102 - type: precision_at_5 value: 12.107999999999999 - type: recall_at_1 value: 27.755000000000003 - type: recall_at_10 value: 57.071000000000005 - type: recall_at_100 value: 79.456 - type: recall_at_1000 value: 93.54299999999999 - type: recall_at_3 value: 41.298 - type: recall_at_5 value: 48.037 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWebmastersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 24.855 - type: map_at_10 value: 34.53 - type: map_at_100 value: 36.167 - type: map_at_1000 value: 36.394999999999996 - type: map_at_3 value: 31.037 - type: map_at_5 value: 33.119 - type: mrr_at_1 value: 30.631999999999998 - type: mrr_at_10 value: 39.763999999999996 - type: mrr_at_100 value: 40.77 - type: mrr_at_1000 value: 40.826 - type: mrr_at_3 value: 36.495 - type: mrr_at_5 value: 38.561 - type: ndcg_at_1 value: 30.631999999999998 - type: ndcg_at_10 value: 40.942 - type: ndcg_at_100 value: 47.07 - type: ndcg_at_1000 value: 49.363 - type: ndcg_at_3 value: 35.038000000000004 - type: ndcg_at_5 value: 38.161 - type: precision_at_1 value: 30.631999999999998 - type: precision_at_10 value: 7.983999999999999 - type: precision_at_100 value: 1.6070000000000002 - type: precision_at_1000 value: 0.246 - type: precision_at_3 value: 16.206 - type: precision_at_5 value: 12.253 - type: recall_at_1 value: 24.855 - type: recall_at_10 value: 53.291999999999994 - type: recall_at_100 value: 80.283 - type: recall_at_1000 value: 94.309 - type: recall_at_3 value: 37.257 - type: recall_at_5 value: 45.282 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWordpressRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 21.208 - type: map_at_10 value: 30.512 - type: map_at_100 value: 31.496000000000002 - type: map_at_1000 value: 31.595000000000002 - type: map_at_3 value: 27.904 - type: map_at_5 value: 29.491 - type: mrr_at_1 value: 22.736 - type: mrr_at_10 value: 32.379999999999995 - type: mrr_at_100 value: 33.245000000000005 - type: mrr_at_1000 value: 33.315 - type: mrr_at_3 value: 29.945 - type: mrr_at_5 value: 31.488 - type: ndcg_at_1 value: 22.736 - type: ndcg_at_10 value: 35.643 - type: ndcg_at_100 value: 40.535 - type: ndcg_at_1000 value: 43.042 - type: ndcg_at_3 value: 30.625000000000004 - type: ndcg_at_5 value: 33.323 - type: precision_at_1 value: 22.736 - type: precision_at_10 value: 5.6930000000000005 - type: precision_at_100 value: 0.889 - type: precision_at_1000 value: 0.122 - type: precision_at_3 value: 13.431999999999999 - type: precision_at_5 value: 9.575 - type: recall_at_1 value: 21.208 - type: recall_at_10 value: 49.47 - type: recall_at_100 value: 71.71499999999999 - type: recall_at_1000 value: 90.55499999999999 - type: recall_at_3 value: 36.124 - type: recall_at_5 value: 42.606 - task: type: Retrieval dataset: type: climate-fever name: MTEB ClimateFEVER config: default split: test revision: None metrics: - type: map_at_1 value: 11.363 - type: map_at_10 value: 20.312 - type: map_at_100 value: 22.225 - type: map_at_1000 value: 22.411 - type: map_at_3 value: 16.68 - type: map_at_5 value: 18.608 - type: mrr_at_1 value: 25.537 - type: mrr_at_10 value: 37.933 - type: mrr_at_100 value: 38.875 - type: mrr_at_1000 value: 38.911 - type: mrr_at_3 value: 34.387 - type: mrr_at_5 value: 36.51 - type: ndcg_at_1 value: 25.537 - type: ndcg_at_10 value: 28.82 - type: ndcg_at_100 value: 36.341 - type: ndcg_at_1000 
value: 39.615 - type: ndcg_at_3 value: 23.01 - type: ndcg_at_5 value: 25.269000000000002 - type: precision_at_1 value: 25.537 - type: precision_at_10 value: 9.153 - type: precision_at_100 value: 1.7319999999999998 - type: precision_at_1000 value: 0.234 - type: precision_at_3 value: 17.22 - type: precision_at_5 value: 13.629 - type: recall_at_1 value: 11.363 - type: recall_at_10 value: 35.382999999999996 - type: recall_at_100 value: 61.367000000000004 - type: recall_at_1000 value: 79.699 - type: recall_at_3 value: 21.495 - type: recall_at_5 value: 27.42 - task: type: Retrieval dataset: type: dbpedia-entity name: MTEB DBPedia config: default split: test revision: None metrics: - type: map_at_1 value: 9.65 - type: map_at_10 value: 20.742 - type: map_at_100 value: 29.614 - type: map_at_1000 value: 31.373 - type: map_at_3 value: 14.667 - type: map_at_5 value: 17.186 - type: mrr_at_1 value: 69.75 - type: mrr_at_10 value: 76.762 - type: mrr_at_100 value: 77.171 - type: mrr_at_1000 value: 77.179 - type: mrr_at_3 value: 75.125 - type: mrr_at_5 value: 76.287 - type: ndcg_at_1 value: 57.62500000000001 - type: ndcg_at_10 value: 42.370999999999995 - type: ndcg_at_100 value: 47.897 - type: ndcg_at_1000 value: 55.393 - type: ndcg_at_3 value: 46.317 - type: ndcg_at_5 value: 43.906 - type: precision_at_1 value: 69.75 - type: precision_at_10 value: 33.95 - type: precision_at_100 value: 10.885 - type: precision_at_1000 value: 2.2239999999999998 - type: precision_at_3 value: 49.75 - type: precision_at_5 value: 42.3 - type: recall_at_1 value: 9.65 - type: recall_at_10 value: 26.117 - type: recall_at_100 value: 55.084 - type: recall_at_1000 value: 78.62400000000001 - type: recall_at_3 value: 15.823 - type: recall_at_5 value: 19.652 - task: type: Classification dataset: type: mteb/emotion name: MTEB EmotionClassification config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 47.885 - type: f1 value: 42.99567641346983 - task: type: Retrieval dataset: type: fever name: MTEB FEVER config: default split: test revision: None metrics: - type: map_at_1 value: 70.97 - type: map_at_10 value: 80.34599999999999 - type: map_at_100 value: 80.571 - type: map_at_1000 value: 80.584 - type: map_at_3 value: 79.279 - type: map_at_5 value: 79.94 - type: mrr_at_1 value: 76.613 - type: mrr_at_10 value: 85.15700000000001 - type: mrr_at_100 value: 85.249 - type: mrr_at_1000 value: 85.252 - type: mrr_at_3 value: 84.33800000000001 - type: mrr_at_5 value: 84.89 - type: ndcg_at_1 value: 76.613 - type: ndcg_at_10 value: 84.53399999999999 - type: ndcg_at_100 value: 85.359 - type: ndcg_at_1000 value: 85.607 - type: ndcg_at_3 value: 82.76599999999999 - type: ndcg_at_5 value: 83.736 - type: precision_at_1 value: 76.613 - type: precision_at_10 value: 10.206 - type: precision_at_100 value: 1.083 - type: precision_at_1000 value: 0.11199999999999999 - type: precision_at_3 value: 31.913000000000004 - type: precision_at_5 value: 19.769000000000002 - type: recall_at_1 value: 70.97 - type: recall_at_10 value: 92.674 - type: recall_at_100 value: 95.985 - type: recall_at_1000 value: 97.57000000000001 - type: recall_at_3 value: 87.742 - type: recall_at_5 value: 90.28 - task: type: Retrieval dataset: type: fiqa name: MTEB FiQA2018 config: default split: test revision: None metrics: - type: map_at_1 value: 22.494 - type: map_at_10 value: 36.491 - type: map_at_100 value: 38.550000000000004 - type: map_at_1000 value: 38.726 - type: map_at_3 value: 31.807000000000002 - type: map_at_5 value: 34.299 - 
type: mrr_at_1 value: 44.907000000000004 - type: mrr_at_10 value: 53.146 - type: mrr_at_100 value: 54.013999999999996 - type: mrr_at_1000 value: 54.044000000000004 - type: mrr_at_3 value: 50.952 - type: mrr_at_5 value: 52.124 - type: ndcg_at_1 value: 44.907000000000004 - type: ndcg_at_10 value: 44.499 - type: ndcg_at_100 value: 51.629000000000005 - type: ndcg_at_1000 value: 54.367 - type: ndcg_at_3 value: 40.900999999999996 - type: ndcg_at_5 value: 41.737 - type: precision_at_1 value: 44.907000000000004 - type: precision_at_10 value: 12.346 - type: precision_at_100 value: 1.974 - type: precision_at_1000 value: 0.246 - type: precision_at_3 value: 27.366 - type: precision_at_5 value: 19.846 - type: recall_at_1 value: 22.494 - type: recall_at_10 value: 51.156 - type: recall_at_100 value: 77.11200000000001 - type: recall_at_1000 value: 93.44 - type: recall_at_3 value: 36.574 - type: recall_at_5 value: 42.361 - task: type: Retrieval dataset: type: hotpotqa name: MTEB HotpotQA config: default split: test revision: None metrics: - type: map_at_1 value: 38.568999999999996 - type: map_at_10 value: 58.485 - type: map_at_100 value: 59.358999999999995 - type: map_at_1000 value: 59.429 - type: map_at_3 value: 55.217000000000006 - type: map_at_5 value: 57.236 - type: mrr_at_1 value: 77.137 - type: mrr_at_10 value: 82.829 - type: mrr_at_100 value: 83.04599999999999 - type: mrr_at_1000 value: 83.05399999999999 - type: mrr_at_3 value: 81.904 - type: mrr_at_5 value: 82.50800000000001 - type: ndcg_at_1 value: 77.137 - type: ndcg_at_10 value: 67.156 - type: ndcg_at_100 value: 70.298 - type: ndcg_at_1000 value: 71.65700000000001 - type: ndcg_at_3 value: 62.535 - type: ndcg_at_5 value: 65.095 - type: precision_at_1 value: 77.137 - type: precision_at_10 value: 13.911999999999999 - type: precision_at_100 value: 1.6389999999999998 - type: precision_at_1000 value: 0.182 - type: precision_at_3 value: 39.572 - type: precision_at_5 value: 25.766 - type: recall_at_1 value: 38.568999999999996 - type: recall_at_10 value: 69.56099999999999 - type: recall_at_100 value: 81.931 - type: recall_at_1000 value: 90.91799999999999 - type: recall_at_3 value: 59.358999999999995 - type: recall_at_5 value: 64.416 - task: type: Classification dataset: type: mteb/imdb name: MTEB ImdbClassification config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 88.45600000000002 - type: ap value: 84.09725115338568 - type: f1 value: 88.41874909080512 - task: type: Retrieval dataset: type: msmarco name: MTEB MSMARCO config: default split: dev revision: None metrics: - type: map_at_1 value: 21.404999999999998 - type: map_at_10 value: 33.921 - type: map_at_100 value: 35.116 - type: map_at_1000 value: 35.164 - type: map_at_3 value: 30.043999999999997 - type: map_at_5 value: 32.327 - type: mrr_at_1 value: 21.977 - type: mrr_at_10 value: 34.505 - type: mrr_at_100 value: 35.638999999999996 - type: mrr_at_1000 value: 35.68 - type: mrr_at_3 value: 30.703999999999997 - type: mrr_at_5 value: 32.96 - type: ndcg_at_1 value: 21.963 - type: ndcg_at_10 value: 40.859 - type: ndcg_at_100 value: 46.614 - type: ndcg_at_1000 value: 47.789 - type: ndcg_at_3 value: 33.007999999999996 - type: ndcg_at_5 value: 37.084 - type: precision_at_1 value: 21.963 - type: precision_at_10 value: 6.493 - type: precision_at_100 value: 0.938 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 14.155000000000001 - type: precision_at_5 value: 10.544 - type: recall_at_1 value: 21.404999999999998 - type: recall_at_10 
value: 62.175000000000004 - type: recall_at_100 value: 88.786 - type: recall_at_1000 value: 97.738 - type: recall_at_3 value: 40.925 - type: recall_at_5 value: 50.722 - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (en) config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 93.50661194710442 - type: f1 value: 93.30311193153668 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (en) config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 73.24669402644778 - type: f1 value: 54.23122108002977 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (en) config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 72.61936785474109 - type: f1 value: 70.52644941025565 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (en) config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 76.76529926025555 - type: f1 value: 77.26872729322514 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-p2p name: MTEB MedrxivClusteringP2P config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 33.39450293021839 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-s2s name: MTEB MedrxivClusteringS2S config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 31.757796879839294 - task: type: Reranking dataset: type: mteb/mind_small name: MTEB MindSmallReranking config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 32.62512146657428 - type: mrr value: 33.84624322066173 - task: type: Retrieval dataset: type: nfcorpus name: MTEB NFCorpus config: default split: test revision: None metrics: - type: map_at_1 value: 6.462 - type: map_at_10 value: 14.947 - type: map_at_100 value: 19.344 - type: map_at_1000 value: 20.933 - type: map_at_3 value: 10.761999999999999 - type: map_at_5 value: 12.744 - type: mrr_at_1 value: 47.988 - type: mrr_at_10 value: 57.365 - type: mrr_at_100 value: 57.931 - type: mrr_at_1000 value: 57.96 - type: mrr_at_3 value: 54.85 - type: mrr_at_5 value: 56.569 - type: ndcg_at_1 value: 46.129999999999995 - type: ndcg_at_10 value: 38.173 - type: ndcg_at_100 value: 35.983 - type: ndcg_at_1000 value: 44.507000000000005 - type: ndcg_at_3 value: 42.495 - type: ndcg_at_5 value: 41.019 - type: precision_at_1 value: 47.678 - type: precision_at_10 value: 28.731 - type: precision_at_100 value: 9.232 - type: precision_at_1000 value: 2.202 - type: precision_at_3 value: 39.628 - type: precision_at_5 value: 35.851 - type: recall_at_1 value: 6.462 - type: recall_at_10 value: 18.968 - type: recall_at_100 value: 37.131 - type: recall_at_1000 value: 67.956 - type: recall_at_3 value: 11.905000000000001 - type: recall_at_5 value: 15.097 - task: type: Retrieval dataset: type: nq name: MTEB NQ config: default split: test revision: None metrics: - type: map_at_1 value: 30.335 - type: map_at_10 value: 46.611999999999995 - type: map_at_100 value: 47.632000000000005 - type: map_at_1000 value: 47.661 - type: map_at_3 value: 41.876999999999995 - type: map_at_5 value: 44.799 - type: mrr_at_1 value: 34.125 - type: mrr_at_10 value: 49.01 - type: 
mrr_at_100 value: 49.75 - type: mrr_at_1000 value: 49.768 - type: mrr_at_3 value: 45.153 - type: mrr_at_5 value: 47.589999999999996 - type: ndcg_at_1 value: 34.125 - type: ndcg_at_10 value: 54.777 - type: ndcg_at_100 value: 58.914 - type: ndcg_at_1000 value: 59.521 - type: ndcg_at_3 value: 46.015 - type: ndcg_at_5 value: 50.861000000000004 - type: precision_at_1 value: 34.125 - type: precision_at_10 value: 9.166 - type: precision_at_100 value: 1.149 - type: precision_at_1000 value: 0.121 - type: precision_at_3 value: 21.147 - type: precision_at_5 value: 15.469 - type: recall_at_1 value: 30.335 - type: recall_at_10 value: 77.194 - type: recall_at_100 value: 94.812 - type: recall_at_1000 value: 99.247 - type: recall_at_3 value: 54.681000000000004 - type: recall_at_5 value: 65.86800000000001 - task: type: Retrieval dataset: type: quora name: MTEB QuoraRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 70.62 - type: map_at_10 value: 84.536 - type: map_at_100 value: 85.167 - type: map_at_1000 value: 85.184 - type: map_at_3 value: 81.607 - type: map_at_5 value: 83.423 - type: mrr_at_1 value: 81.36 - type: mrr_at_10 value: 87.506 - type: mrr_at_100 value: 87.601 - type: mrr_at_1000 value: 87.601 - type: mrr_at_3 value: 86.503 - type: mrr_at_5 value: 87.179 - type: ndcg_at_1 value: 81.36 - type: ndcg_at_10 value: 88.319 - type: ndcg_at_100 value: 89.517 - type: ndcg_at_1000 value: 89.60900000000001 - type: ndcg_at_3 value: 85.423 - type: ndcg_at_5 value: 86.976 - type: precision_at_1 value: 81.36 - type: precision_at_10 value: 13.415 - type: precision_at_100 value: 1.529 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 37.342999999999996 - type: precision_at_5 value: 24.534 - type: recall_at_1 value: 70.62 - type: recall_at_10 value: 95.57600000000001 - type: recall_at_100 value: 99.624 - type: recall_at_1000 value: 99.991 - type: recall_at_3 value: 87.22 - type: recall_at_5 value: 91.654 - task: type: Clustering dataset: type: mteb/reddit-clustering name: MTEB RedditClustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 60.826438478212744 - task: type: Clustering dataset: type: mteb/reddit-clustering-p2p name: MTEB RedditClusteringP2P config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 64.24027467551447 - task: type: Retrieval dataset: type: scidocs name: MTEB SCIDOCS config: default split: test revision: None metrics: - type: map_at_1 value: 4.997999999999999 - type: map_at_10 value: 14.267 - type: map_at_100 value: 16.843 - type: map_at_1000 value: 17.229 - type: map_at_3 value: 9.834 - type: map_at_5 value: 11.92 - type: mrr_at_1 value: 24.7 - type: mrr_at_10 value: 37.685 - type: mrr_at_100 value: 38.704 - type: mrr_at_1000 value: 38.747 - type: mrr_at_3 value: 34.150000000000006 - type: mrr_at_5 value: 36.075 - type: ndcg_at_1 value: 24.7 - type: ndcg_at_10 value: 23.44 - type: ndcg_at_100 value: 32.617000000000004 - type: ndcg_at_1000 value: 38.628 - type: ndcg_at_3 value: 21.747 - type: ndcg_at_5 value: 19.076 - type: precision_at_1 value: 24.7 - type: precision_at_10 value: 12.47 - type: precision_at_100 value: 2.564 - type: precision_at_1000 value: 0.4 - type: precision_at_3 value: 20.767 - type: precision_at_5 value: 17.06 - type: recall_at_1 value: 4.997999999999999 - type: recall_at_10 value: 25.3 - type: recall_at_100 value: 52.048 - type: recall_at_1000 value: 81.093 - type: recall_at_3 value: 
12.642999999999999 - type: recall_at_5 value: 17.312 - task: type: STS dataset: type: mteb/sickr-sts name: MTEB SICK-R config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 85.44942006292234 - type: cos_sim_spearman value: 79.80930790660699 - type: euclidean_pearson value: 82.93400777494863 - type: euclidean_spearman value: 80.04664991110705 - type: manhattan_pearson value: 82.93551681854949 - type: manhattan_spearman value: 80.03156736837379 - task: type: STS dataset: type: mteb/sts12-sts name: MTEB STS12 config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 85.63574059135726 - type: cos_sim_spearman value: 76.80552915288186 - type: euclidean_pearson value: 82.46368529820518 - type: euclidean_spearman value: 76.60338474719275 - type: manhattan_pearson value: 82.4558617035968 - type: manhattan_spearman value: 76.57936082895705 - task: type: STS dataset: type: mteb/sts13-sts name: MTEB STS13 config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 86.24116811084211 - type: cos_sim_spearman value: 88.10998662068769 - type: euclidean_pearson value: 87.04961732352689 - type: euclidean_spearman value: 88.12543945864087 - type: manhattan_pearson value: 86.9905224528854 - type: manhattan_spearman value: 88.07827944705546 - task: type: STS dataset: type: mteb/sts14-sts name: MTEB STS14 config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 84.74847296555048 - type: cos_sim_spearman value: 82.66200957916445 - type: euclidean_pearson value: 84.48132256004965 - type: euclidean_spearman value: 82.67915286000596 - type: manhattan_pearson value: 84.44950477268334 - type: manhattan_spearman value: 82.63327639173352 - task: type: STS dataset: type: mteb/sts15-sts name: MTEB STS15 config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 87.23056258027053 - type: cos_sim_spearman value: 88.92791680286955 - type: euclidean_pearson value: 88.13819235461933 - type: euclidean_spearman value: 88.87294661361716 - type: manhattan_pearson value: 88.14212133687899 - type: manhattan_spearman value: 88.88551854529777 - task: type: STS dataset: type: mteb/sts16-sts name: MTEB STS16 config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 82.64179522732887 - type: cos_sim_spearman value: 84.25028809903114 - type: euclidean_pearson value: 83.40175015236979 - type: euclidean_spearman value: 84.23369296429406 - type: manhattan_pearson value: 83.43768174261321 - type: manhattan_spearman value: 84.27855229214734 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (en-en) config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 88.20378955494732 - type: cos_sim_spearman value: 88.46863559173111 - type: euclidean_pearson value: 88.8249295811663 - type: euclidean_spearman value: 88.6312737724905 - type: manhattan_pearson value: 88.87744466378827 - type: manhattan_spearman value: 88.82908423767314 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (en) config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 69.91342028796086 - type: cos_sim_spearman value: 69.71495021867864 - type: 
euclidean_pearson value: 70.65334330405646 - type: euclidean_spearman value: 69.4321253472211 - type: manhattan_pearson value: 70.59743494727465 - type: manhattan_spearman value: 69.11695509297482 - task: type: STS dataset: type: mteb/stsbenchmark-sts name: MTEB STSBenchmark config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 85.42451709766952 - type: cos_sim_spearman value: 86.07166710670508 - type: euclidean_pearson value: 86.12711421258899 - type: euclidean_spearman value: 86.05232086925126 - type: manhattan_pearson value: 86.15591089932126 - type: manhattan_spearman value: 86.0890128623439 - task: type: Reranking dataset: type: mteb/scidocs-reranking name: MTEB SciDocsRR config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 87.1976344717285 - type: mrr value: 96.3703145075694 - task: type: Retrieval dataset: type: scifact name: MTEB SciFact config: default split: test revision: None metrics: - type: map_at_1 value: 59.511 - type: map_at_10 value: 69.724 - type: map_at_100 value: 70.208 - type: map_at_1000 value: 70.22800000000001 - type: map_at_3 value: 66.986 - type: map_at_5 value: 68.529 - type: mrr_at_1 value: 62.333000000000006 - type: mrr_at_10 value: 70.55 - type: mrr_at_100 value: 70.985 - type: mrr_at_1000 value: 71.004 - type: mrr_at_3 value: 68.611 - type: mrr_at_5 value: 69.728 - type: ndcg_at_1 value: 62.333000000000006 - type: ndcg_at_10 value: 74.265 - type: ndcg_at_100 value: 76.361 - type: ndcg_at_1000 value: 76.82900000000001 - type: ndcg_at_3 value: 69.772 - type: ndcg_at_5 value: 71.94800000000001 - type: precision_at_1 value: 62.333000000000006 - type: precision_at_10 value: 9.9 - type: precision_at_100 value: 1.093 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 27.444000000000003 - type: precision_at_5 value: 18 - type: recall_at_1 value: 59.511 - type: recall_at_10 value: 87.156 - type: recall_at_100 value: 96.5 - type: recall_at_1000 value: 100 - type: recall_at_3 value: 75.2 - type: recall_at_5 value: 80.661 - task: type: PairClassification dataset: type: mteb/sprintduplicatequestions-pairclassification name: MTEB SprintDuplicateQuestions config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.81683168316832 - type: cos_sim_ap value: 95.74716566563774 - type: cos_sim_f1 value: 90.64238745574103 - type: cos_sim_precision value: 91.7093142272262 - type: cos_sim_recall value: 89.60000000000001 - type: dot_accuracy value: 99.69405940594059 - type: dot_ap value: 91.09013507754594 - type: dot_f1 value: 84.54227113556779 - type: dot_precision value: 84.58458458458459 - type: dot_recall value: 84.5 - type: euclidean_accuracy value: 99.81782178217821 - type: euclidean_ap value: 95.6324301072609 - type: euclidean_f1 value: 90.58341862845445 - type: euclidean_precision value: 92.76729559748428 - type: euclidean_recall value: 88.5 - type: manhattan_accuracy value: 99.81980198019802 - type: manhattan_ap value: 95.68510494437183 - type: manhattan_f1 value: 90.58945191313342 - type: manhattan_precision value: 93.79014989293361 - type: manhattan_recall value: 87.6 - type: max_accuracy value: 99.81980198019802 - type: max_ap value: 95.74716566563774 - type: max_f1 value: 90.64238745574103 - task: type: Clustering dataset: type: mteb/stackexchange-clustering name: MTEB StackExchangeClustering config: default split: test revision: 
6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 67.63761899427078 - task: type: Clustering dataset: type: mteb/stackexchange-clustering-p2p name: MTEB StackExchangeClusteringP2P config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 36.572473369697235 - task: type: Reranking dataset: type: mteb/stackoverflowdupquestions-reranking name: MTEB StackOverflowDupQuestions config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 53.63000245208579 - type: mrr value: 54.504193722943725 - task: type: Summarization dataset: type: mteb/summeval name: MTEB SummEval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 30.300791939416545 - type: cos_sim_spearman value: 31.662904057924123 - type: dot_pearson value: 26.21198530758316 - type: dot_spearman value: 27.006921548904263 - task: type: Retrieval dataset: type: trec-covid name: MTEB TRECCOVID config: default split: test revision: None metrics: - type: map_at_1 value: 0.197 - type: map_at_10 value: 1.752 - type: map_at_100 value: 10.795 - type: map_at_1000 value: 27.18 - type: map_at_3 value: 0.5890000000000001 - type: map_at_5 value: 0.938 - type: mrr_at_1 value: 74 - type: mrr_at_10 value: 85.833 - type: mrr_at_100 value: 85.833 - type: mrr_at_1000 value: 85.833 - type: mrr_at_3 value: 85.333 - type: mrr_at_5 value: 85.833 - type: ndcg_at_1 value: 69 - type: ndcg_at_10 value: 70.22 - type: ndcg_at_100 value: 55.785 - type: ndcg_at_1000 value: 52.93600000000001 - type: ndcg_at_3 value: 72.084 - type: ndcg_at_5 value: 71.184 - type: precision_at_1 value: 74 - type: precision_at_10 value: 75.2 - type: precision_at_100 value: 57.3 - type: precision_at_1000 value: 23.302 - type: precision_at_3 value: 77.333 - type: precision_at_5 value: 75.6 - type: recall_at_1 value: 0.197 - type: recall_at_10 value: 2.019 - type: recall_at_100 value: 14.257 - type: recall_at_1000 value: 50.922 - type: recall_at_3 value: 0.642 - type: recall_at_5 value: 1.043 - task: type: Retrieval dataset: type: webis-touche2020 name: MTEB Touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 2.803 - type: map_at_10 value: 10.407 - type: map_at_100 value: 16.948 - type: map_at_1000 value: 18.424 - type: map_at_3 value: 5.405 - type: map_at_5 value: 6.908 - type: mrr_at_1 value: 36.735 - type: mrr_at_10 value: 50.221000000000004 - type: mrr_at_100 value: 51.388 - type: mrr_at_1000 value: 51.402 - type: mrr_at_3 value: 47.278999999999996 - type: mrr_at_5 value: 49.626 - type: ndcg_at_1 value: 34.694 - type: ndcg_at_10 value: 25.507 - type: ndcg_at_100 value: 38.296 - type: ndcg_at_1000 value: 49.492000000000004 - type: ndcg_at_3 value: 29.006999999999998 - type: ndcg_at_5 value: 25.979000000000003 - type: precision_at_1 value: 36.735 - type: precision_at_10 value: 22.041 - type: precision_at_100 value: 8.02 - type: precision_at_1000 value: 1.567 - type: precision_at_3 value: 28.571 - type: precision_at_5 value: 24.490000000000002 - type: recall_at_1 value: 2.803 - type: recall_at_10 value: 16.378 - type: recall_at_100 value: 50.489 - type: recall_at_1000 value: 85.013 - type: recall_at_3 value: 6.505 - type: recall_at_5 value: 9.243 - task: type: Classification dataset: type: mteb/toxic_conversations_50k name: MTEB ToxicConversationsClassification config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy 
value: 70.55579999999999 - type: ap value: 14.206982753316227 - type: f1 value: 54.372142814964285 - task: type: Classification dataset: type: mteb/tweet_sentiment_extraction name: MTEB TweetSentimentExtractionClassification config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 56.57611771363893 - type: f1 value: 56.924172639063144 - task: type: Clustering dataset: type: mteb/twentynewsgroups-clustering name: MTEB TwentyNewsgroupsClustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 52.82304915719759 - task: type: PairClassification dataset: type: mteb/twittersemeval2015-pairclassification name: MTEB TwitterSemEval2015 config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 85.92716218632653 - type: cos_sim_ap value: 73.73359122546046 - type: cos_sim_f1 value: 68.42559487116262 - type: cos_sim_precision value: 64.22124508215691 - type: cos_sim_recall value: 73.21899736147758 - type: dot_accuracy value: 80.38981939560112 - type: dot_ap value: 54.61060862444974 - type: dot_f1 value: 53.45710627400769 - type: dot_precision value: 44.87638839125761 - type: dot_recall value: 66.09498680738787 - type: euclidean_accuracy value: 86.02849138701794 - type: euclidean_ap value: 73.95673761922404 - type: euclidean_f1 value: 68.6783042394015 - type: euclidean_precision value: 65.1063829787234 - type: euclidean_recall value: 72.66490765171504 - type: manhattan_accuracy value: 85.9808070572808 - type: manhattan_ap value: 73.9050720058029 - type: manhattan_f1 value: 68.57560618983794 - type: manhattan_precision value: 63.70839936608558 - type: manhattan_recall value: 74.24802110817942 - type: max_accuracy value: 86.02849138701794 - type: max_ap value: 73.95673761922404 - type: max_f1 value: 68.6783042394015 - task: type: PairClassification dataset: type: mteb/twitterurlcorpus-pairclassification name: MTEB TwitterURLCorpus config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 88.72783017037295 - type: cos_sim_ap value: 85.52705223340233 - type: cos_sim_f1 value: 77.91659078492079 - type: cos_sim_precision value: 73.93378032764221 - type: cos_sim_recall value: 82.35294117647058 - type: dot_accuracy value: 85.41739434159972 - type: dot_ap value: 77.17734818118443 - type: dot_f1 value: 71.63473589973144 - type: dot_precision value: 66.96123719622415 - type: dot_recall value: 77.00954727440714 - type: euclidean_accuracy value: 88.68125897465751 - type: euclidean_ap value: 85.47712213906692 - type: euclidean_f1 value: 77.81419950830664 - type: euclidean_precision value: 75.37162649733006 - type: euclidean_recall value: 80.42038805050817 - type: manhattan_accuracy value: 88.67349710870494 - type: manhattan_ap value: 85.46506475241955 - type: manhattan_f1 value: 77.87259084890393 - type: manhattan_precision value: 74.54929577464789 - type: manhattan_recall value: 81.50600554357868 - type: max_accuracy value: 88.72783017037295 - type: max_ap value: 85.52705223340233 - type: max_f1 value: 77.91659078492079 language: - en license: mit --- # gte-large General Text Embeddings (GTE) model. [Towards General Text Embeddings with Multi-stage Contrastive Learning](https://arxiv.org/abs/2308.03281) The GTE models are trained by Alibaba DAMO Academy. 
They are mainly based on the BERT framework and currently offer three different sizes of models, including [GTE-large](https://huggingface.co/thenlper/gte-large), [GTE-base](https://huggingface.co/thenlper/gte-base), and [GTE-small](https://huggingface.co/thenlper/gte-small). The GTE models are trained on a large-scale corpus of relevance text pairs, covering a wide range of domains and scenarios. This enables the GTE models to be applied to various downstream tasks of text embeddings, including **information retrieval**, **semantic textual similarity**, **text reranking**, etc.

## Metrics

We compared the performance of the GTE models with other popular text embedding models on the MTEB benchmark. For more detailed comparison results, please refer to the [MTEB leaderboard](https://huggingface.co/spaces/mteb/leaderboard).

| Model Name | Model Size (GB) | Dimension | Sequence Length | Average (56) | Clustering (11) | Pair Classification (3) | Reranking (4) | Retrieval (15) | STS (10) | Summarization (1) | Classification (12) |
|:----:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| [**gte-large**](https://huggingface.co/thenlper/gte-large) | 0.67 | 1024 | 512 | **63.13** | 46.84 | 85.00 | 59.13 | 52.22 | 83.35 | 31.66 | 73.33 |
| [**gte-base**](https://huggingface.co/thenlper/gte-base) | 0.22 | 768 | 512 | **62.39** | 46.2 | 84.57 | 58.61 | 51.14 | 82.3 | 31.17 | 73.01 |
| [e5-large-v2](https://huggingface.co/intfloat/e5-large-v2) | 1.34 | 1024 | 512 | 62.25 | 44.49 | 86.03 | 56.61 | 50.56 | 82.05 | 30.19 | 75.24 |
| [e5-base-v2](https://huggingface.co/intfloat/e5-base-v2) | 0.44 | 768 | 512 | 61.5 | 43.80 | 85.73 | 55.91 | 50.29 | 81.05 | 30.28 | 73.84 |
| [**gte-small**](https://huggingface.co/thenlper/gte-small) | 0.07 | 384 | 512 | **61.36** | 44.89 | 83.54 | 57.7 | 49.46 | 82.07 | 30.42 | 72.31 |
| [text-embedding-ada-002](https://platform.openai.com/docs/guides/embeddings) | - | 1536 | 8192 | 60.99 | 45.9 | 84.89 | 56.32 | 49.25 | 80.97 | 30.8 | 70.93 |
| [e5-small-v2](https://huggingface.co/intfloat/e5-base-v2) | 0.13 | 384 | 512 | 59.93 | 39.92 | 84.67 | 54.32 | 49.04 | 80.39 | 31.16 | 72.94 |
| [sentence-t5-xxl](https://huggingface.co/sentence-transformers/sentence-t5-xxl) | 9.73 | 768 | 512 | 59.51 | 43.72 | 85.06 | 56.42 | 42.24 | 82.63 | 30.08 | 73.42 |
| [all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) | 0.44 | 768 | 514 | 57.78 | 43.69 | 83.04 | 59.36 | 43.81 | 80.28 | 27.49 | 65.07 |
| [sgpt-bloom-7b1-msmarco](https://huggingface.co/bigscience/sgpt-bloom-7b1-msmarco) | 28.27 | 4096 | 2048 | 57.59 | 38.93 | 81.9 | 55.65 | 48.22 | 77.74 | 33.6 | 66.19 |
| [all-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L12-v2) | 0.13 | 384 | 512 | 56.53 | 41.81 | 82.41 | 58.44 | 42.69 | 79.8 | 27.9 | 63.21 |
| [all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) | 0.09 | 384 | 512 | 56.26 | 42.35 | 82.37 | 58.04 | 41.95 | 78.9 | 30.81 | 63.05 |
| [contriever-base-msmarco](https://huggingface.co/nthakur/contriever-base-msmarco) | 0.44 | 768 | 512 | 56.00 | 41.1 | 82.54 | 53.14 | 41.88 | 76.51 | 30.36 | 66.68 |
| [sentence-t5-base](https://huggingface.co/sentence-transformers/sentence-t5-base) | 0.22 | 768 | 512 | 55.27 | 40.21 | 85.18 | 53.09 | 33.63 | 81.14 | 31.39 | 69.81 |

## Usage

Code example

```python
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel

def average_pool(last_hidden_states: Tensor, attention_mask: Tensor) -> Tensor:
    last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
    return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]

input_texts = [
    "what is the capital of China?",
    "how to implement quick sort in python?",
    "Beijing",
    "sorting algorithms"
]

tokenizer = AutoTokenizer.from_pretrained("thenlper/gte-large")
model = AutoModel.from_pretrained("thenlper/gte-large")

# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')

outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])

# (Optionally) normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:1] @ embeddings[1:].T) * 100
print(scores.tolist())
```

Use with sentence-transformers:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

sentences = ['That is a happy person', 'That is a very happy person']

model = SentenceTransformer('thenlper/gte-large')
embeddings = model.encode(sentences)
print(cos_sim(embeddings[0], embeddings[1]))
```

### Limitation

This model exclusively caters to English texts, and any lengthy texts will be truncated to a maximum of 512 tokens (see the chunking sketch at the end of this card for one way to work with longer inputs).

### Citation

If you find our paper or models helpful, please consider citing them as follows:

```
@article{li2023towards,
  title={Towards general text embeddings with multi-stage contrastive learning},
  author={Li, Zehan and Zhang, Xin and Zhang, Yanzhao and Long, Dingkun and Xie, Pengjun and Zhang, Meishan},
  journal={arXiv preprint arXiv:2308.03281},
  year={2023}
}
```
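Because of the 512-token limit noted above, anything past the first 512 tokens of a long document is simply dropped. One illustrative workaround, not part of the original model card, is to embed overlapping chunks and average the chunk vectors; the stride length and the averaging strategy below are assumptions for the sketch:

```python
# Hypothetical sketch: embedding a document longer than 512 tokens by chunking.
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("thenlper/gte-large")
model = AutoModel.from_pretrained("thenlper/gte-large")

long_text = "an example sentence. " * 1000  # any document longer than 512 tokens

# Split into overlapping 512-token windows using the fast tokenizer's overflow support
enc = tokenizer(long_text, max_length=512, stride=64, padding=True, truncation=True,
                return_overflowing_tokens=True, return_tensors='pt')

with torch.no_grad():
    out = model(input_ids=enc['input_ids'], attention_mask=enc['attention_mask'])

# Mean-pool each chunk (same logic as average_pool above), then average across chunks
mask = enc['attention_mask']
hidden = out.last_hidden_state.masked_fill(~mask[..., None].bool(), 0.0)
chunk_embeddings = hidden.sum(dim=1) / mask.sum(dim=1)[..., None]
doc_embedding = F.normalize(chunk_embeddings.mean(dim=0, keepdim=True), p=2, dim=1)
```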
sentence-transformers/paraphrase-mpnet-base-v2
sentence-transformers
"2024-11-05T15:18:48Z"
899,468
36
sentence-transformers
[ "sentence-transformers", "pytorch", "tf", "onnx", "safetensors", "openvino", "mpnet", "feature-extraction", "sentence-similarity", "transformers", "arxiv:1908.10084", "doi:10.57967/hf/2004", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
"2022-03-02T23:29:05Z"
---
license: apache-2.0
library_name: sentence-transformers
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
pipeline_tag: sentence-similarity
---

# sentence-transformers/paraphrase-mpnet-base-v2

This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search (a short semantic-search sketch is included at the end of this card).

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('sentence-transformers/paraphrase-mpnet-base-v2')
embeddings = model.encode(sentences)
print(embeddings)
```

## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch

# Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/paraphrase-mpnet-base-v2')
model = AutoModel.from_pretrained('sentence-transformers/paraphrase-mpnet-base-v2')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```

## Evaluation Results

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/paraphrase-mpnet-base-v2)

## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```

## Citing & Authors

This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "http://arxiv.org/abs/1908.10084",
}
```
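Since the card names semantic search as a target use case without showing it, here is a minimal sketch using `sentence_transformers.util.semantic_search`; the corpus and query strings are invented for illustration:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('sentence-transformers/paraphrase-mpnet-base-v2')

# Toy corpus and query, purely for illustration
corpus = [
    "A man is eating food.",
    "A monkey is playing drums.",
    "A cheetah is running behind its prey.",
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode("Someone playing a musical instrument", convert_to_tensor=True)

# Retrieve the top-2 most similar corpus entries by cosine similarity
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(corpus[hit['corpus_id']], hit['score'])
```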
unslothai/other
unslothai
"2024-07-07T16:49:26Z"
899,123
0
transformers
[ "transformers", "safetensors", "llama", "feature-extraction", "text-generation-inference", "endpoints_compatible", "region:us" ]
feature-extraction
"2024-07-07T16:49:02Z"
---
library_name: transformers
tags: []
---
BridgeTower/bridgetower-large-itm-mlm-itc
BridgeTower
"2023-03-08T22:33:21Z"
892,598
9
transformers
[ "transformers", "pytorch", "bridgetower", "gaudi", "en", "dataset:conceptual_captions", "dataset:conceptual_12m", "dataset:sbu_captions", "dataset:visual_genome", "dataset:mscoco_captions", "arxiv:2206.08657", "arxiv:1504.00325", "license:mit", "endpoints_compatible", "region:us" ]
null
"2023-02-11T00:25:58Z"
---
language: en
tags:
- bridgetower
- gaudi
license: mit
datasets:
- conceptual_captions
- conceptual_12m
- sbu_captions
- visual_genome
- mscoco_captions
---

# BridgeTower large-itm-mlm-itc model

The BridgeTower model was proposed in "BridgeTower: Building Bridges Between Encoders in Vision-Language Representation Learning" by Xiao Xu, Chenfei Wu, Shachar Rosenman, Vasudev Lal, Wanxiang Che, Nan Duan. The model was pretrained on English text using masked language modeling (MLM) and image-text matching (ITM) objectives. It was introduced in [this paper](https://arxiv.org/pdf/2206.08657.pdf) and first released in [this repository](https://github.com/microsoft/BridgeTower). BridgeTower was accepted to [AAAI'23](https://aaai.org/Conferences/AAAI-23/).

## Model description

The abstract from the paper is the following:

Vision-Language (VL) models with the Two-Tower architecture have dominated visual-language representation learning in recent years. Current VL models either use lightweight uni-modal encoders and learn to extract, align and fuse both modalities simultaneously in a deep cross-modal encoder, or feed the last-layer uni-modal representations from the deep pre-trained uni-modal encoders into the top cross-modal encoder. Both approaches potentially restrict vision-language representation learning and limit model performance. In this paper, we propose BridgeTower, which introduces multiple bridge layers that build a connection between the top layers of uni-modal encoders and each layer of the cross-modal encoder. This enables effective bottom-up cross-modal alignment and fusion between visual and textual representations of different semantic levels of pre-trained uni-modal encoders in the cross-modal encoder. Pre-trained with only 4M images, BridgeTower achieves state-of-the-art performance on various downstream vision-language tasks. In particular, on the VQAv2 test-std set, BridgeTower achieves an accuracy of 78.73%, outperforming the previous state-of-the-art model METER by 1.09% with the same pre-training data and almost negligible additional parameters and computational costs. Notably, when further scaling the model, BridgeTower achieves an accuracy of 81.15%, surpassing models that are pre-trained on orders-of-magnitude larger datasets.
## Intended uses & limitations

### How to use

Here is how to use this model to perform contrastive learning between image and text pairs:

```python
from transformers import BridgeTowerProcessor, BridgeTowerForContrastiveLearning
import requests
from PIL import Image
import torch

image_urls = [
    "https://farm4.staticflickr.com/3395/3428278415_81c3e27f15_z.jpg",
    "http://images.cocodataset.org/val2017/000000039769.jpg"]
texts = [
    "two dogs in a car",
    "two cats sleeping on a couch"]
images = [Image.open(requests.get(url, stream=True).raw) for url in image_urls]

processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-large-itm-mlm-itc")
model = BridgeTowerForContrastiveLearning.from_pretrained("BridgeTower/bridgetower-large-itm-mlm-itc")

inputs = processor(images, texts, padding=True, return_tensors="pt")
outputs = model(**inputs)

inputs = processor(images, texts[::-1], padding=True, return_tensors="pt")
outputs_swapped = model(**inputs)

print('Loss', outputs.loss.item())
# Loss 0.00191505195107311

print('Loss with swapped images', outputs_swapped.loss.item())
# Loss with swapped images 2.1259872913360596
```

Here is how to use this model to perform image and text matching (a short follow-up that turns the raw logits into probabilities appears at the end of this card):

```python
from transformers import BridgeTowerProcessor, BridgeTowerForImageAndTextRetrieval
import requests
from PIL import Image

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = ["An image of two cats chilling on a couch", "A football player scoring a goal"]

processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-large-itm-mlm-gaudi")
model = BridgeTowerForImageAndTextRetrieval.from_pretrained("BridgeTower/bridgetower-large-itm-mlm-gaudi")

# forward pass
scores = dict()
for text in texts:
    # prepare inputs
    encoding = processor(image, text, return_tensors="pt")
    outputs = model(**encoding)
    scores[text] = outputs.logits[0, 1].item()
```

Here is how to use this model to perform masked language modeling:

```python
from transformers import BridgeTowerProcessor, BridgeTowerForMaskedLM
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000360943.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
text = "a <mask> looking out of the window"

processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-large-itm-mlm-gaudi")
model = BridgeTowerForMaskedLM.from_pretrained("BridgeTower/bridgetower-large-itm-mlm-gaudi")

# prepare inputs
encoding = processor(image, text, return_tensors="pt")

# forward pass
outputs = model(**encoding)

results = processor.decode(outputs.logits.argmax(dim=-1).squeeze(0).tolist())

print(results)
# .a cat looking out of the window.
```

## Training data

The BridgeTower model was pretrained on five public image-caption datasets:
- [Conceptual Captions (CC3M)](https://ai.google.com/research/ConceptualCaptions/)
- [Conceptual 12M (CC12M)](https://github.com/google-research-datasets/conceptual-12m)
- [SBU Captions](https://www.cs.rice.edu/~vo9/sbucaptions/)
- [MSCOCO Captions](https://arxiv.org/pdf/1504.00325.pdf)
- [Visual Genome](https://visualgenome.org/)

The total number of unique images in the combined data is around 14M.

## Training procedure

### Pretraining

The model was pre-trained for 10 epochs on an Intel AI supercomputing cluster using 512 Gaudis and 128 Xeons with a batch size of 2048. The optimizer used was AdamW with a learning rate of 1e-7. No data augmentation was used except for center-crop.
The image resolution in pre-training is set to 294 x 294.

## Evaluation results

Please refer to [Table 5](https://arxiv.org/pdf/2206.08657.pdf) for BridgeTower's performance on Image Retrieval and other downstream tasks.

### BibTeX entry and citation info

```bibtex
@article{xu2022bridge,
  title={BridgeTower: Building Bridges Between Encoders in Vision-Language Representation Learning},
  author={Xu, Xiao and Wu, Chenfei and Rosenman, Shachar and Lal, Vasudev and Che, Wanxiang and Duan, Nan},
  journal={arXiv preprint arXiv:2206.08657},
  year={2022}
}
```
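As a small follow-up to the image-and-text matching example above (this step is an illustrative addition, not part of the original card), the raw ITM logits can be converted into match probabilities and ranked; `processor`, `model`, `image`, and `texts` are reused from that example:

```python
# Hypothetical continuation of the matching example:
# convert each text's ITM logits into a match probability and pick the best caption.
import torch

probs = {}
for text in texts:
    encoding = processor(image, text, return_tensors="pt")
    logits = model(**encoding).logits  # shape (1, 2): [no-match, match]
    probs[text] = torch.softmax(logits, dim=1)[0, 1].item()

best_caption = max(probs, key=probs.get)
print(best_caption, probs[best_caption])
```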
WinKawaks/vit-small-patch16-224
WinKawaks
"2023-03-18T22:00:21Z"
877,493
14
transformers
[ "transformers", "pytorch", "safetensors", "vit", "image-classification", "vision", "dataset:imagenet", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2022-03-02T23:29:05Z"
---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
  example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
  example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
  example_title: Palace
---

Google didn't publish vit-tiny and vit-small model checkpoints on Hugging Face. I converted the weights from the [timm repository](https://github.com/rwightman/pytorch-image-models). This model is used in the same way as [ViT-base](https://huggingface.co/google/vit-base-patch16-224).

Note that the safetensors model requires a torch 2.0 environment.
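Since the card states the model is used the same way as ViT-base but includes no code, here is a minimal classification sketch mirroring the standard ViT image-classification API; the sample image URL is one of the widget examples above:

```python
from transformers import AutoImageProcessor, AutoModelForImageClassification
from PIL import Image
import requests
import torch

url = "https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("WinKawaks/vit-small-patch16-224")
model = AutoModelForImageClassification.from_pretrained("WinKawaks/vit-small-patch16-224")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring logit to its ImageNet class name
predicted_class = logits.argmax(-1).item()
print(model.config.id2label[predicted_class])
```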
merve/sam2-hiera-large
merve
"2024-08-02T09:32:00Z"
868,434
2
sam2
[ "sam2", "mask-generation", "license:apache-2.0", "region:us" ]
mask-generation
"2024-08-02T08:43:57Z"
---
license: apache-2.0
pipeline_tag: mask-generation
tags:
- sam2
---

# SAM2-Hiera-large

This repository contains the large variant of the SAM2 model. SAM2 is the state-of-the-art mask generation model released by Meta.

## Usage

You can use it as shown below. First, install the packaged version of SAM2:

```bash
pip install samv2 huggingface_hub
```

Each task requires a different class for inference.

```python
from huggingface_hub import hf_hub_download
from sam2.build_sam import build_sam2
from sam2.sam2_image_predictor import SAM2ImagePredictor

hf_hub_download(repo_id="merve/sam2-hiera-large", filename="sam2_hiera_large.pt", local_dir="./")
sam2_checkpoint = "./sam2_hiera_large.pt"
model_cfg = "sam2_hiera_l.yaml"

sam2_model = build_sam2(model_cfg, sam2_checkpoint, device="cuda", apply_postprocessing=False)
predictor = SAM2ImagePredictor(sam2_model)

# `image` is a PIL image or numpy array you load yourself
# the predictor accepts a COCO-format box: [x1, y1, w, h]
predictor.set_image(image)
masks = predictor.predict(box=box, multimask_output=False)
```

For automatic mask generation:

```python
from huggingface_hub import hf_hub_download
from sam2.build_sam import build_sam2
from sam2.automatic_mask_generator import SAM2AutomaticMaskGenerator

hf_hub_download(repo_id="merve/sam2-hiera-large", filename="sam2_hiera_large.pt", local_dir="./")
sam2_checkpoint = "./sam2_hiera_large.pt"
model_cfg = "sam2_hiera_l.yaml"

sam2 = build_sam2(model_cfg, sam2_checkpoint, device="cuda", apply_postprocessing=False)
mask_generator = SAM2AutomaticMaskGenerator(sam2)
masks = mask_generator.generate(image)
```

## Resources

The team behind SAM2 made example notebooks for all tasks.

- See the [image predictor example](https://github.com/facebookresearch/segment-anything-2/blob/main/notebooks/image_predictor_example.ipynb) for a full example on prompting.
- See the [automatic mask generation example](https://github.com/facebookresearch/segment-anything-2/blob/main/notebooks/automatic_mask_generator_example.ipynb) for generating all masks.
- See the [video object segmentation example](https://github.com/facebookresearch/segment-anything-2/blob/main/notebooks/video_predictor_example.ipynb) for segmenting objects across video frames.
facebook/dpr-ctx_encoder-single-nq-base
facebook
"2022-12-21T15:16:53Z"
866,412
23
transformers
[ "transformers", "pytorch", "tf", "dpr", "en", "dataset:nq_open", "arxiv:2004.04906", "arxiv:1702.08734", "arxiv:1910.09700", "license:cc-by-nc-4.0", "region:us" ]
null
"2022-03-02T23:29:05Z"
---
language: en
license: cc-by-nc-4.0
tags:
- dpr
datasets:
- nq_open
inference: false
---

# `dpr-ctx_encoder-single-nq-base`

## Table of Contents
- [Model Details](#model-details)
- [How To Get Started With the Model](#how-to-get-started-with-the-model)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation-results)
- [Environmental Impact](#environmental-impact)
- [Technical Specifications](#technical-specifications)
- [Citation Information](#citation-information)
- [Model Card Authors](#model-card-authors)

## Model Details

**Model Description:** [Dense Passage Retrieval (DPR)](https://github.com/facebookresearch/DPR) is a set of tools and models for state-of-the-art open-domain Q&A research. `dpr-ctx_encoder-single-nq-base` is the Context Encoder trained using the [Natural Questions (NQ) dataset](https://huggingface.co/datasets/nq_open) ([Lee et al., 2019](https://aclanthology.org/P19-1612/); [Kwiatkowski et al., 2019](https://aclanthology.org/Q19-1026/)).

- **Developed by:** See [GitHub repo](https://github.com/facebookresearch/DPR) for model developers
- **Model Type:** BERT-based encoder
- **Language(s):** English
- **License:** [CC-BY-NC-4.0](https://github.com/facebookresearch/DPR/blob/main/LICENSE), also see [Code of Conduct](https://github.com/facebookresearch/DPR/blob/main/CODE_OF_CONDUCT.md)
- **Related Models:**
  - [`dpr-question-encoder-single-nq-base`](https://huggingface.co/facebook/dpr-question_encoder-single-nq-base)
  - [`dpr-reader-single-nq-base`](https://huggingface.co/facebook/dpr-reader-single-nq-base)
  - [`dpr-ctx_encoder-multiset-base`](https://huggingface.co/facebook/dpr-ctx_encoder-multiset-base)
  - [`dpr-question_encoder-multiset-base`](https://huggingface.co/facebook/dpr-question_encoder-multiset-base)
  - [`dpr-reader-multiset-base`](https://huggingface.co/facebook/dpr-reader-multiset-base)
- **Resources for more information:**
  - [Research Paper](https://arxiv.org/abs/2004.04906)
  - [GitHub Repo](https://github.com/facebookresearch/DPR)
  - [Hugging Face DPR docs](https://huggingface.co/docs/transformers/main/en/model_doc/dpr)
  - [BERT Base Uncased Model Card](https://huggingface.co/bert-base-uncased)

## How to Get Started with the Model

Use the code below to get started with the model.

```python
>>> from transformers import DPRContextEncoder, DPRContextEncoderTokenizer

>>> tokenizer = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
>>> model = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
>>> input_ids = tokenizer("Hello, is my dog cute ?", return_tensors="pt")["input_ids"]
>>> embeddings = model(input_ids).pooler_output
```

## Uses

#### Direct Use

`dpr-ctx_encoder-single-nq-base`, [`dpr-question-encoder-single-nq-base`](https://huggingface.co/facebook/dpr-question_encoder-single-nq-base), and [`dpr-reader-single-nq-base`](https://huggingface.co/facebook/dpr-reader-single-nq-base) can be used for the task of open-domain question answering (see the retrieval sketch at the end of this section).

#### Misuse and Out-of-scope Use

The model should not be used to intentionally create hostile or alienating environments for people. In addition, the set of DPR models was not trained to be factual or true representations of people or events, and therefore using the models to generate such content is out-of-scope for the abilities of this model.
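To make the retrieval setup concrete, here is a minimal sketch pairing this context encoder with the question encoder via inner-product scoring; the passages and question below are invented for illustration:

```python
import torch
from transformers import (
    DPRContextEncoder, DPRContextEncoderTokenizer,
    DPRQuestionEncoder, DPRQuestionEncoderTokenizer,
)

ctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
ctx_encoder = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
q_tokenizer = DPRQuestionEncoderTokenizer.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
q_encoder = DPRQuestionEncoder.from_pretrained("facebook/dpr-question_encoder-single-nq-base")

# Toy passage collection, invented for illustration
passages = [
    "Paris is the capital and most populous city of France.",
    "The Great Wall of China is a series of fortifications in northern China.",
]
question = "What is the capital of France?"

with torch.no_grad():
    ctx_inputs = ctx_tokenizer(passages, padding=True, truncation=True, return_tensors="pt")
    passage_vecs = ctx_encoder(**ctx_inputs).pooler_output  # (num_passages, 768)
    q_inputs = q_tokenizer(question, return_tensors="pt")
    question_vec = q_encoder(**q_inputs).pooler_output      # (1, 768)

# DPR ranks passages by the inner product between question and passage vectors
scores = question_vec @ passage_vecs.T
print(passages[scores.argmax().item()])
```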
## Risks, Limitations and Biases

**CONTENT WARNING: Readers should be aware this section may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.**

Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.

## Training

#### Training Data

This model was trained using the [Natural Questions (NQ) dataset](https://huggingface.co/datasets/nq_open) ([Lee et al., 2019](https://aclanthology.org/P19-1612/); [Kwiatkowski et al., 2019](https://aclanthology.org/Q19-1026/)). The model authors write that:

> [The dataset] was designed for end-to-end question answering. The questions were mined from real Google search queries and the answers were spans in Wikipedia articles identified by annotators.

#### Training Procedure

The training procedure is described in the [associated paper](https://arxiv.org/pdf/2004.04906.pdf):

> Given a collection of M text passages, the goal of our dense passage retriever (DPR) is to index all the passages in a low-dimensional and continuous space, such that it can retrieve efficiently the top k passages relevant to the input question for the reader at run-time.

> Our dense passage retriever (DPR) uses a dense encoder EP(·) which maps any text passage to a d-dimensional real-valued vector and builds an index for all the M passages that we will use for retrieval. At run-time, DPR applies a different encoder EQ(·) that maps the input question to a d-dimensional vector, and retrieves k passages of which vectors are the closest to the question vector.

The authors report that for encoders, they used two independent BERT ([Devlin et al., 2019](https://aclanthology.org/N19-1423/)) networks (base, uncased) and use FAISS ([Johnson et al., 2017](https://arxiv.org/abs/1702.08734)) during inference time to encode and index passages.

See the paper for further details on training, including encoders, inference, positive and negative passages, and in-batch negatives.

## Evaluation

The following evaluation information is extracted from the [associated paper](https://arxiv.org/pdf/2004.04906.pdf).

#### Testing Data, Factors and Metrics

The model developers report the performance of the model on five QA datasets, using the top-k accuracy (k ∈ {20, 100}). The datasets were [NQ](https://huggingface.co/datasets/nq_open), [TriviaQA](https://huggingface.co/datasets/trivia_qa), [WebQuestions (WQ)](https://huggingface.co/datasets/web_questions), [CuratedTREC (TREC)](https://huggingface.co/datasets/trec), and [SQuAD v1.1](https://huggingface.co/datasets/squad).

#### Results

| | Top 20 | | | | | Top 100 | | | | |
|:----:|:------:|:---------:|:--:|:----:|:-----:|:------:|:---------:|:--:|:----:|:-----:|
| | NQ | TriviaQA | WQ | TREC | SQuAD | NQ | TriviaQA | WQ | TREC | SQuAD |
| | 78.4 | 79.4 | 73.2 | 79.8 | 63.2 | 85.4 | 85.0 | 81.4 | 89.1 | 77.2 |

## Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). We present the hardware type based on the [associated paper](https://arxiv.org/abs/2004.04906).
- **Hardware Type:** 8 32GB GPUs
- **Hours used:** Unknown
- **Cloud Provider:** Unknown
- **Compute Region:** Unknown
- **Carbon Emitted:** Unknown

## Technical Specifications

See the [associated paper](https://arxiv.org/abs/2004.04906) for details on the modeling architecture, objective, compute infrastructure, and training details.

## Citation Information

```bibtex
@inproceedings{karpukhin-etal-2020-dense,
    title = "Dense Passage Retrieval for Open-Domain Question Answering",
    author = "Karpukhin, Vladimir and Oguz, Barlas and Min, Sewon and Lewis, Patrick and Wu, Ledell and Edunov, Sergey and Chen, Danqi and Yih, Wen-tau",
    booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.emnlp-main.550",
    doi = "10.18653/v1/2020.emnlp-main.550",
    pages = "6769--6781",
}
```

## Model Card Authors

This model card was written by the team at Hugging Face.
pysentimiento/robertuito-sentiment-analysis
pysentimiento
"2024-07-08T18:21:10Z"
863,738
75
pysentimiento
[ "pysentimiento", "pytorch", "tf", "safetensors", "roberta", "twitter", "sentiment-analysis", "text-classification", "es", "region:us" ]
text-classification
"2022-03-02T23:29:05Z"
---
language:
- es
library_name: pysentimiento
pipeline_tag: text-classification
tags:
- twitter
- sentiment-analysis
---

# Sentiment Analysis in Spanish

## robertuito-sentiment-analysis

Repository: [https://github.com/pysentimiento/pysentimiento/](https://github.com/pysentimiento/pysentimiento/)

Model trained with the TASS 2020 corpus (around 5k tweets) covering several dialects of Spanish. The base model is [RoBERTuito](https://github.com/pysentimiento/robertuito), a RoBERTa model trained on Spanish tweets.

Uses `POS`, `NEG`, `NEU` labels.

## Usage

Use it directly with [pysentimiento](https://github.com/pysentimiento/pysentimiento) (a plain `transformers` alternative is sketched at the end of this card):

```python
from pysentimiento import create_analyzer

analyzer = create_analyzer(task="sentiment", lang="es")

analyzer.predict("Qué gran jugador es Messi")
# returns AnalyzerOutput(output=POS, probas={POS: 0.998, NEG: 0.002, NEU: 0.000})
```

## Results

Results for the four tasks evaluated in `pysentimiento`. Results are expressed as Macro F1 scores.

| model | emotion | hate_speech | irony | sentiment |
|:--------------|:--------------|:--------------|:--------------|:--------------|
| robertuito | 0.560 ± 0.010 | 0.759 ± 0.007 | 0.739 ± 0.005 | 0.705 ± 0.003 |
| roberta | 0.527 ± 0.015 | 0.741 ± 0.012 | 0.721 ± 0.008 | 0.670 ± 0.006 |
| bertin | 0.524 ± 0.007 | 0.738 ± 0.007 | 0.713 ± 0.012 | 0.666 ± 0.005 |
| beto_uncased | 0.532 ± 0.012 | 0.727 ± 0.016 | 0.701 ± 0.007 | 0.651 ± 0.006 |
| beto_cased | 0.516 ± 0.012 | 0.724 ± 0.012 | 0.705 ± 0.009 | 0.662 ± 0.005 |
| mbert_uncased | 0.493 ± 0.010 | 0.718 ± 0.011 | 0.681 ± 0.010 | 0.617 ± 0.003 |
| biGRU | 0.264 ± 0.007 | 0.592 ± 0.018 | 0.631 ± 0.011 | 0.585 ± 0.011 |

Note that for Hate Speech, these are the results for Semeval 2019, Task 5 Subtask B.

## Citation

If you use this model in your research, please cite pysentimiento, RoBERTuito and TASS papers:

```bibtex
@article{perez2021pysentimiento,
  title={pysentimiento: a python toolkit for opinion mining and social NLP tasks},
  author={P{\'e}rez, Juan Manuel and Rajngewerc, Mariela and Giudici, Juan Carlos and Furman, Dami{\'a}n A and Luque, Franco and Alemany, Laura Alonso and Mart{\'\i}nez, Mar{\'\i}a Vanina},
  journal={arXiv preprint arXiv:2106.09462},
  year={2021}
}

@inproceedings{perez-etal-2022-robertuito,
    title = "{R}o{BERT}uito: a pre-trained language model for social media text in {S}panish",
    author = "P{\'e}rez, Juan Manuel and Furman, Dami{\'a}n Ariel and Alonso Alemany, Laura and Luque, Franco M.",
    booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.lrec-1.785",
    pages = "7235--7243",
    abstract = "Since BERT appeared, Transformer language models and transfer learning have become state-of-the-art for natural language processing tasks. Recently, some works geared towards pre-training specially-crafted models for particular domains, such as scientific papers, medical documents, user-generated texts, among others. These domain-specific models have been shown to improve performance significantly in most tasks; however, for languages other than English, such models are not widely available. In this work, we present RoBERTuito, a pre-trained language model for user-generated text in Spanish, trained on over 500 million tweets. Experiments on a benchmark of tasks involving user-generated text showed that RoBERTuito outperformed other pre-trained language models in Spanish. In addition to this, our model has some cross-lingual abilities, achieving top results for English-Spanish tasks of the Linguistic Code-Switching Evaluation benchmark (LinCE) and also competitive performance against monolingual models in English Twitter tasks. To facilitate further research, we make RoBERTuito publicly available at the HuggingFace model hub together with the dataset used to pre-train it.",
}

@inproceedings{garcia2020overview,
  title={Overview of TASS 2020: Introducing emotion detection},
  author={Garc{\'\i}a-Vega, Manuel and D{\'\i}az-Galiano, MC and Garc{\'\i}a-Cumbreras, MA and Del Arco, FMP and Montejo-R{\'a}ez, A and Jim{\'e}nez-Zafra, SM and Mart{\'\i}nez C{\'a}mara, E and Aguilar, CA and Cabezudo, MAS and Chiruzzo, L and others},
  booktitle={Proceedings of the Iberian Languages Evaluation Forum (IberLEF 2020) Co-Located with 36th Conference of the Spanish Society for Natural Language Processing (SEPLN 2020), M{\'a}laga, Spain},
  pages={163--170},
  year={2020}
}
```
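If you prefer plain `transformers` over the pysentimiento wrapper, the checkpoint can also be loaded through a standard text-classification pipeline; this is a minimal sketch, and the printed output shown is illustrative (label strings follow the POS/NEG/NEU scheme described above):

```python
from transformers import pipeline

# Load the underlying checkpoint directly with transformers
classifier = pipeline(
    "text-classification",
    model="pysentimiento/robertuito-sentiment-analysis",
)

print(classifier("Qué gran jugador es Messi"))
# e.g. [{'label': 'POS', 'score': 0.99...}]
```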
theainerd/Wav2Vec2-large-xlsr-hindi
theainerd
"2023-05-31T18:52:14Z"
860,330
5
transformers
[ "transformers", "pytorch", "safetensors", "wav2vec2", "automatic-speech-recognition", "hi", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2022-03-02T23:29:05Z"
---
language:
- hi
---

# Wav2Vec2-Large-XLSR-53-hindi

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Hindi using the [Multilingual and code-switching ASR challenges for low resource Indian languages](https://navana-tech.github.io/IS21SS-indicASRchallenge/data.html) data. When using this model, make sure that your speech input is sampled at 16kHz.

## Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "hi", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("theainerd/Wav2Vec2-large-xlsr-hindi")
model = Wav2Vec2ForCTC.from_pretrained("theainerd/Wav2Vec2-large-xlsr-hindi")

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```

## Evaluation

The model can be evaluated as follows on the Hindi test data of Common Voice.

```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re

test_dataset = load_dataset("common_voice", "hi", split="test")
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("theainerd/Wav2Vec2-large-xlsr-hindi")
model = Wav2Vec2ForCTC.from_pretrained("theainerd/Wav2Vec2-large-xlsr-hindi")
model.to("cuda")

resampler = torchaudio.transforms.Resample(48_000, 16_000)
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

# Run batched inference and collect the predicted transcriptions
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)

print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```

**Test Result**: 72.62 %

## Training

The script used for training can be found here: [Hindi ASR Fine Tuning Wav2Vec2](https://colab.research.google.com/drive/1m-F7et3CHT_kpFqg7UffTIwnUV9AKgrg?usp=sharing)
McGill-NLP/LLM2Vec-Mistral-7B-Instruct-v2-mntp
McGill-NLP
"2024-05-21T22:01:47Z"
853,978
10
transformers
[ "transformers", "safetensors", "mistral", "feature-extraction", "text-embedding", "embeddings", "information-retrieval", "beir", "text-classification", "language-model", "text-clustering", "text-semantic-similarity", "text-evaluation", "text-reranking", "sentence-similarity", "Sentence Similarity", "natural_questions", "ms_marco", "fever", "hotpot_qa", "mteb", "custom_code", "en", "arxiv:2404.05961", "license:mit", "text-generation-inference", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
"2024-04-04T02:59:33Z"
---
library_name: transformers
license: mit
language:
- en
pipeline_tag: sentence-similarity
tags:
- text-embedding
- embeddings
- information-retrieval
- beir
- text-classification
- language-model
- text-clustering
- text-semantic-similarity
- text-evaluation
- text-reranking
- feature-extraction
- sentence-similarity
- Sentence Similarity
- natural_questions
- ms_marco
- fever
- hotpot_qa
- mteb
---

# LLM2Vec: Large Language Models Are Secretly Powerful Text Encoders

> LLM2Vec is a simple recipe to convert decoder-only LLMs into text encoders. It consists of 3 simple steps: 1) enabling bidirectional attention, 2) masked next token prediction, and 3) unsupervised contrastive learning. The model can be further fine-tuned to achieve state-of-the-art performance.

- **Repository:** https://github.com/McGill-NLP/llm2vec
- **Paper:** https://arxiv.org/abs/2404.05961

## Installation

```bash
pip install llm2vec
```

## Usage

```python
from llm2vec import LLM2Vec

import torch
from transformers import AutoTokenizer, AutoModel, AutoConfig
from peft import PeftModel

# Loading base Mistral model, along with custom code that enables bidirectional connections in decoder-only LLMs.
tokenizer = AutoTokenizer.from_pretrained(
    "McGill-NLP/LLM2Vec-Mistral-7B-Instruct-v2-mntp"
)
config = AutoConfig.from_pretrained(
    "McGill-NLP/LLM2Vec-Mistral-7B-Instruct-v2-mntp", trust_remote_code=True
)
model = AutoModel.from_pretrained(
    "McGill-NLP/LLM2Vec-Mistral-7B-Instruct-v2-mntp",
    trust_remote_code=True,
    config=config,
    torch_dtype=torch.bfloat16,
    device_map="cuda" if torch.cuda.is_available() else "cpu",
)

# Loading MNTP (Masked Next Token Prediction) model.
model = PeftModel.from_pretrained(
    model,
    "McGill-NLP/LLM2Vec-Mistral-7B-Instruct-v2-mntp",
)

# Wrapper for encoding and pooling operations
l2v = LLM2Vec(model, tokenizer, pooling_mode="mean", max_length=512)

# Encoding queries using instructions
instruction = (
    "Given a web search query, retrieve relevant passages that answer the query:"
)
queries = [
    [instruction, "how much protein should a female eat"],
    [instruction, "summit define"],
]
q_reps = l2v.encode(queries)

# Encoding documents. Instructions are not required for documents
documents = [
    "As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
    "Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments.",
]
d_reps = l2v.encode(documents)

# Compute cosine similarity
q_reps_norm = torch.nn.functional.normalize(q_reps, p=2, dim=1)
d_reps_norm = torch.nn.functional.normalize(d_reps, p=2, dim=1)
cos_sim = torch.mm(q_reps_norm, d_reps_norm.transpose(0, 1))

print(cos_sim)
"""
tensor([[0.6266, 0.4199],
        [0.3429, 0.5240]])
"""
```

## Questions

If you have any question about the code, feel free to email Parishad (`parishad.behnamghader@mila.quebec`) and Vaibhav (`vaibhav.adlakha@mila.quebec`).
meta-llama/Llama-3.1-405B-Instruct
meta-llama
"2024-09-25T17:02:11Z"
852,950
520
transformers
[ "transformers", "safetensors", "llama", "text-generation", "facebook", "meta", "pytorch", "llama-3", "conversational", "en", "de", "fr", "it", "pt", "hi", "es", "th", "arxiv:2204.05149", "base_model:meta-llama/Llama-3.1-405B", "base_model:finetune:meta-llama/Llama-3.1-405B", "license:llama3.1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-07-16T18:24:44Z"
--- language: - en - de - fr - it - pt - hi - es - th library_name: transformers base_model: meta-llama/Meta-Llama-3.1-405B license: llama3.1 pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-3 extra_gated_prompt: "### LLAMA 3.1 COMMUNITY LICENSE AGREEMENT\nLlama 3.1 Version\ \ Release Date: July 23, 2024\n\"Agreement\" means the terms and conditions for\ \ use, reproduction, distribution and modification of the Llama Materials set forth\ \ herein.\n\"Documentation\" means the specifications, manuals and documentation\ \ accompanying Llama 3.1 distributed by Meta at https://llama.meta.com/doc/overview.\n\ \"Licensee\" or \"you\" means you, or your employer or any other person or entity\ \ (if you are entering into this Agreement on such person or entity’s behalf), of\ \ the age required under applicable laws, rules or regulations to provide legal\ \ consent and that has legal authority to bind your employer or such other person\ \ or entity if you are entering in this Agreement on their behalf.\n\"Llama 3.1\"\ \ means the foundational large language models and software and algorithms, including\ \ machine-learning model code, trained model weights, inference-enabling code, training-enabling\ \ code, fine-tuning enabling code and other elements of the foregoing distributed\ \ by Meta at https://llama.meta.com/llama-downloads.\n\"Llama Materials\" means,\ \ collectively, Meta’s proprietary Llama 3.1 and Documentation (and any portion\ \ thereof) made available under this Agreement.\n\"Meta\" or \"we\" means Meta Platforms\ \ Ireland Limited (if you are located in or, if you are an entity, your principal\ \ place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you\ \ are located outside of the EEA or Switzerland).\n \n1. License Rights and Redistribution.\n\ a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable\ \ and royalty-free limited license under Meta’s intellectual property or other rights\ \ owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy,\ \ create derivative works of, and make modifications to the Llama Materials.\nb.\ \ Redistribution and Use.\ni. If you distribute or make available the Llama Materials\ \ (or any derivative works thereof), or a product or service (including another\ \ AI model) that contains any of them, you shall (A) provide a copy of this Agreement\ \ with any such Llama Materials; and (B) prominently display “Built with Llama”\ \ on a related website, user interface, blogpost, about page, or product documentation.\ \ If you use the Llama Materials or any outputs or results of the Llama Materials\ \ to create, train, fine tune, or otherwise improve an AI model, which is distributed\ \ or made available, you shall also include “Llama” at the beginning of any such\ \ AI model name.\nii. If you receive Llama Materials, or any derivative works thereof,\ \ from a Licensee as part of an integrated end user product, then Section 2 of\ \ this Agreement will not apply to you.\niii. You must retain in all copies of the\ \ Llama Materials that you distribute the following attribution notice within a\ \ “Notice” text file distributed as a part of such copies: “Llama 3.1 is licensed\ \ under the Llama 3.1 Community License, Copyright © Meta Platforms, Inc. All Rights\ \ Reserved.”\niv. 
Your use of the Llama Materials must comply with applicable laws\ \ and regulations (including trade compliance laws and regulations) and adhere to\ \ the Acceptable Use Policy for the Llama Materials (available at https://llama.meta.com/llama3_1/use-policy),\ \ which is hereby incorporated by reference into this Agreement.\n2. Additional\ \ Commercial Terms. If, on the Llama 3.1 version release date, the monthly active\ \ users of the products or services made available by or for Licensee, or Licensee’s\ \ affiliates, is greater than 700 million monthly active users in the preceding\ \ calendar month, you must request a license from Meta, which Meta may grant to\ \ you in its sole discretion, and you are not authorized to exercise any of the\ \ rights under this Agreement unless or until Meta otherwise expressly grants you\ \ such rights.\n3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE\ \ LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS”\ \ BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY\ \ KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES\ \ OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE.\ \ YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING\ \ THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA\ \ MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability. IN NO EVENT\ \ WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN\ \ CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS\ \ AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL,\ \ EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED\ \ OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\na. No\ \ trademark licenses are granted under this Agreement, and in connection with the\ \ Llama Materials, neither Meta nor Licensee may use any name or mark owned by or\ \ associated with the other or any of its affiliates, except as required for reasonable\ \ and customary use in describing and redistributing the Llama Materials or as set\ \ forth in this Section 5(a). Meta hereby grants you a license to use “Llama” (the\ \ “Mark”) solely as required to comply with the last sentence of Section 1.b.i.\ \ You will comply with Meta’s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/\ \ ). All goodwill arising out of your use of the Mark will inure to the benefit\ \ of Meta.\nb. Subject to Meta’s ownership of Llama Materials and derivatives made\ \ by or for Meta, with respect to any derivative works and modifications of the\ \ Llama Materials that are made by you, as between you and Meta, you are and will\ \ be the owner of such derivative works and modifications.\nc. If you institute\ \ litigation or other proceedings against Meta or any entity (including a cross-claim\ \ or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 3.1 outputs\ \ or results, or any portion of any of the foregoing, constitutes infringement of\ \ intellectual property or other rights owned or licensable by you, then any licenses\ \ granted to you under this Agreement shall terminate as of the date such litigation\ \ or claim is filed or instituted. 
You will indemnify and hold harmless Meta from\ \ and against any claim by any third party arising out of or related to your use\ \ or distribution of the Llama Materials.\n6. Term and Termination. The term of\ \ this Agreement will commence upon your acceptance of this Agreement or access\ \ to the Llama Materials and will continue in full force and effect until terminated\ \ in accordance with the terms and conditions herein. Meta may terminate this Agreement\ \ if you are in breach of any term or condition of this Agreement. Upon termination\ \ of this Agreement, you shall delete and cease use of the Llama Materials. Sections\ \ 3, 4 and 7 shall survive the termination of this Agreement.\n7. Governing Law\ \ and Jurisdiction. This Agreement will be governed and construed under the laws\ \ of the State of California without regard to choice of law principles, and the\ \ UN Convention on Contracts for the International Sale of Goods does not apply\ \ to this Agreement. The courts of California shall have exclusive jurisdiction\ \ of any dispute arising out of this Agreement.\n### Llama 3.1 Acceptable Use Policy\n\ Meta is committed to promoting safe and fair use of its tools and features, including\ \ Llama 3.1. If you access or use Llama 3.1, you agree to this Acceptable Use Policy\ \ (“Policy”). The most recent copy of this policy can be found at [https://llama.meta.com/llama3_1/use-policy](https://llama.meta.com/llama3_1/use-policy)\n\ #### Prohibited Uses\nWe want everyone to use Llama 3.1 safely and responsibly.\ \ You agree you will not use, or allow others to use, Llama 3.1 to:\n 1. Violate\ \ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\ \ contribute to, encourage, plan, incite, or further illegal or unlawful activity\ \ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\ \ or harm to children, including the solicitation, creation, acquisition, or dissemination\ \ of child exploitative content or failure to report Child Sexual Abuse Material\n\ \ 3. Human trafficking, exploitation, and sexual violence\n 4. The\ \ illegal distribution of information or materials to minors, including obscene\ \ materials, or failure to employ legally required age-gating in connection with\ \ such information or materials.\n 5. Sexual solicitation\n 6. Any\ \ other criminal activity\n 3. Engage in, promote, incite, or facilitate the\ \ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\ \ 4. Engage in, promote, incite, or facilitate discrimination or other unlawful\ \ or harmful conduct in the provision of employment, employment benefits, credit,\ \ housing, other economic benefits, or other essential goods and services\n 5.\ \ Engage in the unauthorized or unlicensed practice of any profession including,\ \ but not limited to, financial, legal, medical/health, or related professional\ \ practices\n 6. Collect, process, disclose, generate, or infer health, demographic,\ \ or other sensitive personal or private information about individuals without rights\ \ and consents required by applicable laws\n 7. Engage in or facilitate any action\ \ or generate any content that infringes, misappropriates, or otherwise violates\ \ any third-party rights, including the outputs or results of any products or services\ \ using the Llama Materials\n 8. 
Create, generate, or facilitate the creation\ \ of malicious code, malware, computer viruses or do anything else that could disable,\ \ overburden, interfere with or impair the proper working, integrity, operation\ \ or appearance of a website or computer system\n2. Engage in, promote, incite,\ \ facilitate, or assist in the planning or development of activities that present\ \ a risk of death or bodily harm to individuals, including use of Llama 3.1 related\ \ to the following:\n 1. Military, warfare, nuclear industries or applications,\ \ espionage, use for materials or activities that are subject to the International\ \ Traffic Arms Regulations (ITAR) maintained by the United States Department of\ \ State\n 2. Guns and illegal weapons (including weapon development)\n 3.\ \ Illegal drugs and regulated/controlled substances\n 4. Operation of critical\ \ infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm\ \ or harm to others, including suicide, cutting, and eating disorders\n 6. Any\ \ content intended to incite or promote violence, abuse, or any infliction of bodily\ \ harm to an individual\n3. Intentionally deceive or mislead others, including use\ \ of Llama 3.1 related to the following:\n 1. Generating, promoting, or furthering\ \ fraud or the creation or promotion of disinformation\n 2. Generating, promoting,\ \ or furthering defamatory content, including the creation of defamatory statements,\ \ images, or other content\n 3. Generating, promoting, or further distributing\ \ spam\n 4. Impersonating another individual without consent, authorization,\ \ or legal right\n 5. Representing that the use of Llama 3.1 or outputs are human-generated\n\ \ 6. Generating or facilitating false online engagement, including fake reviews\ \ and other means of fake online engagement\n4. Fail to appropriately disclose to\ \ end users any known dangers of your AI system\nPlease report any violation of\ \ this Policy, software “bug,” or other problems that could lead to a violation\ \ of this Policy through one of the following means:\n * Reporting issues with\ \ the model: [https://github.com/meta-llama/llama-models/issues](https://github.com/meta-llama/llama-models/issues)\n\ \ * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n\ \ * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting\ \ violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: LlamaUseReport@meta.com" extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text Job title: type: select options: - Student - Research Graduate - AI researcher - AI developer/engineer - Reporter - Other geo: ip_location ? By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy : checkbox extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit --- ## Model Information The Meta Llama 3.1 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction tuned generative models in 8B, 70B and 405B sizes (text in/text out). 
The Llama 3.1 instruction tuned text only models (8B, 70B, 405B) are optimized for multilingual dialogue use cases and outperform many of the available open source and closed chat models on common industry benchmarks.

**Model developer**: Meta

**Model Architecture:** Llama 3.1 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.

<table>
  <tr>
    <td> </td>
    <td><strong>Training Data</strong></td>
    <td><strong>Params</strong></td>
    <td><strong>Input modalities</strong></td>
    <td><strong>Output modalities</strong></td>
    <td><strong>Context length</strong></td>
    <td><strong>GQA</strong></td>
    <td><strong>Token count</strong></td>
    <td><strong>Knowledge cutoff</strong></td>
  </tr>
  <tr>
    <td rowspan="3">Llama 3.1 (text only)</td>
    <td rowspan="3">A new mix of publicly available online data.</td>
    <td>8B</td>
    <td>Multilingual Text</td>
    <td>Multilingual Text and code</td>
    <td>128k</td>
    <td>Yes</td>
    <td rowspan="3">15T+</td>
    <td rowspan="3">December 2023</td>
  </tr>
  <tr>
    <td>70B</td>
    <td>Multilingual Text</td>
    <td>Multilingual Text and code</td>
    <td>128k</td>
    <td>Yes</td>
  </tr>
  <tr>
    <td>405B</td>
    <td>Multilingual Text</td>
    <td>Multilingual Text and code</td>
    <td>128k</td>
    <td>Yes</td>
  </tr>
</table>

**Supported languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai.

**Llama 3.1 family of models**. Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.

**Model Release Date:** July 23, 2024.

**Status:** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.

**License:** A custom commercial license, the Llama 3.1 Community License, is available at: [https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/LICENSE](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/LICENSE)

**Where to send questions or comments about the model:** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3.1 in applications, please go [here](https://github.com/meta-llama/llama-recipes).

## Intended Use

**Intended Use Cases** Llama 3.1 is intended for commercial and research use in multiple languages. Instruction tuned text only models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. The Llama 3.1 model collection also supports the ability to leverage the outputs of its models to improve other models, including synthetic data generation and distillation. The Llama 3.1 Community License allows for these use cases.

**Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.1 Community License. Use in languages beyond those explicitly referenced as supported in this model card.

**<span style="text-decoration:underline;">Note</span>**: Llama 3.1 has been trained on a broader collection of languages than the 8 supported languages.
Developers may fine-tune Llama 3.1 models for languages beyond the 8 supported languages provided they comply with the Llama 3.1 Community License and the Acceptable Use Policy, and in such cases are responsible for ensuring that any use of Llama 3.1 in additional languages is done in a safe and responsible manner.

## Hardware and Software

**Training Factors** We used custom training libraries, Meta's custom built GPU cluster, and production infrastructure for pretraining. Fine-tuning, annotation, and evaluation were also performed on production infrastructure.

**Training utilized a cumulative** 39.3M GPU hours of computation on H100-80GB (TDP of 700W) type hardware, per the table below. Training time is the total GPU time required for training each model, and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency.

**Training Greenhouse Gas Emissions** Estimated total location-based greenhouse gas emissions were **11,390** tons CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with renewable energy; the total market-based greenhouse gas emissions for training were therefore 0 tons CO2eq.

<table>
  <tr>
    <td> </td>
    <td><strong>Training Time (GPU hours)</strong></td>
    <td><strong>Training Power Consumption (W)</strong></td>
    <td><strong>Training Location-Based Greenhouse Gas Emissions</strong><p><strong>(tons CO2eq)</strong></td>
    <td><strong>Training Market-Based Greenhouse Gas Emissions</strong><p><strong>(tons CO2eq)</strong></td>
  </tr>
  <tr>
    <td>Llama 3.1 8B</td>
    <td>1.46M</td>
    <td>700</td>
    <td>420</td>
    <td>0</td>
  </tr>
  <tr>
    <td>Llama 3.1 70B</td>
    <td>7.0M</td>
    <td>700</td>
    <td>2,040</td>
    <td>0</td>
  </tr>
  <tr>
    <td>Llama 3.1 405B</td>
    <td>30.84M</td>
    <td>700</td>
    <td>8,930</td>
    <td>0</td>
  </tr>
  <tr>
    <td>Total</td>
    <td>39.3M</td>
    <td> </td>
    <td>11,390</td>
    <td>0</td>
  </tr>
</table>

The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others.

## Training Data

**Overview:** Llama 3.1 was pretrained on ~15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 25M synthetically generated examples.

**Data Freshness:** The pretraining data has a cutoff of December 2023.

## Benchmark scores

In this section, we report the results for Llama 3.1 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library.
### Base pretrained models <table> <tr> <td><strong>Category</strong> </td> <td><strong>Benchmark</strong> </td> <td><strong># Shots</strong> </td> <td><strong>Metric</strong> </td> <td><strong>Llama 3 8B</strong> </td> <td><strong>Llama 3.1 8B</strong> </td> <td><strong>Llama 3 70B</strong> </td> <td><strong>Llama 3.1 70B</strong> </td> <td><strong>Llama 3.1 405B</strong> </td> </tr> <tr> <td rowspan="7" >General </td> <td>MMLU </td> <td>5 </td> <td>macro_avg/acc_char </td> <td>66.7 </td> <td>66.7 </td> <td>79.5 </td> <td>79.3 </td> <td>85.2 </td> </tr> <tr> <td>MMLU-Pro (CoT) </td> <td>5 </td> <td>macro_avg/acc_char </td> <td>36.2 </td> <td>37.1 </td> <td>55.0 </td> <td>53.8 </td> <td>61.6 </td> </tr> <tr> <td>AGIEval English </td> <td>3-5 </td> <td>average/acc_char </td> <td>47.1 </td> <td>47.8 </td> <td>63.0 </td> <td>64.6 </td> <td>71.6 </td> </tr> <tr> <td>CommonSenseQA </td> <td>7 </td> <td>acc_char </td> <td>72.6 </td> <td>75.0 </td> <td>83.8 </td> <td>84.1 </td> <td>85.8 </td> </tr> <tr> <td>Winogrande </td> <td>5 </td> <td>acc_char </td> <td>- </td> <td>60.5 </td> <td>- </td> <td>83.3 </td> <td>86.7 </td> </tr> <tr> <td>BIG-Bench Hard (CoT) </td> <td>3 </td> <td>average/em </td> <td>61.1 </td> <td>64.2 </td> <td>81.3 </td> <td>81.6 </td> <td>85.9 </td> </tr> <tr> <td>ARC-Challenge </td> <td>25 </td> <td>acc_char </td> <td>79.4 </td> <td>79.7 </td> <td>93.1 </td> <td>92.9 </td> <td>96.1 </td> </tr> <tr> <td>Knowledge reasoning </td> <td>TriviaQA-Wiki </td> <td>5 </td> <td>em </td> <td>78.5 </td> <td>77.6 </td> <td>89.7 </td> <td>89.8 </td> <td>91.8 </td> </tr> <tr> <td rowspan="4" >Reading comprehension </td> <td>SQuAD </td> <td>1 </td> <td>em </td> <td>76.4 </td> <td>77.0 </td> <td>85.6 </td> <td>81.8 </td> <td>89.3 </td> </tr> <tr> <td>QuAC (F1) </td> <td>1 </td> <td>f1 </td> <td>44.4 </td> <td>44.9 </td> <td>51.1 </td> <td>51.1 </td> <td>53.6 </td> </tr> <tr> <td>BoolQ </td> <td>0 </td> <td>acc_char </td> <td>75.7 </td> <td>75.0 </td> <td>79.0 </td> <td>79.4 </td> <td>80.0 </td> </tr> <tr> <td>DROP (F1) </td> <td>3 </td> <td>f1 </td> <td>58.4 </td> <td>59.5 </td> <td>79.7 </td> <td>79.6 </td> <td>84.8 </td> </tr> </table> ### Instruction tuned models <table> <tr> <td><strong>Category</strong> </td> <td><strong>Benchmark</strong> </td> <td><strong># Shots</strong> </td> <td><strong>Metric</strong> </td> <td><strong>Llama 3 8B Instruct</strong> </td> <td><strong>Llama 3.1 8B Instruct</strong> </td> <td><strong>Llama 3 70B Instruct</strong> </td> <td><strong>Llama 3.1 70B Instruct</strong> </td> <td><strong>Llama 3.1 405B Instruct</strong> </td> </tr> <tr> <td rowspan="4" >General </td> <td>MMLU </td> <td>5 </td> <td>macro_avg/acc </td> <td>68.5 </td> <td>69.4 </td> <td>82.0 </td> <td>83.6 </td> <td>87.3 </td> </tr> <tr> <td>MMLU (CoT) </td> <td>0 </td> <td>macro_avg/acc </td> <td>65.3 </td> <td>73.0 </td> <td>80.9 </td> <td>86.0 </td> <td>88.6 </td> </tr> <tr> <td>MMLU-Pro (CoT) </td> <td>5 </td> <td>micro_avg/acc_char </td> <td>45.5 </td> <td>48.3 </td> <td>63.4 </td> <td>66.4 </td> <td>73.3 </td> </tr> <tr> <td>IFEval </td> <td> </td> <td> </td> <td>76.8 </td> <td>80.4 </td> <td>82.9 </td> <td>87.5 </td> <td>88.6 </td> </tr> <tr> <td rowspan="2" >Reasoning </td> <td>ARC-C </td> <td>0 </td> <td>acc </td> <td>82.4 </td> <td>83.4 </td> <td>94.4 </td> <td>94.8 </td> <td>96.9 </td> </tr> <tr> <td>GPQA </td> <td>0 </td> <td>em </td> <td>34.6 </td> <td>30.4 </td> <td>39.5 </td> <td>46.7 </td> <td>50.7 </td> </tr> <tr> <td rowspan="4" >Code </td> <td>HumanEval </td> <td>0 </td> 
<td>pass@1 </td> <td>60.4 </td> <td>72.6 </td> <td>81.7 </td> <td>80.5 </td> <td>89.0 </td> </tr> <tr> <td>MBPP ++ base version </td> <td>0 </td> <td>pass@1 </td> <td>70.6 </td> <td>72.8 </td> <td>82.5 </td> <td>86.0 </td> <td>88.6 </td> </tr> <tr> <td>Multipl-E HumanEval </td> <td>0 </td> <td>pass@1 </td> <td>- </td> <td>50.8 </td> <td>- </td> <td>65.5 </td> <td>75.2 </td> </tr> <tr> <td>Multipl-E MBPP </td> <td>0 </td> <td>pass@1 </td> <td>- </td> <td>52.4 </td> <td>- </td> <td>62.0 </td> <td>65.7 </td> </tr> <tr> <td rowspan="2" >Math </td> <td>GSM-8K (CoT) </td> <td>8 </td> <td>em_maj1@1 </td> <td>80.6 </td> <td>84.5 </td> <td>93.0 </td> <td>95.1 </td> <td>96.8 </td> </tr> <tr> <td>MATH (CoT) </td> <td>0 </td> <td>final_em </td> <td>29.1 </td> <td>51.9 </td> <td>51.0 </td> <td>68.0 </td> <td>73.8 </td> </tr> <tr> <td rowspan="4" >Tool Use </td> <td>API-Bank </td> <td>0 </td> <td>acc </td> <td>48.3 </td> <td>82.6 </td> <td>85.1 </td> <td>90.0 </td> <td>92.0 </td> </tr> <tr> <td>BFCL </td> <td>0 </td> <td>acc </td> <td>60.3 </td> <td>76.1 </td> <td>83.0 </td> <td>84.8 </td> <td>88.5 </td> </tr> <tr> <td>Gorilla Benchmark API Bench </td> <td>0 </td> <td>acc </td> <td>1.7 </td> <td>8.2 </td> <td>14.7 </td> <td>29.7 </td> <td>35.3 </td> </tr> <tr> <td>Nexus (0-shot) </td> <td>0 </td> <td>macro_avg/acc </td> <td>18.1 </td> <td>38.5 </td> <td>47.8 </td> <td>56.7 </td> <td>58.7 </td> </tr> <tr> <td>Multilingual </td> <td>Multilingual MGSM (CoT) </td> <td>0 </td> <td>em </td> <td>- </td> <td>68.9 </td> <td>- </td> <td>86.9 </td> <td>91.6 </td> </tr> </table> #### Multilingual benchmarks <table> <tr> <td><strong>Category</strong> </td> <td><strong>Benchmark</strong> </td> <td><strong>Language</strong> </td> <td><strong>Llama 3.1 8B</strong> </td> <td><strong>Llama 3.1 70B</strong> </td> <td><strong>Llama 3.1 405B</strong> </td> </tr> <tr> <td rowspan="9" ><strong>General</strong> </td> <td rowspan="9" ><strong>MMLU (5-shot, macro_avg/acc)</strong> </td> <td>Portuguese </td> <td>62.12 </td> <td>80.13 </td> <td>84.95 </td> </tr> <tr> <td>Spanish </td> <td>62.45 </td> <td>80.05 </td> <td>85.08 </td> </tr> <tr> <td>Italian </td> <td>61.63 </td> <td>80.4 </td> <td>85.04 </td> </tr> <tr> <td>German </td> <td>60.59 </td> <td>79.27 </td> <td>84.36 </td> </tr> <tr> <td>French </td> <td>62.34 </td> <td>79.82 </td> <td>84.66 </td> </tr> <tr> <td>Hindi </td> <td>50.88 </td> <td>74.52 </td> <td>80.31 </td> </tr> <tr> <td>Thai </td> <td>50.32 </td> <td>72.95 </td> <td>78.21 </td> </tr> </table> ### Tool use support LLaMA-3.1 supports multiple tool use formats. You can see a full guide to prompt formatting [here](https://llama.meta.com/docs/model-cards-and-prompt-formats/llama3_1/). Tool use is also supported through [chat templates](https://huggingface.co/docs/transformers/main/chat_templating#advanced-tool-use--function-calling) in Transformers. Here is a quick example showing a single simple tool: ```python # First, define a tool def get_current_temperature(location: str) -> float: """ Get the current temperature at a location. Args: location: The location to get the temperature for, in the format "City, Country" Returns: The current temperature at the specified location in the specified units, as a float. """ return 22. # A real function should probably actually get the temperature! 
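# Note (added for clarity): when this function is passed via `tools=` below,
# `apply_chat_template` derives the JSON tool schema sent to the model from the
# function's name, type hints and docstring, so keeping them accurate matters.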
# Next, create a chat and apply the chat template.
# (This assumes a tokenizer has already been loaded for the checkpoint you are
# running, e.g. tokenizer = AutoTokenizer.from_pretrained(...) with a
# Llama 3.1 Instruct model.)
messages = [
    {"role": "system", "content": "You are a bot that responds to weather queries."},
    {"role": "user", "content": "Hey, what's the temperature in Paris right now?"}
]

inputs = tokenizer.apply_chat_template(messages, tools=[get_current_temperature], add_generation_prompt=True)
```

You can then generate text from this input as normal. If the model generates a tool call, you should add it to the chat like so:

```python
tool_call = {"name": "get_current_temperature", "arguments": {"location": "Paris, France"}}
messages.append({"role": "assistant", "tool_calls": [{"type": "function", "function": tool_call}]})
```

and then call the tool and append the result, with the `tool` role, like so:

```python
messages.append({"role": "tool", "name": "get_current_temperature", "content": "22.0"})
```

After that, you can `generate()` again to let the model use the tool result in the chat. Note that this was a very brief introduction to tool calling - for more information, see the [LLaMA prompt format docs](https://llama.meta.com/docs/model-cards-and-prompt-formats/llama3_1/) and the Transformers [tool use documentation](https://huggingface.co/docs/transformers/main/chat_templating#advanced-tool-use--function-calling).

## Responsibility & Safety

As part of our responsible release approach, we followed a three-pronged strategy for managing trust & safety risks:

* Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama.
* Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm.
* Provide protections for the community to help prevent the misuse of our models.

### Responsible deployment

Llama is a foundational technology designed to be used in a variety of use cases; examples of how Meta's Llama models have been responsibly deployed can be found in our [Community Stories webpage](https://llama.meta.com/community-stories/). Our approach is to build the most helpful models, enabling the world to benefit from the power of the technology by aligning our model safety for the generic use cases, addressing a standard set of harms. Developers are then in the driver's seat to tailor safety for their use case, defining their own policy and deploying the models with the necessary safeguards in their Llama systems. Llama 3.1 was developed following the best practices outlined in our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/); refer to it to learn more.

#### Llama 3.1 instruct

Our main objectives for conducting safety fine-tuning are to provide the research community with a valuable resource for studying the robustness of safety fine-tuning, as well as to offer developers a readily available, safe, and powerful model for various applications, reducing the workload needed to deploy safe AI systems. For more details on the safety mitigations implemented, please read the Llama 3 paper.

**Fine-tuning data** We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We've developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control.

**Refusals and Tone** Building on the work we started with Llama 3, we put a great emphasis on model refusals to benign prompts as well as refusal tone.
We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines.

#### Llama 3.1 systems

**Large language models, including Llama 3.1, are not designed to be deployed in isolation but instead should be deployed as part of an overall AI system with additional safety guardrails as required.** Developers are expected to deploy system safeguards when building agentic systems. Safeguards are key to achieving the right helpfulness-safety alignment as well as to mitigating safety and security risks inherent to the system and to any integration of the model or system with external tools.

As part of our responsible release approach, we provide the community with [safeguards](https://llama.meta.com/trust-and-safety/) that developers should deploy with Llama models or other LLMs, including Llama Guard 3, Prompt Guard and Code Shield. All our [reference implementation](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default so developers can benefit from system-level safety out-of-the-box.

#### New capabilities

Note that this release introduces new capabilities, including a longer context window, multilingual inputs and outputs, and possible integrations by developers with third party tools. Building with these new capabilities requires specific considerations in addition to the best practices that generally apply across all Generative AI use cases.

**Tool-use**: Just like in standard software development, developers are responsible for the integration of the LLM with the tools and services of their choice. They should define a clear policy for their use case and assess the integrity of the third party services they use, so they are aware of the safety and security limitations when using this capability. Refer to the Responsible Use Guide for best practices on the safe deployment of third party safeguards.

**Multilinguality**: Llama 3.1 supports 7 languages in addition to English: French, German, Hindi, Italian, Portuguese, Spanish, and Thai. Llama may be able to output text in languages other than those that meet performance thresholds for safety and helpfulness. We strongly discourage developers from using this model to converse in non-supported languages without implementing fine-tuning and system controls in alignment with their policies and the best practices shared in the Responsible Use Guide.

### Evaluations

We evaluated Llama models for common use cases as well as specific capabilities. Common use case evaluations measure safety risks of systems for the most commonly built applications, including chatbots, coding assistants, and tool calls. We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Llama Guard 3 to filter input prompts and output responses. It is important to evaluate applications in context, and we recommend building a dedicated evaluation dataset for your use case. Prompt Guard and Code Shield are also available if relevant to the application.

Capability evaluations measure vulnerabilities of Llama models inherent to specific capabilities, for which we crafted dedicated benchmarks including long context, multilingual, tool calls, coding and memorization.

**Red teaming**

For both scenarios, we conducted recurring red teaming exercises with the goal of discovering risks via adversarial prompting, and we used the learnings to improve our benchmarks and safety tuning datasets.
We partnered early with subject-matter experts in critical risk areas to understand the nature of these real-world harms and how such models may lead to unintended harm for society. Based on these conversations, we derived a set of adversarial goals for the red team to attempt to achieve, such as extracting harmful information or reprogramming the model to act in a potentially harmful capacity. The red team consisted of experts in cybersecurity, adversarial machine learning, responsible AI, and integrity, in addition to multilingual content specialists with backgrounds in integrity issues in specific geographic markets.

### Critical and other risks

We specifically focused our efforts on mitigating the following critical risk areas:

**1. CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive materials) helpfulness**

To assess risks related to proliferation of chemical and biological weapons, we performed uplift testing designed to assess whether use of Llama 3.1 models could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons.

**2. Child Safety**

Child Safety risk assessments were conducted using a team of experts to assess the model's capability to produce outputs that could result in Child Safety risks, and to inform any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective-based methodologies to assess the model risks along multiple attack vectors, including the additional languages Llama 3 is trained on. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market-specific nuances or experiences.

**3. Cyber attack enablement**

Our cyber attack uplift study investigated whether LLMs can enhance human capabilities in hacking tasks, both in terms of skill level and speed.

Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks. This evaluation was distinct from previous studies that considered LLMs as interactive assistants. The primary objective was to assess whether these models could effectively function as independent agents in executing complex cyber-attacks without human intervention.

Our study of Llama-3.1-405B's social engineering uplift for cyber attackers was conducted to assess the effectiveness of AI models in aiding cyber threat actors in spear phishing campaigns. Please read our Llama 3.1 Cyber security whitepaper to learn more.

### Community

Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and are widely distributed across ecosystem partners, including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama).
We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta's Llama model for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists).

Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and a [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.

## Ethical Considerations and Limitations

The core values of Llama 3.1 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3.1 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.

But Llama 3.1 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3.1's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or otherwise objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.1 models, developers should perform safety testing and tuning tailored to their specific applications of the model. Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development.
T-Systems-onsite/cross-en-de-roberta-sentence-transformer
T-Systems-onsite
"2024-09-17T08:22:45Z"
852,369
58
transformers
[ "transformers", "pytorch", "tf", "safetensors", "xlm-roberta", "feature-extraction", "sentence_embedding", "search", "roberta", "xlm-r-distilroberta-base-paraphrase-v1", "paraphrase", "de", "en", "multilingual", "dataset:stsb_multi_mt", "arxiv:1908.10084", "license:mit", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
"2022-03-02T23:29:05Z"
---
language:
- de
- en
- multilingual
license: mit
tags:
- sentence_embedding
- search
- pytorch
- xlm-roberta
- roberta
- xlm-r-distilroberta-base-paraphrase-v1
- paraphrase
datasets:
- stsb_multi_mt
metrics:
- Spearman’s rank correlation
- cosine similarity
---

# Cross English & German RoBERTa for Sentence Embeddings

This model is intended to [compute sentence (text) embeddings](https://www.sbert.net/examples/applications/computing-embeddings/README.html) for English and German text. These embeddings can then be compared with [cosine-similarity](https://en.wikipedia.org/wiki/Cosine_similarity) to find sentences with a similar semantic meaning. For example, this can be useful for [semantic textual similarity](https://www.sbert.net/docs/usage/semantic_textual_similarity.html), [semantic search](https://www.sbert.net/docs/usage/semantic_search.html), or [paraphrase mining](https://www.sbert.net/docs/usage/paraphrase_mining.html). To do this you have to use the [Sentence Transformers Python framework](https://github.com/UKPLab/sentence-transformers).

The speciality of this model is that it also works cross-lingually. Regardless of the language, the sentences are translated into very similar vectors according to their semantics. This means that you can, for example, enter a search in German and find results according to the semantics in German and also in English. Using an XLM model and _multilingual finetuning with language-crossing_, we reach performance that even exceeds the best current dedicated English large model (see Evaluation section below).

> Sentence-BERT (SBERT) is a modification of the pretrained BERT network that uses siamese and triplet network structures to derive semantically meaningful sentence embeddings that can be compared using cosine-similarity. This reduces the effort for finding the most similar pair from 65 hours with BERT / RoBERTa to about 5 seconds with SBERT, while maintaining the accuracy from BERT.

Source: [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084)

This model was fine-tuned by [Philip May](https://may.la/) and open-sourced by [T-Systems-onsite](https://www.t-systems-onsite.de/). Special thanks to [Nils Reimers](https://www.nils-reimers.de/) for his awesome open-source work, the Sentence Transformers, the models and his help on GitHub.

## How to use

To use this model install the `sentence-transformers` package (see here: <https://github.com/UKPLab/sentence-transformers>).

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('T-Systems-onsite/cross-en-de-roberta-sentence-transformer')
# English and German sentences with the same meaning map to similar vectors:
embeddings = model.encode(["This is an example sentence.", "Dies ist ein Beispielsatz."])
```

For details of usage and examples see here:
- [Computing Sentence Embeddings](https://www.sbert.net/docs/usage/computing_sentence_embeddings.html)
- [Semantic Textual Similarity](https://www.sbert.net/docs/usage/semantic_textual_similarity.html)
- [Paraphrase Mining](https://www.sbert.net/docs/usage/paraphrase_mining.html)
- [Semantic Search](https://www.sbert.net/docs/usage/semantic_search.html)
- [Cross-Encoders](https://www.sbert.net/docs/usage/cross-encoder.html)
- [Examples on GitHub](https://github.com/UKPLab/sentence-transformers/tree/master/examples)

## Training

The base model is [xlm-roberta-base](https://huggingface.co/xlm-roberta-base). This model has been further trained by [Nils Reimers](https://www.nils-reimers.de/) on a large scale paraphrase dataset for 50+ languages.
[Nils Reimers](https://www.nils-reimers.de/) wrote about this [on GitHub](https://github.com/UKPLab/sentence-transformers/issues/509#issuecomment-712243280):

>A paper is upcoming for the paraphrase models.
>
>These models were trained on various datasets with millions of examples for paraphrases, mainly derived from Wikipedia edit logs, paraphrases mined from Wikipedia and SimpleWiki, paraphrases from news reports, AllNLI-entailment pairs with in-batch-negative loss etc.
>
>In internal tests, they perform much better than the NLI+STSb models as they have seen more and broader types of training data. NLI+STSb has the issue that they are rather narrow in their domain and do not contain any domain-specific words / sentences (like from chemistry, computer science, math etc.). The paraphrase models have seen plenty of sentences from various domains.
>
>More details with the setup, all the datasets, and a wider evaluation will follow soon.

The resulting model, called `xlm-r-distilroberta-base-paraphrase-v1`, has been released here: <https://github.com/UKPLab/sentence-transformers/releases/tag/v0.3.8>

Building on this cross-language model, we fine-tuned it for English and German on the [STSbenchmark](http://ixa2.si.ehu.es/stswiki/index.php/STSbenchmark) dataset. For German, we used our [German STSbenchmark dataset](https://github.com/t-systems-on-site-services-gmbh/german-STSbenchmark), which has been translated with [deepl.com](https://www.deepl.com/translator). In addition to the German and English training samples, we generated samples with English and German crossed. We call this _multilingual finetuning with language-crossing_. It doubled the training data size, and tests show that it further improves performance.

We did an automatic hyperparameter search for 33 trials with [Optuna](https://github.com/optuna/optuna). Using 10-fold cross-validation on the deepl.com test and dev dataset we found the following best hyperparameters:
- batch_size = 8
- num_epochs = 2
- lr = 1.026343323298136e-05
- eps = 4.462251033010287e-06
- weight_decay = 0.04794438776350409
- warmup_steps_proportion = 0.1609010732760181

The final model was trained with these hyperparameters on the combination of the train and dev datasets from English, German and the crossings of them. The test set was left for testing.

# Evaluation

The evaluation has been done on English, German and both languages crossed with the STSbenchmark test data. The evaluation code is available on [Colab](https://colab.research.google.com/drive/1gtGnKq_dYU_sDYqMohTYVMVpxMJjyH0M?usp=sharing). As the evaluation metric, we use Spearman’s rank correlation between the cosine-similarity of the sentence embeddings and the STSbenchmark labels.
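A minimal sketch of this metric (the sentence pairs and gold scores below are illustrative stand-ins for the STSbenchmark test split; the actual evaluation code is in the Colab linked above):

```python
from scipy.stats import spearmanr
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('T-Systems-onsite/cross-en-de-roberta-sentence-transformer')

# Stand-in data: sentence pairs with human similarity labels (0-5), as in STSbenchmark.
sentences1 = ["A man is playing a guitar.", "Ein Mann spielt Gitarre.", "The weather is nice."]
sentences2 = ["Someone plays an instrument.", "Eine Frau kocht Suppe.", "Das Wetter ist schön."]
gold_scores = [4.0, 0.5, 4.5]

emb1 = model.encode(sentences1)
emb2 = model.encode(sentences2)
cosine_scores = [float(util.cos_sim(a, b)) for a, b in zip(emb1, emb2)]

# Spearman's rank correlation between model cosine similarities and gold labels
print(spearmanr(cosine_scores, gold_scores).correlation)
```

The table below reports these correlations measured on the actual STSbenchmark test sets.
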
| Model Name | Spearman<br/>German | Spearman<br/>English | Spearman<br/>EN-DE & DE-EN<br/>(cross) | |---------------------------------------------------------------|-------------------|--------------------|------------------| | xlm-r-distilroberta-base-paraphrase-v1 | 0.8079 | 0.8350 | 0.7983 | | [xlm-r-100langs-bert-base-nli-stsb-mean-tokens](https://huggingface.co/sentence-transformers/xlm-r-100langs-bert-base-nli-stsb-mean-tokens) | 0.7877 | 0.8465 | 0.7908 | | xlm-r-bert-base-nli-stsb-mean-tokens | 0.7877 | 0.8465 | 0.7908 | | [roberta-large-nli-stsb-mean-tokens](https://huggingface.co/sentence-transformers/roberta-large-nli-stsb-mean-tokens) | 0.6371 | 0.8639 | 0.4109 | | [T-Systems-onsite/<br/>german-roberta-sentence-transformer-v2](https://huggingface.co/T-Systems-onsite/german-roberta-sentence-transformer-v2) | 0.8529 | 0.8634 | 0.8415 | | [paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2) | 0.8355 | **0.8682** | 0.8309 | | **T-Systems-onsite/<br/>cross-en-de-roberta-sentence-transformer** | **0.8550** | 0.8660 | **0.8525** | ## License Copyright (c) 2020 [Philip May](https://philipmay.org), T-Systems on site services GmbH Licensed under the MIT License (the "License"); you may not use this work except in compliance with the License. You may obtain a copy of the License by reviewing the file [LICENSE](https://huggingface.co/T-Systems-onsite/cross-en-de-roberta-sentence-transformer/blob/main/LICENSE) in the repository.
trpakov/vit-face-expression
trpakov
"2023-12-30T14:38:39Z"
849,200
46
transformers
[ "transformers", "pytorch", "onnx", "safetensors", "vit", "image-classification", "doi:10.57967/hf/2289", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2022-11-09T12:50:30Z"
--- {} --- # Vision Transformer (ViT) for Facial Expression Recognition Model Card ## Model Overview - **Model Name:** [trpakov/vit-face-expression](https://huggingface.co/trpakov/vit-face-expression) - **Task:** Facial Expression/Emotion Recognition - **Dataset:** [FER2013](https://www.kaggle.com/datasets/msambare/fer2013) - **Model Architecture:** [Vision Transformer (ViT)](https://huggingface.co/docs/transformers/model_doc/vit) - **Finetuned from model:** [vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) ## Model Description The vit-face-expression model is a Vision Transformer fine-tuned for the task of facial emotion recognition. It is trained on the FER2013 dataset, which consists of facial images categorized into seven different emotions: - Angry - Disgust - Fear - Happy - Sad - Surprise - Neutral ## Data Preprocessing The input images are preprocessed before being fed into the model. The preprocessing steps include: - **Resizing:** Images are resized to the specified input size. - **Normalization:** Pixel values are normalized to a specific range. - **Data Augmentation:** Random transformations such as rotations, flips, and zooms are applied to augment the training dataset. ## Evaluation Metrics - **Validation set accuracy:** 0.7113 - **Test set accuracy:** 0.7116 ## Limitations - **Data Bias:** The model's performance may be influenced by biases present in the training data. - **Generalization:** The model's ability to generalize to unseen data is subject to the diversity of the training dataset.
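## Example Usage

For quick experimentation, here is a minimal inference sketch using the `transformers` pipeline (the image path is a placeholder; use any face image you have locally):

```python
from transformers import pipeline
from PIL import Image

# Load the image-classification pipeline with this checkpoint
classifier = pipeline("image-classification", model="trpakov/vit-face-expression")

image = Image.open("face.jpg")  # stand-in path; replace with a real face image
predictions = classifier(image)  # list of {"label": ..., "score": ...} entries
print(predictions)
```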
depth-anything/Depth-Anything-V2-Small-hf
depth-anything
"2024-07-05T11:38:31Z"
847,105
10
transformers
[ "transformers", "safetensors", "depth_anything", "depth-estimation", "depth", "relative depth", "arxiv:2406.09414", "arxiv:2401.10891", "license:apache-2.0", "endpoints_compatible", "region:us" ]
depth-estimation
"2024-06-18T10:01:15Z"
---
license: apache-2.0
tags:
- depth
- relative depth
pipeline_tag: depth-estimation
library: transformers
widget:
- inference: false
---

# Depth Anything V2 Small – Transformers Version

Depth Anything V2 is trained from 595K synthetic labeled images and 62M+ real unlabeled images, providing the most capable monocular depth estimation (MDE) model with the following features:
- more fine-grained details than Depth Anything V1
- more robust than Depth Anything V1 and SD-based models (e.g., Marigold, Geowizard)
- more efficient (10x faster) and more lightweight than SD-based models
- impressive fine-tuned performance with our pre-trained models

This model checkpoint is compatible with the transformers library.

Depth Anything V2 was introduced in [the paper of the same name](https://arxiv.org/abs/2406.09414) by Lihe Yang et al. It uses the same architecture as the original Depth Anything release, but uses synthetic data and a larger-capacity teacher model to achieve much finer and more robust depth predictions. The original Depth Anything model was introduced in the paper [Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data](https://arxiv.org/abs/2401.10891) by Lihe Yang et al., and was first released in [this repository](https://github.com/LiheYoung/Depth-Anything).

[Online demo](https://huggingface.co/spaces/depth-anything/Depth-Anything-V2).

## Model description

Depth Anything V2 leverages the [DPT](https://huggingface.co/docs/transformers/model_doc/dpt) architecture with a [DINOv2](https://huggingface.co/docs/transformers/model_doc/dinov2) backbone.

The model is trained on ~600K synthetic labeled images and ~62 million real unlabeled images, obtaining state-of-the-art results for both relative and absolute depth estimation.

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/depth_anything_overview.jpg" alt="drawing" width="600"/>

<small> Depth Anything overview. Taken from the <a href="https://arxiv.org/abs/2401.10891">original paper</a>.</small>

## Intended uses & limitations

You can use the raw model for tasks like zero-shot depth estimation. See the [model hub](https://huggingface.co/models?search=depth-anything) to look for other versions on a task that interests you.
### How to use

Here is how to use this model to perform zero-shot depth estimation:

```python
from transformers import pipeline
from PIL import Image
import requests

# load pipe
pipe = pipeline(task="depth-estimation", model="depth-anything/Depth-Anything-V2-Small-hf")

# load image
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

# inference
depth = pipe(image)["depth"]
```

Alternatively, you can use the model and processor classes:

```python
from transformers import AutoImageProcessor, AutoModelForDepthEstimation
import torch
import numpy as np
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

image_processor = AutoImageProcessor.from_pretrained("depth-anything/Depth-Anything-V2-Small-hf")
model = AutoModelForDepthEstimation.from_pretrained("depth-anything/Depth-Anything-V2-Small-hf")

# prepare image for the model
inputs = image_processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)
    predicted_depth = outputs.predicted_depth

# interpolate to original size
prediction = torch.nn.functional.interpolate(
    predicted_depth.unsqueeze(1),
    size=image.size[::-1],
    mode="bicubic",
    align_corners=False,
)

# visualize the prediction by scaling the depth map to an 8-bit grayscale image
output = prediction.squeeze().cpu().numpy()
formatted = (output * 255 / np.max(output)).astype("uint8")
depth_image = Image.fromarray(formatted)
```

For more code examples, please refer to the [documentation](https://huggingface.co/transformers/main/model_doc/depth_anything.html#).

### Citation

```bibtex
@misc{yang2024depth,
      title={Depth Anything V2},
      author={Lihe Yang and Bingyi Kang and Zilong Huang and Zhen Zhao and Xiaogang Xu and Jiashi Feng and Hengshuang Zhao},
      year={2024},
      eprint={2406.09414},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```
sentence-transformers/distiluse-base-multilingual-cased-v2
sentence-transformers
"2024-11-05T16:54:29Z"
840,072
163
sentence-transformers
[ "sentence-transformers", "pytorch", "tf", "onnx", "safetensors", "openvino", "distilbert", "feature-extraction", "sentence-similarity", "multilingual", "ar", "bg", "ca", "cs", "da", "de", "el", "en", "es", "et", "fa", "fi", "fr", "gl", "gu", "he", "hi", "hr", "hu", "hy", "id", "it", "ja", "ka", "ko", "ku", "lt", "lv", "mk", "mn", "mr", "ms", "my", "nb", "nl", "pl", "pt", "ro", "ru", "sk", "sl", "sq", "sr", "sv", "th", "tr", "uk", "ur", "vi", "arxiv:1908.10084", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
"2022-03-02T23:29:05Z"
--- language: - multilingual - ar - bg - ca - cs - da - de - el - en - es - et - fa - fi - fr - gl - gu - he - hi - hr - hu - hy - id - it - ja - ka - ko - ku - lt - lv - mk - mn - mr - ms - my - nb - nl - pl - pt - ro - ru - sk - sl - sq - sr - sv - th - tr - uk - ur - vi license: apache-2.0 library_name: sentence-transformers tags: - sentence-transformers - feature-extraction - sentence-similarity language_bcp47: - fr-ca - pt-br - zh-cn - zh-tw pipeline_tag: sentence-similarity --- # sentence-transformers/distiluse-base-multilingual-cased-v2 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/distiluse-base-multilingual-cased-v2') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/distiluse-base-multilingual-cased-v2) ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Dense({'in_features': 768, 'out_features': 512, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'}) ) ``` ## Citing & Authors This model was trained by [sentence-transformers](https://www.sbert.net/). If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084): ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "http://arxiv.org/abs/1908.10084", } ```
Xenova/bge-base-en-v1.5
Xenova
"2024-10-08T13:42:31Z"
836,471
6
transformers.js
[ "transformers.js", "onnx", "bert", "feature-extraction", "base_model:BAAI/bge-base-en-v1.5", "base_model:quantized:BAAI/bge-base-en-v1.5", "region:us" ]
feature-extraction
"2023-09-13T15:48:03Z"
---
base_model: BAAI/bge-base-en-v1.5
library_name: transformers.js
---

https://huggingface.co/BAAI/bge-base-en-v1.5 with ONNX weights to be compatible with Transformers.js.

## Usage (Transformers.js)

If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@xenova/transformers) using:
```bash
npm i @xenova/transformers
```

You can then use the model to compute embeddings, as follows:

```js
import { pipeline } from '@xenova/transformers';

// Create a feature-extraction pipeline
const extractor = await pipeline('feature-extraction', 'Xenova/bge-base-en-v1.5');

// Compute sentence embeddings
const texts = ['Hello world.', 'Example sentence.'];
const embeddings = await extractor(texts, { pooling: 'mean', normalize: true });
console.log(embeddings);
// Tensor {
//   dims: [ 2, 768 ],
//   type: 'float32',
//   data: Float32Array(1536) [ 0.019079938530921936, 0.041718777269124985, ... ],
//   size: 1536
// }

console.log(embeddings.tolist()); // Convert embeddings to a JavaScript list
// [
//   [ 0.019079938530921936, 0.041718777269124985, 0.037672195583581924, ... ],
//   [ 0.020936904475092888, 0.020080938935279846, -0.00787576474249363, ... ]
// ]
```

You can also use the model for retrieval. For example:

```js
import { pipeline, cos_sim } from '@xenova/transformers';

// Create a feature-extraction pipeline
const extractor = await pipeline('feature-extraction', 'Xenova/bge-base-en-v1.5');

// List of documents you want to embed
const texts = [
    'Hello world.',
    'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.',
    'I love pandas so much!',
];

// Compute sentence embeddings
const embeddings = await extractor(texts, { pooling: 'mean', normalize: true });

// Prepend recommended query instruction for retrieval.
const query_prefix = 'Represent this sentence for searching relevant passages: ';
const query = query_prefix + 'What is a panda?';
const query_embeddings = await extractor(query, { pooling: 'mean', normalize: true });

// Sort by cosine similarity score
const scores = embeddings.tolist().map(
    (embedding, i) => ({
        id: i,
        score: cos_sim(query_embeddings.data, embedding),
        text: texts[i],
    })
).sort((a, b) => b.score - a.score);
console.log(scores);
// [
//   { id: 1, score: 0.7787772374597298, text: 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.' },
//   { id: 2, score: 0.7071589521880506, text: 'I love pandas so much!' },
//   { id: 0, score: 0.4252782730390429, text: 'Hello world.' }
// ]
```

Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
microsoft/wavlm-large
microsoft
"2022-02-02T21:21:50Z"
836,222
59
transformers
[ "transformers", "pytorch", "wavlm", "feature-extraction", "speech", "en", "arxiv:1912.07875", "arxiv:2106.06909", "arxiv:2101.00390", "arxiv:2110.13900", "region:us" ]
feature-extraction
"2022-03-02T23:29:05Z"
---
language:
- en
tags:
- speech
inference: false
---

# WavLM-Large

[Microsoft's WavLM](https://github.com/microsoft/unilm/tree/master/wavlm)

The large model pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.

**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for a more detailed explanation of how to fine-tune the model.

The model was pre-trained on:

- 60,000 hours of [Libri-Light](https://arxiv.org/abs/1912.07875)
- 10,000 hours of [GigaSpeech](https://arxiv.org/abs/2106.06909)
- 24,000 hours of [VoxPopuli](https://arxiv.org/abs/2101.00390)

[Paper: WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900)

Authors: Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei

**Abstract**
*Self-supervised learning (SSL) achieves great success in speech recognition, while limited exploration has been attempted for other speech processing tasks. As speech signal contains multi-faceted information including speaker identity, paralinguistics, spoken content, etc., learning universal representations for all speech tasks is challenging. In this paper, we propose a new pre-trained model, WavLM, to solve full-stack downstream speech tasks. WavLM is built based on the HuBERT framework, with an emphasis on both spoken content modeling and speaker identity preservation. We first equip the Transformer structure with gated relative position bias to improve its capability on recognition tasks. For better speaker discrimination, we propose an utterance mixing training strategy, where additional overlapped utterances are created unsupervisely and incorporated during model training. Lastly, we scale up the training dataset from 60k hours to 94k hours. WavLM Large achieves state-of-the-art performance on the SUPERB benchmark, and brings significant improvements for various speech processing tasks on their representative benchmarks.*

The original model can be found under https://github.com/microsoft/unilm/tree/master/wavlm.

# Usage

This is an English pre-trained speech model that has to be fine-tuned on a downstream task like speech recognition or audio classification before it can be used for inference. The model was pre-trained in English and should therefore perform well only in English. The model has been shown to work well on the [SUPERB benchmark](https://superbbenchmark.org/).

**Note**: The model was pre-trained on phonemes rather than characters. This means that one should make sure that the input text is converted to a sequence of phonemes before fine-tuning.

## Speech Recognition

To fine-tune the model for speech recognition, see [the official speech recognition example](https://github.com/huggingface/transformers/tree/master/examples/pytorch/speech-recognition).

## Speech Classification

To fine-tune the model for speech classification, see [the official audio classification example](https://github.com/huggingface/transformers/tree/master/examples/pytorch/audio-classification).
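## Feature Extraction

Although downstream use requires fine-tuning, the pretrained model can be used directly to extract frame-level speech representations. A minimal sketch (assuming the checkpoint ships a preprocessor config so `AutoFeatureExtractor` resolves; otherwise construct a `Wav2Vec2FeatureExtractor` manually; the zero waveform is a stand-in for real 16kHz speech):

```python
import torch
from transformers import AutoFeatureExtractor, WavLMModel

feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/wavlm-large")
model = WavLMModel.from_pretrained("microsoft/wavlm-large")

# One second of silence as a stand-in; replace with real speech sampled at 16kHz
waveform = torch.zeros(16000)

inputs = feature_extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # shape: (batch, frames, hidden_size)
print(hidden_states.shape)
```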
## Speaker Verification

TODO

## Speaker Diarization

TODO

# Contribution

The model was contributed by [cywang](https://huggingface.co/cywang) and [patrickvonplaten](https://huggingface.co/patrickvonplaten).

# License

The official license can be found [here](https://github.com/microsoft/UniSpeech/blob/main/LICENSE)

![design](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/wavlm.png)
MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli
MoritzLaurer
"2024-04-11T13:47:27Z"
831,938
183
transformers
[ "transformers", "pytorch", "safetensors", "deberta-v2", "text-classification", "zero-shot-classification", "en", "dataset:multi_nli", "dataset:facebook/anli", "dataset:fever", "arxiv:2006.03654", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
zero-shot-classification
"2022-03-02T23:29:04Z"
--- language: - en license: mit tags: - text-classification - zero-shot-classification datasets: - multi_nli - facebook/anli - fever metrics: - accuracy pipeline_tag: zero-shot-classification model-index: - name: MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli results: - task: type: natural-language-inference name: Natural Language Inference dataset: name: anli type: anli config: plain_text split: test_r3 metrics: - type: accuracy value: 0.495 name: Accuracy verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYWViYjQ5YTZlYjU4NjQyN2NhOTVhNjFjNGQyMmFiNmQyZjRkOTdhNzJmNjc3NGU4MmY0MjYyMzY5MjZhYzE0YiIsInZlcnNpb24iOjF9.S8pIQ7gEGokd_wKXMi6Bc3B2DThIP3cvVkTFErZ-2JxXTSCy1TBuulY3dzGfaiP7kTHbL52OuBhG_-wb7Ue9DQ - type: precision value: 0.4984740618243923 name: Precision Macro verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOTllZDU3NmVmYjk4ZmYzNjAwNzExMGZjNDMzOWRkZjRjMTRhNzhlZmI0ZmNlM2E0Mzk4OWE5NTM5MTYyYWU5NCIsInZlcnNpb24iOjF9.WHz_TUJgPVn-rU-9vBCDdmSMOuWzADwr09rJY6ktqRM46zytbyWs7Vcm7jqDrTkfU-rp0_7IyoNv_xEsKhJbBA - type: precision value: 0.495 name: Precision Micro verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZjllODE3ZjUxZDhiMTI0MzZmYjY5OTUwYWI2OTc4ZjJhNTVjMjY2ODdkMmJlZjQ5YWQ1Mjk2ZThmYjJlM2RlYSIsInZlcnNpb24iOjF9.a9V06-O7l9S0Bv4vj0aard8128SAP61DZdXl_3XqdmNgt_C6KAoDBVueF2M2kF_kT6lRfEz6YW0ACIfJNXDYAA - type: precision value: 0.4984357572868885 name: Precision Weighted verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNjhiMzYzY2JiMmYwN2YxYzEwZTQ3NGI1NzFmMzliNjJkMDE2YzI5Njg1ZjEzMGIxODdiMDNmYmI4Y2Y2MmJkMiIsInZlcnNpb24iOjF9.xvZZaUMogw9MJjb3ls6h5liDlTqHMmNgqk6KbyDqQWfCcD255brCU3Xo6nECwaChS4te0dQu_iWGBqR_o2kYAA - type: recall value: 0.49461028192371476 name: Recall Macro verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZDVjYTEzOTI0ZjVhOTk3ZTkzZmZhNTk5ODcxMWJhYWU4ZTRjYWVhNzcwOWY5YmI2NGFlYWE4NjM5MDY5NTExOSIsInZlcnNpb24iOjF9.xgHCB2rbCQBzHzUokw4u8JyOdhtF4yvPv1t8t7YiEkaAuM5MAPsVuCZ1VtlLapHS_IWetlocizsVl6akjh3cAQ - type: recall value: 0.495 name: Recall Micro verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYTEyYmM0ZDQ0M2RiMDNhNjIxNzQ4OWZiNTBiOTAwZDFkNjNmYjBhNjA4NmQ0NjFkNmNiZTljNDkxNDg3NzIyYSIsInZlcnNpb24iOjF9.3FJPwNtwgFNvMjVxVAayaVXXR1sWlr0sqAYmXzmMzMxl7IJh6RS77dGPwFaqD3jamLVBiqPn9wsfz5lFK5yTAA - type: recall value: 0.495 name: Recall Weighted verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNmY1MjZlZTQ4OTg5YzdlYmFhZDMzMmNlNjNkYmIyZGI4M2NjZjQ1ZDVkNmZkMTUxNjI3M2UwZmI1MDM1NDYwOSIsInZlcnNpb24iOjF9.cnbM6xjTLRa9z0wEDGd_Q4lTXVLRKIQ6_YLGLjf-t7Nto4lzxAeWF-RrwA0Mq9OPITlJq2Jk1Eg_0Utb13d9Dg - type: f1 value: 0.4942810999491704 name: F1 Macro verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiN2U3NGM1MDM4YTM4NzQxMGM4ZTIyZDM2YTQ1MGNlZWM1MzEzM2MxN2ZmZmRmYTM0OWJmZGJjYjM5OWEzMmZjNSIsInZlcnNpb24iOjF9.vMtge1F-tmMn9D3aVUuwcNEXjqpNgEyHAl9f5UDSoTYcOgTwi2vi5yRGRCl8y6Fx7BtgaCwMyoZVNbP5-GRtCA - type: f1 value: 0.495 name: F1 Micro verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNjBjMTQ5MmQ5OGE5OWJjZGMyNzg4N2RmNDUzMzQ5Zjc4ZTc4N2JlMTk0MTc2M2RjZTgzOTNlYWQzODAwNDI0NCIsInZlcnNpb24iOjF9.yxXG0CNWW8__xJC14BjbTY9QkXD75x6uCIXR51oKDemkP0b_xGyd-A2wPIuwNJN1EYkQevPY0bhVpRWBKyO9Bg - type: f1 value: 0.4944671868893595 name: F1 Weighted verified: true verifyToken: 
eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzczNjQzY2FmMmY4NTAwYjNkYjJlN2I2NjI2Yjc0ZmQ3NjZiN2U5YWEwYjk4OTUyOTMzZTYyZjYzOTMzZGU2YiIsInZlcnNpb24iOjF9.mLOnst2ScPX7ZQwaUF12W2nv7-w9lX9-BxHl3-0T0gkSWnmtBSwYcL5faTX0_I5q33Fjz5tfkjpCJuxP5JYIBQ - type: loss value: 1.8788293600082397 name: loss verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzRlOTYwYjU1Y2Y4ZGM0NDBjYTE2MmEzNWIwN2NiMWVkOWZlNzA2ZmQ3YjZjNzI4MjQwYWZhODIwMzU3ODAyZiIsInZlcnNpb24iOjF9._Xs9bl48MSavvp5eyamrP2iNlFWv35QZCrmWjJXLkUdIBx0ElCjEdxBb3dxPGnUxdpDzGMmOoKCPI44ZPXrtDw - task: type: natural-language-inference name: Natural Language Inference dataset: name: anli type: anli config: plain_text split: test_r1 metrics: - type: accuracy value: 0.712 name: Accuracy verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYWYxMGY0ZWU0YTEyY2I3NmQwZmQ3YmFmNzQxNGU5OGNjN2ViN2I0ZjdkYWUzM2RmYzkzMDg3ZjVmNGYwNGZkZCIsInZlcnNpb24iOjF9.snWBusAeo1rrQqWk--vTxb-CBcFqM298YCtwTQGBZiFegKGSTSKzj-SM6HMNsmoQWmMuv7UfYPqYlnzEthOSAg - type: precision value: 0.7134839439315348 name: Precision Macro verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNjMxMjg1Y2QwNzMwM2ZkNGM3ZTJhOGJmY2FkNGI1ZTFhOGQ3ODViNTJmZTYwMWJkZDYyYWRjMzFmZDI1NTM5YSIsInZlcnNpb24iOjF9.ZJnY6zYOBn-YEtN7uKzQ-VKXPwlIO1zq19Yuo37vBJNSs1dGDd8f1jgfdZuA19e_wA3Nc5nQKe9VXRwPHPgwAQ - type: precision value: 0.712 name: Precision Micro verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZWM4YWQyODBlYTIwMWQxZDA1NmY1M2M2ODgwNDJiY2RhMDVhYTlkMDUzZTJkMThkYzRmNDg2YTdjMjczNGUwOCIsInZlcnNpb24iOjF9.SogsKHdbdlEs05IBYwXvlnaC_esg-DXAPc2KPRyHaVC5ItVHbxa63NpybSpao4baOoMlLG9aRe7TjG4gtB2dAQ - type: precision value: 0.7134676028447461 name: Precision Weighted verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiODdjMzFkM2IwNWZiM2I4ZWViMmQ4NWM5MDY5ZWQxZjc1MGRmNjhmNzJhYWFmOWEwMjg3ZjhiZWM3YjlhOTIxNSIsInZlcnNpb24iOjF9._0JNIbiqLuDZrp_vrCljBe28xexZJPmigLyhkcO8AtH2VcNxWshwCpZuRF4bqvpMvnApJeuGMf3vXjCj0MC1Bw - type: recall value: 0.7119814425203647 name: Recall Macro verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYjU4MWEyMzkyYzg1ZTIxMTc0M2NhMTgzOGEyZmY5OTg3M2Q1ZmMwNmU3ZmU1ZjA1MDk0OGZkMzM5NDVlZjBlNSIsInZlcnNpb24iOjF9.sZ3GTcmGGthpTLL7_Zovq8aBmE3Dp_PZi5v8ZI9yG9N6B_GjWvBuPC8ENXK1NwmwiHLsSvtKTG5JmAum-su0Dg - type: recall value: 0.712 name: Recall Micro verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZDg3NGViZTlmMWM2ZDNhMzIzZGZkYWZhODQxNzg2MjNiNjQ0Zjg0NjQ1OWZkY2I5ODdiY2Y3Y2JjNzRmYjJkMiIsInZlcnNpb24iOjF9.bCZUzJamsozKWehnNph6E5coww5zZTrJdbWevWrSyfT0PyXc_wkZ-NKdyBAoqprBz3_8L3i5hPM6Qsy56b4BDA - type: recall value: 0.712 name: Recall Weighted verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMDk1MDJiOGUzZThlZjJjMzY4NjMzODFiZjUzZmIwMjIxY2UwNzBiN2IxMWEwMGJjZTkxODA0YzUxZDE3ODRhOCIsInZlcnNpb24iOjF9.z0dqvB3aBVYt3xRIb_M4svWebfQc0QaDFVFzHnlA5QGEHkHOW3OecGhHE4EzBqTDI3DASWZTGMjrMDDt0uOMBw - type: f1 value: 0.7119226991285647 name: F1 Macro verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiM2U0YjMwNzhmOTEyNDZhODU3MTU0YTM4MmQ0NzEzNWI1YjY0ZWQ3MWRiMTdiNTUzNWRkZThjMWE4M2NkZmI0MiIsInZlcnNpb24iOjF9.hhj1BXkuWi9wXrCjT9NwqaPETtOoYNiyqYsJEw-ufA8A4hVThKA6ZBtma1Q_M65-DZFfPEBDBNASLZ7EPSbmDw - type: f1 value: 0.712 name: F1 Micro verified: true verifyToken: 
eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiODk0Y2EyMzc5M2ZlNWFlNDg2Zjc1OTQxNGY3YjA5YjUxYTYzZjRlZmU4ODYxNjA3ZjkxNGUzYjBmNmMxMzY5YiIsInZlcnNpb24iOjF9.DvKk-3hNh2LhN2ug5e0FgUntL3Ozdfl06Kz7jvmB-deOJH6INi2a2ZySXoEePoo8t2nR6ENFYu9QjMA2ojnpCA - type: f1 value: 0.7119242267218338 name: F1 Weighted verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiN2MxOWFlMmI2NGRiMjkwN2Q5MWZhNDFlYzQxNWNmNzQ3OWYxZThmNDU2OWU1MTE5OGY2MWRlYWUyNDM3OTkzZCIsInZlcnNpb24iOjF9.QrTD1gE8_wRok9u59W-Mx0cX89K-h2Ad6qa8J5rmP8lc_rkG0ft2n5_GqH1CBZBJwMFYv91Pn6TuE3eGxJuUDA - type: loss value: 1.0105403661727905 name: loss verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMmUwMTg4NjM3ZTBiZTIyODcyNDNmNTE5ZDZhMzNkMDMyNjcwOGQ5NmY0NTlhMjgyNmIzZjRiNDFiNjA3M2RkZSIsInZlcnNpb24iOjF9.sjBDVJV-jnygwcppmByAXpoo-Wzz178bBzozJEuYEiJaHSbk_xEevfJS1PmLUuplYslKb1iyEctnjI-5bl-XDw - task: type: natural-language-inference name: Natural Language Inference dataset: name: multi_nli type: multi_nli config: default split: validation_mismatched metrics: - type: accuracy value: 0.902766476810415 name: Accuracy verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMjExZWM3YzA3ZDNlNjEwMmViNWEwZTE3MjJjNjEyNDhjOTQxNGFmMzBjZTk0ODUwYTc2OGNiZjYyMTBmNWZjZSIsInZlcnNpb24iOjF9.zbFAGrv2flpmweqS7Poxib7qHFLdW8eUTzshdOm2B9H-KWpIZCWC-P4p8TLMdNJnUcZJZ03Okil4qjIMqqIRCA - type: precision value: 0.9023816542652491 name: Precision Macro verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiN2U2MGViNmJjNWQxNzRjOTkxNDIxZjZjNmM5YzE4ZjU5NTE5NjFlNmEzZWRlOGYxN2E3NTAwMTEwYjNhNzE0YSIsInZlcnNpb24iOjF9.WJjDJf56FROvf7Y5ShWnnxMvK_ZpQ2PibAOtSFhSiYJ7bt4TGOzMwaZ5RSTf_mcfXgRfWbXmy1jCwNhDb-5EAw - type: precision value: 0.902766476810415 name: Precision Micro verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYzRhZTExOTc5NDczZjI1YmMzOGYyOTU2MDU1OGE5ZTczMDE0MmU0NzZhY2YzMDI1ZGQ3MGM5MmJiODFkNzUzZiIsInZlcnNpb24iOjF9.aRYcGEI1Y8-a0d8XOoXhBgsFyj9LWNwEjoIPc594y7kJn91wXIsXoR0-_0iy3uz41mWaTTlwJx7lI-kipFDvDQ - type: precision value: 0.9034597464719761 name: Precision Weighted verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMWQyMTZiZDA2OTUwZjRmNTFiMWRlZTNmOTliZmI2MWFmMjdjYzEyYTgwNzkyOTQzOTBmNTUyYjMwNTUxMTFkNiIsInZlcnNpb24iOjF9.hUtAMTl0THHUkaLcgk1Vy9IhjqJAXCJ_5STJ5A7k7s_SO9DHp3b6qusgwPmcGLYyPy1-j1dB2AIstxK4tHfmDA - type: recall value: 0.9024304801555488 name: Recall Macro verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzAxZGJhNGI3ZDNlMjg2ZDIxNTgwMDY5MTFjM2ExZmIxMDBmZjUyNTliNWNkOGI0OTY3NTYyNWU3OWFlYTA3YiIsInZlcnNpb24iOjF9.1o_GNq8zmXa_50MUF_K63IDc2aUKNeUkNQ5fT592-SAo8WgiaP9Dh6bOEu2OqrpRQ57P4qm7OdJt7UKsrosMDA - type: recall value: 0.902766476810415 name: Recall Micro verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZjhiMWE4Yjk0ODFkZjlkYjRlMjU1OTJmMjA2Njg1N2M4MzQ0OWE3N2FlYjY4NDgxZThjMmExYWQ5OGNmYmI1NSIsInZlcnNpb24iOjF9.Gmm5lf_qpxjXWWrycDze7LHR-6WGQc62WZTmcoc5uxWd0tivEUqCAFzFdbEU1jVKxQBIyDX77CPuBm7mUA4sCg - type: recall value: 0.902766476810415 name: Recall Weighted verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiY2EzZWYwNjNkYWE1YTcyZGZjNTNhMmNlNzgzYjk5MGJjOWJmZmE5NmYwM2U2NTA5ZDY3ZjFiMmRmZmQwY2QwYiIsInZlcnNpb24iOjF9.yA68rslg3e9kUR3rFTNJJTAad6Usr4uFmJvE_a7G2IvSKqLxG_pqsHszsWfg5mFBQLjWEAyCtdQYMdVayuYMBA - type: f1 value: 0.9023086094638595 name: F1 Macro verified: true verifyToken: 
eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzMyMzZhNjI5MWRmZWJhMjkzN2E0MjM4ZTM5YzZmNTk5YTZmYzU4NDRiYjczZGQ4MDdhNjJiMGU0MjE3NDEwNyIsInZlcnNpb24iOjF9.RCMqH_xUMN97Vos54pTFfAMbLstXUMdFTs-eNaypbDb_Fc-MW8NLmJ6dzJsp9sSvhXyYjugjRMUpMpnQseKXDA - type: f1 value: 0.902766476810415 name: F1 Micro verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZTYxZTZhZGM0NThlNTAzNmYwMTA4NDNkN2FiNzhhN2RlYThlYjcxMjE5MjBkMzhiOGYxZGRmMjE0NGM2ZWQ5ZSIsInZlcnNpb24iOjF9.wRfllNw2Gibmi1keU7d_GjkyO0F9HESCgJlJ9PHGZQRRT414nnB-DyRvulHjCNnaNjXqMi0LJimC3iBrNawwAw - type: f1 value: 0.9030161011457231 name: F1 Weighted verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDA0YjAxMWU5MjI4MWEzNTNjMzJlNjM3ZDMxOTE0ZTZhYmZlNmUyNDViNTU2NmMyMmM3MjAxZWVjNWJmZjI4MCIsInZlcnNpb24iOjF9.vJ8aUjfTbFMc1BgNUVpoVDuYwQJYQjwZQxblkUdvSoGtkW_AzQJ_KJ8Njc7IBA3ADgj8iZHjRQNIZkFCf-xICw - type: loss value: 0.3283354640007019 name: loss verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiODdmYzYzNTUzZDNmOWIxM2E0ZmUyOWUzM2Y2NGRmZDNiYjg3ZTMzYTUyNzg3OWEzNzYyN2IyNmExOGRlMWUxYSIsInZlcnNpb24iOjF9.Qv0FzFZPkcBs9aHGf4TEREX4jdkc40NazdMlP2M_-w2wHwyjoAjvhk611RLXHcbicozNelZJLnsOMdEMnPLEDg - task: type: natural-language-inference name: Natural Language Inference dataset: name: anli type: anli config: plain_text split: dev_r1 metrics: - type: accuracy value: 0.737 name: Accuracy verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMTQ1ZGVkOTVmNTlhYjhkMjVlNTNhMjNmZWFjZWZjZjcxZmRhMDVlOWI0YTdkOTMwYjVjNWFlOGY4OTc1MmRhNiIsInZlcnNpb24iOjF9.wGLgKA1E46ljbLokdPeip_UCr1gqK8iSSbsJKX2vgKuuhDdUWWiECrUFN-bv_78JWKoKW5T0GF_hb-RVDzA0AQ - type: precision value: 0.737681071614645 name: Precision Macro verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYmFkMGUwMjNhN2E3NzMxNTc5NDM0MjY1MGU5ODllM2Q2YzA1MDI3OGI1ZmI4YTcxN2E4ZDk5OWY2OGNiN2I0MCIsInZlcnNpb24iOjF9.6G5qhccjheaNfasgRyrkKBTaQPRzuPMZZ0hrLxTNzAydMDgx09FkFP3hni7WLRMWp0IpwzkEeBlxV-mPyQBtBw - type: precision value: 0.737 name: Precision Micro verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiN2QzYjQ4ZDZjOGU5YzI3YmFlMThlYTRkYTUyYWIyNzc4NDkwNzM1OWFiMTgyMzA0NDZmMGI3YTQxODBjM2EwMCIsInZlcnNpb24iOjF9.bvNWyzfct1CLJFx_EuD2GeKieVtyGJy0cwUBP2qJE1ey2i9SVn6n1Dr0AALTGBkxQ6n5-fJ61QFNufpdr2KvCA - type: precision value: 0.7376755842752241 name: Precision Weighted verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiN2VmYWYzZWQwZmMzMDk0NTdlY2Y3NDkzYWY5ZTdmOGU0ZTUzZWE4YWFhZjVmODhkZmE1Njg4NjA5YjJmYWVhOSIsInZlcnNpb24iOjF9.50FQR2aoBpORLgYa7482ZTrRhT-KfIgv5ltBEHndUBMmqGF9Ru0LHENSGwyD_tO89sGPfiW32TxpbrNWiBdIBA - type: recall value: 0.7369675064285843 name: Recall Macro verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZTM4OTAyNDYwNjY4Zjc5NDljNjBmNTg2Mzk4YjYxM2MyYTA0MDllYTMyNzEwOGI1ZTEwYWE3ZmU0NDZmZDg2NiIsInZlcnNpb24iOjF9.UvWBxuApNV3vd4hpgwqd6XPHCbkA_bB_Cw24ooquiOf0dstvjP3JvpGoDp5SniOzIOg3i2aYbcvFCLJqEXMZCQ - type: recall value: 0.737 name: Recall Micro verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYmQ4MjMzNzRmNTI5NjIzNGQ0ZDFmZTA1MDU3OTk0MzYyMGI0NTMzZTZlMTQ1MDc1MzBkMGMzYjcxZjU1NDNjOSIsInZlcnNpb24iOjF9.kpbdXOpDG3CUB-kUEXsgFT3HWWIbu70wwzs2TNf0rhIuRrzdZz3dXXvwqu1BcLJTsOxl8G6NTiYXgnv-ul8lDg - type: recall value: 0.737 name: Recall Weighted verified: true verifyToken: 
eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNmU1ZWJkNWE0NjczY2NiZWYyNzYyMzllNzZmZTIxNWRkYTEyZDgxN2E0NTNmM2ExMTc1ZWVjMzBiYjg0ZmM1MiIsInZlcnNpb24iOjF9.S6HHWCWnut_LJqXbEA_Z8ZOTtyq6V51ZeiA0qbwzr0hapDYZOZHrN4prvSLvoNv-GiYDYKatwIsAZxCZc5fmCA - type: f1 value: 0.7366853496239583 name: F1 Macro verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNzkxYmY2NTcyOTE0ZDdjNGY2ZmE4MzQwMGIxZTA2MDg1NzI5YTQ0MTdkZjdkNzNkMDM2NTk2MTNiNjU4ODMwZCIsInZlcnNpb24iOjF9.ECVaCBqGd0pnQT3xJF7yWrgecIb-5TMiVWpEO0MQGhYy43snkI6Qs-2FOXzvfwIWqG-Q6XIIhGbWZh5TFEGKCA - type: f1 value: 0.737 name: F1 Micro verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDMwMWZiNzQyNWEzNmMzMDJjOTAxYzAxNzc0MTNlYzRkZjllYmNjZmU0OTgzZDFkNWM1ZWI5OTA2NzE5Y2YxOSIsInZlcnNpb24iOjF9.8yZFol_Gcj9n3w9Yk5wx48yql7p3wriDecv-6VSTAB6Q_MWLQAWsCEGRRhgGJ3zvhoRehJZdb35ozk36VOinDQ - type: f1 value: 0.7366990292378379 name: F1 Weighted verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMjhhN2ZkMjc5ZGQ3ZGM1Nzk3ZTgwY2E1N2NjYjdhNjZlOTdhYmRlNGVjN2EwNTIzN2UyYTY2ODVlODhmY2Q4ZCIsInZlcnNpb24iOjF9.Cz7ClDAfCGpqdRTYd5v3dPjXFq8lZLXx8AX_rqmF-Jb8KocqVDsHWeZScW5I2oy951UrdMpiUOLieBuJLOmCCQ - type: loss value: 0.9349392056465149 name: loss verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNmI4MTI5MDM1NjBmMzgzMzc2NjM5MzZhOGUyNTgyY2RlZTEyYTIzYzY2ZGJmODcxY2Q5OTVjOWU3OTQ2MzM1NSIsInZlcnNpb24iOjF9.bSOFnYC4Y2y2pW1AR-bgPUHKafR-0OHf8PvexK8eQLsS323Xy9-rYkKUaP09KY6_fk9GqAawv5eqj72B_uyeCA ---

# DeBERTa-v3-base-mnli-fever-anli

## Model description

This model was trained on the MultiNLI, Fever-NLI and Adversarial-NLI (ANLI) datasets, which comprise 763 913 NLI hypothesis-premise pairs. This base model outperforms almost all large models on the [ANLI benchmark](https://github.com/facebookresearch/anli). The base model is [DeBERTa-v3-base from Microsoft](https://huggingface.co/microsoft/deberta-v3-base). The v3 variant of DeBERTa substantially outperforms previous versions of the model through a different pre-training objective; see annex 11 of the original [DeBERTa paper](https://arxiv.org/pdf/2006.03654.pdf).

For highest performance (but less speed), I recommend using https://huggingface.co/MoritzLaurer/DeBERTa-v3-large-mnli-fever-anli-ling-wanli.

### How to use the model

#### Simple zero-shot classification pipeline

```python
#!pip install transformers[sentencepiece]
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli")
sequence_to_classify = "Angela Merkel is a politician in Germany and leader of the CDU"
candidate_labels = ["politics", "economy", "entertainment", "environment"]
output = classifier(sequence_to_classify, candidate_labels, multi_label=False)
print(output)
```

#### NLI use-case

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")

model_name = "MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)  # keep the model on the same device as the inputs

premise = "I first thought that I liked the movie, but upon second thought it was actually disappointing."
hypothesis = "The movie was good."

inputs = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt").to(device)
with torch.no_grad():  # inference only, no gradients needed
    output = model(**inputs)  # pass input_ids and attention_mask together, not input_ids alone
prediction = torch.softmax(output["logits"][0], -1).tolist()
label_names = ["entailment", "neutral", "contradiction"]
prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
print(prediction)
```
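
For many premise-hypothesis pairs it is usually faster to batch the raw NLI forward pass. The following is a minimal sketch, not part of the original card: it assumes the `tokenizer`, `model` and `device` objects defined in the NLI snippet above, and the sentence pairs are illustrative placeholders.

```python
# Hedged sketch: batched NLI scoring, reusing tokenizer/model/device from above.
premises = [
    "The new regulation was announced by the ministry on Monday.",
    "The restaurant was crowded and the service was slow.",
]
hypotheses = [
    "A government body made an announcement.",
    "The service was fast.",
]

# Tokenize all pairs at once; padding aligns them to a common length.
inputs = tokenizer(premises, hypotheses, padding=True, truncation=True, return_tensors="pt").to(device)
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (num_pairs, 3)

label_names = ["entailment", "neutral", "contradiction"]
for pair_logits in logits:
    probs = torch.softmax(pair_logits, -1).tolist()
    print({name: round(float(p) * 100, 1) for name, p in zip(label_names, probs)})
```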
### Training data

DeBERTa-v3-base-mnli-fever-anli was trained on the MultiNLI, Fever-NLI and Adversarial-NLI (ANLI) datasets, which comprise 763 913 NLI hypothesis-premise pairs.

### Training procedure

DeBERTa-v3-base-mnli-fever-anli was trained using the Hugging Face trainer with the following hyperparameters.

```
training_args = TrainingArguments(
    num_train_epochs=3,              # total number of training epochs
    learning_rate=2e-05,
    per_device_train_batch_size=32,  # batch size per device during training
    per_device_eval_batch_size=32,   # batch size for evaluation
    warmup_ratio=0.1,                # fraction of training steps used for learning rate warmup
    weight_decay=0.06,               # strength of weight decay
    fp16=True                        # mixed precision training
)
```

### Eval results

The model was evaluated using the test sets for MultiNLI and ANLI and the dev set for Fever-NLI. The metric used is accuracy.

mnli-m | mnli-mm | fever-nli | anli-all | anli-r3
---------|----------|---------|----------|----------
0.903 | 0.903 | 0.777 | 0.579 | 0.495
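
These figures can be re-checked locally. Below is a hedged sketch, not from the original card, for re-computing the anli-r3 accuracy: it assumes the Hugging Face `datasets` library, the `tokenizer`, `model` and `device` objects from the NLI snippet above, and ANLI's label convention of 0 = entailment, 1 = neutral, 2 = contradiction (an assumption worth verifying against `anli_r3.features`).

```python
# Hedged sketch: re-computing ANLI round-3 test accuracy (reported as 0.495 above).
from datasets import load_dataset
import torch

anli_r3 = load_dataset("anli", split="test_r3")

model.eval()
correct = 0
batch_size = 32
for start in range(0, len(anli_r3), batch_size):
    batch = anli_r3[start:start + batch_size]  # dict of lists: premise, hypothesis, label, ...
    inputs = tokenizer(batch["premise"], batch["hypothesis"],
                       padding=True, truncation=True, return_tensors="pt").to(device)
    with torch.no_grad():
        # argmax over (entailment, neutral, contradiction) logits; assumed to match ANLI's label ids
        preds = model(**inputs).logits.argmax(dim=-1).cpu()
    correct += (preds == torch.tensor(batch["label"])).sum().item()

print(f"anli test_r3 accuracy: {correct / len(anli_r3):.3f}")
```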
## Limitations and bias

Please consult the original DeBERTa paper and the literature on the different NLI datasets for potential biases.

## Citation

If you use this model, please cite: Laurer, Moritz, Wouter van Atteveldt, Andreu Salleras Casas, and Kasper Welbers. 2022. ‘Less Annotating, More Classifying – Addressing the Data Scarcity Issue of Supervised Machine Learning with Deep Transfer Learning and BERT - NLI’. Preprint, June. Open Science Framework. https://osf.io/74b8k.

### Ideas for cooperation or questions?

If you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or on [LinkedIn](https://www.linkedin.com/in/moritz-laurer/).

### Debugging and issues

Note that DeBERTa-v3 was released on 06.12.21 and older versions of HF Transformers seem to have issues running the model (e.g. tokenizer errors); using Transformers>=4.13 might solve some of them. Also make sure that sentencepiece is installed to avoid tokenizer errors. Run: `pip install transformers[sentencepiece]` or `pip install sentencepiece`

## Model Recycling

[Evaluation on 36 datasets](https://ibm.github.io/model-recycling/model_gain_chart?avg=0.65&mnli_lp=nan&20_newsgroup=-0.61&ag_news=-0.01&amazon_reviews_multi=0.46&anli=0.84&boolq=2.12&cb=16.07&cola=-0.76&copa=8.60&dbpedia=-0.40&esnli=-0.29&financial_phrasebank=-1.98&imdb=-0.47&isear=-0.22&mnli=-0.21&mrpc=0.50&multirc=1.91&poem_sentiment=1.73&qnli=0.07&qqp=-0.37&rotten_tomatoes=-0.74&rte=3.94&sst2=-0.45&sst_5bins=0.07&stsb=1.27&trec_coarse=-0.16&trec_fine=0.18&tweet_ev_emoji=-0.93&tweet_ev_emotion=-1.33&tweet_ev_hate=-1.67&tweet_ev_irony=-5.46&tweet_ev_offensive=-0.17&tweet_ev_sentiment=-0.11&wic=-0.21&wnli=-1.20&wsc=4.18&yahoo_answers=-0.70&model_name=MoritzLaurer%2FDeBERTa-v3-base-mnli-fever-anli&base_name=microsoft%2Fdeberta-v3-base) using MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli as a base model yields an average score of 79.69, compared to 79.04 for microsoft/deberta-v3-base. As of 09/01/2023, the model is ranked 2nd among all tested models for the microsoft/deberta-v3-base architecture.

Results:

| 20_newsgroup | ag_news | amazon_reviews_multi | anli | boolq | cb | cola | copa | dbpedia | esnli | financial_phrasebank | imdb | isear | mnli | mrpc | multirc | poem_sentiment | qnli | qqp | rotten_tomatoes | rte | sst2 | sst_5bins | stsb | trec_coarse | trec_fine | tweet_ev_emoji | tweet_ev_emotion | tweet_ev_hate | tweet_ev_irony | tweet_ev_offensive | tweet_ev_sentiment | wic | wnli | wsc | yahoo_answers |
|---------------:|----------:|-----------------------:|-------:|--------:|--------:|--------:|-------:|----------:|--------:|-----------------------:|-------:|--------:|--------:|--------:|----------:|-----------------:|-------:|--------:|------------------:|--------:|--------:|------------:|--------:|--------------:|------------:|-----------------:|-------------------:|----------------:|-----------------:|---------------------:|---------------------:|--------:|--------:|--------:|----------------:|
| 85.8072 | 90.4333 | 67.32 | 59.625 | 85.107 | 91.0714 | 85.8102 | 67 | 79.0333 | 91.6327 | 82.5 | 94.02 | 71.6428 | 89.5749 | 89.7059 | 64.1708 | 88.4615 | 93.575 | 91.4148 | 89.6811 | 86.2816 | 94.6101 | 57.0588 | 91.5508 | 97.6 | 91.2 | 45.264 | 82.6179 | 54.5455 | 74.3622 | 84.8837 | 71.6949 | 71.0031 | 69.0141 | 68.2692 | 71.3333 |

For more information, see: [Model Recycling](https://ibm.github.io/model-recycling/)
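
The model-recycling numbers suggest this checkpoint is a strong starting point for further fine-tuning on downstream classification tasks. The snippet below is a minimal, hedged sketch of that pattern, not the evaluation setup used by the model-recycling project: the `rotten_tomatoes` dataset, `num_labels=2` and the preprocessing are illustrative placeholders, and `ignore_mismatched_sizes=True` swaps the 3-way NLI head for a freshly initialized one.

```python
# Hedged sketch: fine-tuning this checkpoint on a placeholder downstream task.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# A fresh 2-way classification head replaces the 3-way NLI head.
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=2, ignore_mismatched_sizes=True)

dataset = load_dataset("rotten_tomatoes")  # placeholder; one of the 36 evaluated tasks
dataset = dataset.map(lambda batch: tokenizer(batch["text"], truncation=True), batched=True)

training_args = TrainingArguments(
    output_dir="./deberta-v3-base-recycled",  # placeholder output path
    num_train_epochs=3,             # hyperparameters mirror the training procedure above
    learning_rate=2e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    warmup_ratio=0.1,
    weight_decay=0.06,
    fp16=True,                      # requires a GPU; drop on CPU
)
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
    tokenizer=tokenizer,  # enables dynamic padding via the default data collator
)
trainer.train()
```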
intfloat/multilingual-e5-small
intfloat
"2024-07-29T02:00:50Z"
831,658
149
sentence-transformers
[ "sentence-transformers", "pytorch", "onnx", "safetensors", "bert", "mteb", "Sentence Transformers", "sentence-similarity", "multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "hu", "hy", "id", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", "no", "om", "or", "pa", "pl", "ps", "pt", "ro", "ru", "sa", "sd", "si", "sk", "sl", "so", "sq", "sr", "su", "sv", "sw", "ta", "te", "th", "tl", "tr", "ug", "uk", "ur", "uz", "vi", "xh", "yi", "zh", "arxiv:2402.05672", "arxiv:2108.08787", "arxiv:2104.08663", "arxiv:2210.07316", "license:mit", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
"2023-06-30T07:31:03Z"
--- language: - multilingual - af - am - ar - as - az - be - bg - bn - br - bs - ca - cs - cy - da - de - el - en - eo - es - et - eu - fa - fi - fr - fy - ga - gd - gl - gu - ha - he - hi - hr - hu - hy - id - is - it - ja - jv - ka - kk - km - kn - ko - ku - ky - la - lo - lt - lv - mg - mk - ml - mn - mr - ms - my - ne - nl - 'no' - om - or - pa - pl - ps - pt - ro - ru - sa - sd - si - sk - sl - so - sq - sr - su - sv - sw - ta - te - th - tl - tr - ug - uk - ur - uz - vi - xh - yi - zh license: mit model-index: - name: intfloat/multilingual-e5-small results: - dataset: config: en name: MTEB AmazonCounterfactualClassification (en) revision: e8379541af4e31359cca9fbcf4b00f2671dba205 split: test type: mteb/amazon_counterfactual metrics: - type: accuracy value: 73.79104477611939 - type: ap value: 36.9996434842022 - type: f1 value: 67.95453679103099 task: type: Classification - dataset: config: de name: MTEB AmazonCounterfactualClassification (de) revision: e8379541af4e31359cca9fbcf4b00f2671dba205 split: test type: mteb/amazon_counterfactual metrics: - type: accuracy value: 71.64882226980728 - type: ap value: 82.11942130026586 - type: f1 value: 69.87963421606715 task: type: Classification - dataset: config: en-ext name: MTEB AmazonCounterfactualClassification (en-ext) revision: e8379541af4e31359cca9fbcf4b00f2671dba205 split: test type: mteb/amazon_counterfactual metrics: - type: accuracy value: 75.8095952023988 - type: ap value: 24.46869495579561 - type: f1 value: 63.00108480037597 task: type: Classification - dataset: config: ja name: MTEB AmazonCounterfactualClassification (ja) revision: e8379541af4e31359cca9fbcf4b00f2671dba205 split: test type: mteb/amazon_counterfactual metrics: - type: accuracy value: 64.186295503212 - type: ap value: 15.496804690197042 - type: f1 value: 52.07153895475031 task: type: Classification - dataset: config: default name: MTEB AmazonPolarityClassification revision: e2d317d38cd51312af73b3d32a06d1a08b442046 split: test type: mteb/amazon_polarity metrics: - type: accuracy value: 88.699325 - type: ap value: 85.27039559917269 - type: f1 value: 88.65556295032513 task: type: Classification - dataset: config: en name: MTEB AmazonReviewsClassification (en) revision: 1399c76144fd37290681b995c656ef9b2e06e26d split: test type: mteb/amazon_reviews_multi metrics: - type: accuracy value: 44.69799999999999 - type: f1 value: 43.73187348654165 task: type: Classification - dataset: config: de name: MTEB AmazonReviewsClassification (de) revision: 1399c76144fd37290681b995c656ef9b2e06e26d split: test type: mteb/amazon_reviews_multi metrics: - type: accuracy value: 40.245999999999995 - type: f1 value: 39.3863530637684 task: type: Classification - dataset: config: es name: MTEB AmazonReviewsClassification (es) revision: 1399c76144fd37290681b995c656ef9b2e06e26d split: test type: mteb/amazon_reviews_multi metrics: - type: accuracy value: 40.394 - type: f1 value: 39.301223469483446 task: type: Classification - dataset: config: fr name: MTEB AmazonReviewsClassification (fr) revision: 1399c76144fd37290681b995c656ef9b2e06e26d split: test type: mteb/amazon_reviews_multi metrics: - type: accuracy value: 38.864 - type: f1 value: 37.97974261868003 task: type: Classification - dataset: config: ja name: MTEB AmazonReviewsClassification (ja) revision: 1399c76144fd37290681b995c656ef9b2e06e26d split: test type: mteb/amazon_reviews_multi metrics: - type: accuracy value: 37.682 - type: f1 value: 37.07399369768313 task: type: Classification - dataset: config: zh name: MTEB AmazonReviewsClassification 
(zh) revision: 1399c76144fd37290681b995c656ef9b2e06e26d split: test type: mteb/amazon_reviews_multi metrics: - type: accuracy value: 37.504 - type: f1 value: 36.62317273874278 task: type: Classification - dataset: config: default name: MTEB ArguAna revision: None split: test type: arguana metrics: - type: map_at_1 value: 19.061 - type: map_at_10 value: 31.703 - type: map_at_100 value: 32.967 - type: map_at_1000 value: 33.001000000000005 - type: map_at_3 value: 27.466 - type: map_at_5 value: 29.564 - type: mrr_at_1 value: 19.559 - type: mrr_at_10 value: 31.874999999999996 - type: mrr_at_100 value: 33.146 - type: mrr_at_1000 value: 33.18 - type: mrr_at_3 value: 27.667 - type: mrr_at_5 value: 29.74 - type: ndcg_at_1 value: 19.061 - type: ndcg_at_10 value: 39.062999999999995 - type: ndcg_at_100 value: 45.184000000000005 - type: ndcg_at_1000 value: 46.115 - type: ndcg_at_3 value: 30.203000000000003 - type: ndcg_at_5 value: 33.953 - type: precision_at_1 value: 19.061 - type: precision_at_10 value: 6.279999999999999 - type: precision_at_100 value: 0.9129999999999999 - type: precision_at_1000 value: 0.099 - type: precision_at_3 value: 12.706999999999999 - type: precision_at_5 value: 9.431000000000001 - type: recall_at_1 value: 19.061 - type: recall_at_10 value: 62.802 - type: recall_at_100 value: 91.323 - type: recall_at_1000 value: 98.72 - type: recall_at_3 value: 38.122 - type: recall_at_5 value: 47.155 task: type: Retrieval - dataset: config: default name: MTEB ArxivClusteringP2P revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d split: test type: mteb/arxiv-clustering-p2p metrics: - type: v_measure value: 39.22266660528253 task: type: Clustering - dataset: config: default name: MTEB ArxivClusteringS2S revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 split: test type: mteb/arxiv-clustering-s2s metrics: - type: v_measure value: 30.79980849482483 task: type: Clustering - dataset: config: default name: MTEB AskUbuntuDupQuestions revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 split: test type: mteb/askubuntudupquestions-reranking metrics: - type: map value: 57.8790068352054 - type: mrr value: 71.78791276436706 task: type: Reranking - dataset: config: default name: MTEB BIOSSES revision: d3fb88f8f02e40887cd149695127462bbcf29b4a split: test type: mteb/biosses-sts metrics: - type: cos_sim_pearson value: 82.36328364043163 - type: cos_sim_spearman value: 82.26211536195868 - type: euclidean_pearson value: 80.3183865039173 - type: euclidean_spearman value: 79.88495276296132 - type: manhattan_pearson value: 80.14484480692127 - type: manhattan_spearman value: 80.39279565980743 task: type: STS - dataset: config: de-en name: MTEB BUCC (de-en) revision: d51519689f32196a32af33b075a01d0e7c51e252 split: test type: mteb/bucc-bitext-mining metrics: - type: accuracy value: 98.0375782881002 - type: f1 value: 97.86012526096033 - type: precision value: 97.77139874739039 - type: recall value: 98.0375782881002 task: type: BitextMining - dataset: config: fr-en name: MTEB BUCC (fr-en) revision: d51519689f32196a32af33b075a01d0e7c51e252 split: test type: mteb/bucc-bitext-mining metrics: - type: accuracy value: 93.35241030156286 - type: f1 value: 92.66050333846944 - type: precision value: 92.3306919069631 - type: recall value: 93.35241030156286 task: type: BitextMining - dataset: config: ru-en name: MTEB BUCC (ru-en) revision: d51519689f32196a32af33b075a01d0e7c51e252 split: test type: mteb/bucc-bitext-mining metrics: - type: accuracy value: 94.0699688257707 - type: f1 value: 93.50236693222492 - type: precision value: 
93.22791825424315 - type: recall value: 94.0699688257707 task: type: BitextMining - dataset: config: zh-en name: MTEB BUCC (zh-en) revision: d51519689f32196a32af33b075a01d0e7c51e252 split: test type: mteb/bucc-bitext-mining metrics: - type: accuracy value: 89.25750394944708 - type: f1 value: 88.79234684921889 - type: precision value: 88.57293312269616 - type: recall value: 89.25750394944708 task: type: BitextMining - dataset: config: default name: MTEB Banking77Classification revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 split: test type: mteb/banking77 metrics: - type: accuracy value: 79.41558441558442 - type: f1 value: 79.25886487487219 task: type: Classification - dataset: config: default name: MTEB BiorxivClusteringP2P revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 split: test type: mteb/biorxiv-clustering-p2p metrics: - type: v_measure value: 35.747820820329736 task: type: Clustering - dataset: config: default name: MTEB BiorxivClusteringS2S revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 split: test type: mteb/biorxiv-clustering-s2s metrics: - type: v_measure value: 27.045143830596146 task: type: Clustering - dataset: config: default name: MTEB CQADupstackRetrieval revision: None split: test type: BeIR/cqadupstack metrics: - type: map_at_1 value: 24.252999999999997 - type: map_at_10 value: 31.655916666666666 - type: map_at_100 value: 32.680749999999996 - type: map_at_1000 value: 32.79483333333334 - type: map_at_3 value: 29.43691666666666 - type: map_at_5 value: 30.717416666666665 - type: mrr_at_1 value: 28.602750000000004 - type: mrr_at_10 value: 35.56875 - type: mrr_at_100 value: 36.3595 - type: mrr_at_1000 value: 36.427749999999996 - type: mrr_at_3 value: 33.586166666666664 - type: mrr_at_5 value: 34.73641666666666 - type: ndcg_at_1 value: 28.602750000000004 - type: ndcg_at_10 value: 36.06933333333334 - type: ndcg_at_100 value: 40.70141666666667 - type: ndcg_at_1000 value: 43.24341666666667 - type: ndcg_at_3 value: 32.307916666666664 - type: ndcg_at_5 value: 34.129999999999995 - type: precision_at_1 value: 28.602750000000004 - type: precision_at_10 value: 6.097666666666667 - type: precision_at_100 value: 0.9809166666666668 - type: precision_at_1000 value: 0.13766666666666663 - type: precision_at_3 value: 14.628166666666667 - type: precision_at_5 value: 10.266916666666667 - type: recall_at_1 value: 24.252999999999997 - type: recall_at_10 value: 45.31916666666667 - type: recall_at_100 value: 66.03575000000001 - type: recall_at_1000 value: 83.94708333333334 - type: recall_at_3 value: 34.71941666666666 - type: recall_at_5 value: 39.46358333333333 task: type: Retrieval - dataset: config: default name: MTEB ClimateFEVER revision: None split: test type: climate-fever metrics: - type: map_at_1 value: 9.024000000000001 - type: map_at_10 value: 15.644 - type: map_at_100 value: 17.154 - type: map_at_1000 value: 17.345 - type: map_at_3 value: 13.028 - type: map_at_5 value: 14.251 - type: mrr_at_1 value: 19.674 - type: mrr_at_10 value: 29.826999999999998 - type: mrr_at_100 value: 30.935000000000002 - type: mrr_at_1000 value: 30.987 - type: mrr_at_3 value: 26.645000000000003 - type: mrr_at_5 value: 28.29 - type: ndcg_at_1 value: 19.674 - type: ndcg_at_10 value: 22.545 - type: ndcg_at_100 value: 29.207 - type: ndcg_at_1000 value: 32.912 - type: ndcg_at_3 value: 17.952 - type: ndcg_at_5 value: 19.363 - type: precision_at_1 value: 19.674 - type: precision_at_10 value: 7.212000000000001 - type: precision_at_100 value: 1.435 - type: precision_at_1000 value: 0.212 - type: precision_at_3 
value: 13.507 - type: precision_at_5 value: 10.397 - type: recall_at_1 value: 9.024000000000001 - type: recall_at_10 value: 28.077999999999996 - type: recall_at_100 value: 51.403 - type: recall_at_1000 value: 72.406 - type: recall_at_3 value: 16.768 - type: recall_at_5 value: 20.737 task: type: Retrieval - dataset: config: default name: MTEB DBPedia revision: None split: test type: dbpedia-entity metrics: - type: map_at_1 value: 8.012 - type: map_at_10 value: 17.138 - type: map_at_100 value: 24.146 - type: map_at_1000 value: 25.622 - type: map_at_3 value: 12.552 - type: map_at_5 value: 14.435 - type: mrr_at_1 value: 62.25000000000001 - type: mrr_at_10 value: 71.186 - type: mrr_at_100 value: 71.504 - type: mrr_at_1000 value: 71.514 - type: mrr_at_3 value: 69.333 - type: mrr_at_5 value: 70.408 - type: ndcg_at_1 value: 49.75 - type: ndcg_at_10 value: 37.76 - type: ndcg_at_100 value: 42.071 - type: ndcg_at_1000 value: 49.309 - type: ndcg_at_3 value: 41.644 - type: ndcg_at_5 value: 39.812999999999995 - type: precision_at_1 value: 62.25000000000001 - type: precision_at_10 value: 30.15 - type: precision_at_100 value: 9.753 - type: precision_at_1000 value: 1.9189999999999998 - type: precision_at_3 value: 45.667 - type: precision_at_5 value: 39.15 - type: recall_at_1 value: 8.012 - type: recall_at_10 value: 22.599 - type: recall_at_100 value: 48.068 - type: recall_at_1000 value: 71.328 - type: recall_at_3 value: 14.043 - type: recall_at_5 value: 17.124 task: type: Retrieval - dataset: config: default name: MTEB EmotionClassification revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 split: test type: mteb/emotion metrics: - type: accuracy value: 42.455 - type: f1 value: 37.59462649781862 task: type: Classification - dataset: config: default name: MTEB FEVER revision: None split: test type: fever metrics: - type: map_at_1 value: 58.092 - type: map_at_10 value: 69.586 - type: map_at_100 value: 69.968 - type: map_at_1000 value: 69.982 - type: map_at_3 value: 67.48100000000001 - type: map_at_5 value: 68.915 - type: mrr_at_1 value: 62.166 - type: mrr_at_10 value: 73.588 - type: mrr_at_100 value: 73.86399999999999 - type: mrr_at_1000 value: 73.868 - type: mrr_at_3 value: 71.6 - type: mrr_at_5 value: 72.99 - type: ndcg_at_1 value: 62.166 - type: ndcg_at_10 value: 75.27199999999999 - type: ndcg_at_100 value: 76.816 - type: ndcg_at_1000 value: 77.09700000000001 - type: ndcg_at_3 value: 71.36 - type: ndcg_at_5 value: 73.785 - type: precision_at_1 value: 62.166 - type: precision_at_10 value: 9.716 - type: precision_at_100 value: 1.065 - type: precision_at_1000 value: 0.11 - type: precision_at_3 value: 28.278 - type: precision_at_5 value: 18.343999999999998 - type: recall_at_1 value: 58.092 - type: recall_at_10 value: 88.73400000000001 - type: recall_at_100 value: 95.195 - type: recall_at_1000 value: 97.04599999999999 - type: recall_at_3 value: 78.45 - type: recall_at_5 value: 84.316 task: type: Retrieval - dataset: config: default name: MTEB FiQA2018 revision: None split: test type: fiqa metrics: - type: map_at_1 value: 16.649 - type: map_at_10 value: 26.457000000000004 - type: map_at_100 value: 28.169 - type: map_at_1000 value: 28.352 - type: map_at_3 value: 23.305 - type: map_at_5 value: 25.169000000000004 - type: mrr_at_1 value: 32.407000000000004 - type: mrr_at_10 value: 40.922 - type: mrr_at_100 value: 41.931000000000004 - type: mrr_at_1000 value: 41.983 - type: mrr_at_3 value: 38.786 - type: mrr_at_5 value: 40.205999999999996 - type: ndcg_at_1 value: 32.407000000000004 - type: ndcg_at_10 value: 33.314 - 
type: ndcg_at_100 value: 40.312 - type: ndcg_at_1000 value: 43.685 - type: ndcg_at_3 value: 30.391000000000002 - type: ndcg_at_5 value: 31.525 - type: precision_at_1 value: 32.407000000000004 - type: precision_at_10 value: 8.966000000000001 - type: precision_at_100 value: 1.6019999999999999 - type: precision_at_1000 value: 0.22200000000000003 - type: precision_at_3 value: 20.165 - type: precision_at_5 value: 14.722 - type: recall_at_1 value: 16.649 - type: recall_at_10 value: 39.117000000000004 - type: recall_at_100 value: 65.726 - type: recall_at_1000 value: 85.784 - type: recall_at_3 value: 27.914 - type: recall_at_5 value: 33.289 task: type: Retrieval - dataset: config: default name: MTEB HotpotQA revision: None split: test type: hotpotqa metrics: - type: map_at_1 value: 36.253 - type: map_at_10 value: 56.16799999999999 - type: map_at_100 value: 57.06099999999999 - type: map_at_1000 value: 57.126 - type: map_at_3 value: 52.644999999999996 - type: map_at_5 value: 54.909 - type: mrr_at_1 value: 72.505 - type: mrr_at_10 value: 79.66 - type: mrr_at_100 value: 79.869 - type: mrr_at_1000 value: 79.88 - type: mrr_at_3 value: 78.411 - type: mrr_at_5 value: 79.19800000000001 - type: ndcg_at_1 value: 72.505 - type: ndcg_at_10 value: 65.094 - type: ndcg_at_100 value: 68.219 - type: ndcg_at_1000 value: 69.515 - type: ndcg_at_3 value: 59.99 - type: ndcg_at_5 value: 62.909000000000006 - type: precision_at_1 value: 72.505 - type: precision_at_10 value: 13.749 - type: precision_at_100 value: 1.619 - type: precision_at_1000 value: 0.179 - type: precision_at_3 value: 38.357 - type: precision_at_5 value: 25.313000000000002 - type: recall_at_1 value: 36.253 - type: recall_at_10 value: 68.744 - type: recall_at_100 value: 80.925 - type: recall_at_1000 value: 89.534 - type: recall_at_3 value: 57.535000000000004 - type: recall_at_5 value: 63.282000000000004 task: type: Retrieval - dataset: config: default name: MTEB ImdbClassification revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 split: test type: mteb/imdb metrics: - type: accuracy value: 80.82239999999999 - type: ap value: 75.65895781725314 - type: f1 value: 80.75880969095746 task: type: Classification - dataset: config: default name: MTEB MSMARCO revision: None split: dev type: msmarco metrics: - type: map_at_1 value: 21.624 - type: map_at_10 value: 34.075 - type: map_at_100 value: 35.229 - type: map_at_1000 value: 35.276999999999994 - type: map_at_3 value: 30.245 - type: map_at_5 value: 32.42 - type: mrr_at_1 value: 22.264 - type: mrr_at_10 value: 34.638000000000005 - type: mrr_at_100 value: 35.744 - type: mrr_at_1000 value: 35.787 - type: mrr_at_3 value: 30.891000000000002 - type: mrr_at_5 value: 33.042 - type: ndcg_at_1 value: 22.264 - type: ndcg_at_10 value: 40.991 - type: ndcg_at_100 value: 46.563 - type: ndcg_at_1000 value: 47.743 - type: ndcg_at_3 value: 33.198 - type: ndcg_at_5 value: 37.069 - type: precision_at_1 value: 22.264 - type: precision_at_10 value: 6.5089999999999995 - type: precision_at_100 value: 0.9299999999999999 - type: precision_at_1000 value: 0.10300000000000001 - type: precision_at_3 value: 14.216999999999999 - type: precision_at_5 value: 10.487 - type: recall_at_1 value: 21.624 - type: recall_at_10 value: 62.303 - type: recall_at_100 value: 88.124 - type: recall_at_1000 value: 97.08 - type: recall_at_3 value: 41.099999999999994 - type: recall_at_5 value: 50.381 task: type: Retrieval - dataset: config: en name: MTEB MTOPDomainClassification (en) revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf split: test type: 
mteb/mtop_domain metrics: - type: accuracy value: 91.06703146374831 - type: f1 value: 90.86867815863172 task: type: Classification - dataset: config: de name: MTEB MTOPDomainClassification (de) revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf split: test type: mteb/mtop_domain metrics: - type: accuracy value: 87.46970977740209 - type: f1 value: 86.36832872036588 task: type: Classification - dataset: config: es name: MTEB MTOPDomainClassification (es) revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf split: test type: mteb/mtop_domain metrics: - type: accuracy value: 89.26951300867245 - type: f1 value: 88.93561193959502 task: type: Classification - dataset: config: fr name: MTEB MTOPDomainClassification (fr) revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf split: test type: mteb/mtop_domain metrics: - type: accuracy value: 84.22799874725963 - type: f1 value: 84.30490069236556 task: type: Classification - dataset: config: hi name: MTEB MTOPDomainClassification (hi) revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf split: test type: mteb/mtop_domain metrics: - type: accuracy value: 86.02007888131948 - type: f1 value: 85.39376041027991 task: type: Classification - dataset: config: th name: MTEB MTOPDomainClassification (th) revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf split: test type: mteb/mtop_domain metrics: - type: accuracy value: 85.34900542495481 - type: f1 value: 85.39859673336713 task: type: Classification - dataset: config: en name: MTEB MTOPIntentClassification (en) revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba split: test type: mteb/mtop_intent metrics: - type: accuracy value: 71.078431372549 - type: f1 value: 53.45071102002276 task: type: Classification - dataset: config: de name: MTEB MTOPIntentClassification (de) revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba split: test type: mteb/mtop_intent metrics: - type: accuracy value: 65.85798816568047 - type: f1 value: 46.53112748993529 task: type: Classification - dataset: config: es name: MTEB MTOPIntentClassification (es) revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba split: test type: mteb/mtop_intent metrics: - type: accuracy value: 67.96864576384256 - type: f1 value: 45.966703022829506 task: type: Classification - dataset: config: fr name: MTEB MTOPIntentClassification (fr) revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba split: test type: mteb/mtop_intent metrics: - type: accuracy value: 61.31537738803633 - type: f1 value: 45.52601712835461 task: type: Classification - dataset: config: hi name: MTEB MTOPIntentClassification (hi) revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba split: test type: mteb/mtop_intent metrics: - type: accuracy value: 66.29616349946218 - type: f1 value: 47.24166485726613 task: type: Classification - dataset: config: th name: MTEB MTOPIntentClassification (th) revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba split: test type: mteb/mtop_intent metrics: - type: accuracy value: 67.51537070524412 - type: f1 value: 49.463476319014276 task: type: Classification - dataset: config: af name: MTEB MassiveIntentClassification (af) revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 57.06792199058508 - type: f1 value: 54.094921857502285 task: type: Classification - dataset: config: am name: MTEB MassiveIntentClassification (am) revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 51.960322797579025 - type: f1 value: 48.547371223370945 
task: type: Classification - dataset: config: ar name: MTEB MassiveIntentClassification (ar) revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 54.425016812373904 - type: f1 value: 50.47069202054312 task: type: Classification - dataset: config: az name: MTEB MassiveIntentClassification (az) revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 59.798251513113655 - type: f1 value: 57.05013069086648 task: type: Classification - dataset: config: bn name: MTEB MassiveIntentClassification (bn) revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 59.37794216543376 - type: f1 value: 56.3607992649805 task: type: Classification - dataset: config: cy name: MTEB MassiveIntentClassification (cy) revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 46.56018829858777 - type: f1 value: 43.87319715715134 task: type: Classification - dataset: config: da name: MTEB MassiveIntentClassification (da) revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 62.9724277067922 - type: f1 value: 59.36480066245562 task: type: Classification - dataset: config: de name: MTEB MassiveIntentClassification (de) revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 62.72696704774715 - type: f1 value: 59.143595966615855 task: type: Classification - dataset: config: el name: MTEB MassiveIntentClassification (el) revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 61.5971755211836 - type: f1 value: 59.169445724946726 task: type: Classification - dataset: config: en name: MTEB MassiveIntentClassification (en) revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 70.29589778076665 - type: f1 value: 67.7577001808977 task: type: Classification - dataset: config: es name: MTEB MassiveIntentClassification (es) revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 66.31136516476126 - type: f1 value: 64.52032955983242 task: type: Classification - dataset: config: fa name: MTEB MassiveIntentClassification (fa) revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 65.54472091459314 - type: f1 value: 61.47903120066317 task: type: Classification - dataset: config: fi name: MTEB MassiveIntentClassification (fi) revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 61.45595158036314 - type: f1 value: 58.0891846024637 task: type: Classification - dataset: config: fr name: MTEB MassiveIntentClassification (fr) revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 65.47074646940149 - type: f1 value: 62.84830858877575 task: type: Classification - dataset: config: he name: MTEB MassiveIntentClassification (he) revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 58.046402151983855 - 
type: f1 value: 55.269074430533195 task: type: Classification - dataset: config: hi name: MTEB MassiveIntentClassification (hi) revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 64.06523201075991 - type: f1 value: 61.35339643021369 task: type: Classification - dataset: config: hu name: MTEB MassiveIntentClassification (hu) revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 60.954942837928726 - type: f1 value: 57.07035922704846 task: type: Classification - dataset: config: hy name: MTEB MassiveIntentClassification (hy) revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 57.404169468728995 - type: f1 value: 53.94259011839138 task: type: Classification - dataset: config: id name: MTEB MassiveIntentClassification (id) revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 64.16610625420309 - type: f1 value: 61.337103431499365 task: type: Classification - dataset: config: is name: MTEB MassiveIntentClassification (is) revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 52.262945527908535 - type: f1 value: 49.7610691598921 task: type: Classification - dataset: config: it name: MTEB MassiveIntentClassification (it) revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 65.54472091459314 - type: f1 value: 63.469099018440154 task: type: Classification - dataset: config: ja name: MTEB MassiveIntentClassification (ja) revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 68.22797579018157 - type: f1 value: 64.89098471083001 task: type: Classification - dataset: config: jv name: MTEB MassiveIntentClassification (jv) revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 50.847343644922674 - type: f1 value: 47.8536963168393 task: type: Classification - dataset: config: ka name: MTEB MassiveIntentClassification (ka) revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 48.45326160053799 - type: f1 value: 46.370078045805556 task: type: Classification - dataset: config: km name: MTEB MassiveIntentClassification (km) revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 42.83120376597175 - type: f1 value: 39.68948521599982 task: type: Classification - dataset: config: kn name: MTEB MassiveIntentClassification (kn) revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 57.5084061869536 - type: f1 value: 53.961876160401545 task: type: Classification - dataset: config: ko name: MTEB MassiveIntentClassification (ko) revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 63.7895090786819 - type: f1 value: 61.134223684676 task: type: Classification - dataset: config: lv name: MTEB MassiveIntentClassification (lv) revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 split: test type: mteb/amazon_massive_intent metrics: - type: 
accuracy value: 54.98991257565569 - type: f1 value: 52.579862862826296 task: type: Classification - dataset: config: ml name: MTEB MassiveIntentClassification (ml) revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 61.90316072629456 - type: f1 value: 58.203024538290336 task: type: Classification - dataset: config: mn name: MTEB MassiveIntentClassification (mn) revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 57.09818426361802 - type: f1 value: 54.22718458445455 task: type: Classification - dataset: config: ms name: MTEB MassiveIntentClassification (ms) revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 58.991257565568255 - type: f1 value: 55.84892781767421 task: type: Classification - dataset: config: my name: MTEB MassiveIntentClassification (my) revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 55.901143241425686 - type: f1 value: 52.25264332199797 task: type: Classification - dataset: config: nb name: MTEB MassiveIntentClassification (nb) revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 61.96368527236047 - type: f1 value: 58.927243876153454 task: type: Classification - dataset: config: nl name: MTEB MassiveIntentClassification (nl) revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 65.64223268325489 - type: f1 value: 62.340453718379706 task: type: Classification - dataset: config: pl name: MTEB MassiveIntentClassification (pl) revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 64.52589105581708 - type: f1 value: 61.661113187022174 task: type: Classification - dataset: config: pt name: MTEB MassiveIntentClassification (pt) revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 66.84599865501009 - type: f1 value: 64.59342572873005 task: type: Classification - dataset: config: ro name: MTEB MassiveIntentClassification (ro) revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 60.81035642232684 - type: f1 value: 57.5169089806797 task: type: Classification - dataset: config: ru name: MTEB MassiveIntentClassification (ru) revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 58.652238071815056 - type: f1 value: 53.22732406426353 - type: f1_weighted value: 57.585586737209546 - type: main_score value: 58.652238071815056 task: type: Classification - dataset: config: sl name: MTEB MassiveIntentClassification (sl) revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 56.51647612642906 - type: f1 value: 54.33154780100043 task: type: Classification - dataset: config: sq name: MTEB MassiveIntentClassification (sq) revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 57.985877605917956 - type: f1 value: 54.46187524463802 task: type: Classification - dataset: config: sv name: MTEB 
MassiveIntentClassification (sv) revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 65.03026227303296 - type: f1 value: 62.34377392877748 task: type: Classification - dataset: config: sw name: MTEB MassiveIntentClassification (sw) revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 53.567585743106925 - type: f1 value: 50.73770655983206 task: type: Classification - dataset: config: ta name: MTEB MassiveIntentClassification (ta) revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 57.2595830531271 - type: f1 value: 53.657327291708626 task: type: Classification - dataset: config: te name: MTEB MassiveIntentClassification (te) revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 57.82784129119032 - type: f1 value: 54.82518072665301 task: type: Classification - dataset: config: th name: MTEB MassiveIntentClassification (th) revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 64.06859448554137 - type: f1 value: 63.00185280500495 task: type: Classification - dataset: config: tl name: MTEB MassiveIntentClassification (tl) revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 58.91055817081371 - type: f1 value: 55.54116301224262 task: type: Classification - dataset: config: tr name: MTEB MassiveIntentClassification (tr) revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 63.54404841963686 - type: f1 value: 59.57650946030184 task: type: Classification - dataset: config: ur name: MTEB MassiveIntentClassification (ur) revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 59.27706792199059 - type: f1 value: 56.50010066083435 task: type: Classification - dataset: config: vi name: MTEB MassiveIntentClassification (vi) revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 64.0719569603228 - type: f1 value: 61.817075925647956 task: type: Classification - dataset: config: zh-CN name: MTEB MassiveIntentClassification (zh-CN) revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 68.23806321452591 - type: f1 value: 65.24917026029749 task: type: Classification - dataset: config: zh-TW name: MTEB MassiveIntentClassification (zh-TW) revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 62.53530598520511 - type: f1 value: 61.71131132295768 task: type: Classification - dataset: config: af name: MTEB MassiveScenarioClassification (af) revision: 7d571f92784cd94a019292a1f45445077d0ef634 split: test type: mteb/amazon_massive_scenario metrics: - type: accuracy value: 63.04303967720243 - type: f1 value: 60.3950085685985 task: type: Classification - dataset: config: am name: MTEB MassiveScenarioClassification (am) revision: 7d571f92784cd94a019292a1f45445077d0ef634 split: test type: mteb/amazon_massive_scenario metrics: - type: accuracy value: 56.83591123066578 - type: f1 value: 54.95059828830849 task: 
type: Classification - dataset: config: ar name: MTEB MassiveScenarioClassification (ar) revision: 7d571f92784cd94a019292a1f45445077d0ef634 split: test type: mteb/amazon_massive_scenario metrics: - type: accuracy value: 59.62340282447881 - type: f1 value: 59.525159996498225 task: type: Classification - dataset: config: az name: MTEB MassiveScenarioClassification (az) revision: 7d571f92784cd94a019292a1f45445077d0ef634 split: test type: mteb/amazon_massive_scenario metrics: - type: accuracy value: 60.85406859448555 - type: f1 value: 59.129299095681276 task: type: Classification - dataset: config: bn name: MTEB MassiveScenarioClassification (bn) revision: 7d571f92784cd94a019292a1f45445077d0ef634 split: test type: mteb/amazon_massive_scenario metrics: - type: accuracy value: 62.76731674512441 - type: f1 value: 61.159560612627715 task: type: Classification - dataset: config: cy name: MTEB MassiveScenarioClassification (cy) revision: 7d571f92784cd94a019292a1f45445077d0ef634 split: test type: mteb/amazon_massive_scenario metrics: - type: accuracy value: 50.181573638197705 - type: f1 value: 46.98422176289957 task: type: Classification - dataset: config: da name: MTEB MassiveScenarioClassification (da) revision: 7d571f92784cd94a019292a1f45445077d0ef634 split: test type: mteb/amazon_massive_scenario metrics: - type: accuracy value: 68.92737054472092 - type: f1 value: 67.69135611952979 task: type: Classification - dataset: config: de name: MTEB MassiveScenarioClassification (de) revision: 7d571f92784cd94a019292a1f45445077d0ef634 split: test type: mteb/amazon_massive_scenario metrics: - type: accuracy value: 69.18964357767318 - type: f1 value: 68.46106138186214 task: type: Classification - dataset: config: el name: MTEB MassiveScenarioClassification (el) revision: 7d571f92784cd94a019292a1f45445077d0ef634 split: test type: mteb/amazon_massive_scenario metrics: - type: accuracy value: 67.0712844653665 - type: f1 value: 66.75545422473901 task: type: Classification - dataset: config: en name: MTEB MassiveScenarioClassification (en) revision: 7d571f92784cd94a019292a1f45445077d0ef634 split: test type: mteb/amazon_massive_scenario metrics: - type: accuracy value: 74.4754539340955 - type: f1 value: 74.38427146553252 task: type: Classification - dataset: config: es name: MTEB MassiveScenarioClassification (es) revision: 7d571f92784cd94a019292a1f45445077d0ef634 split: test type: mteb/amazon_massive_scenario metrics: - type: accuracy value: 69.82515131136518 - type: f1 value: 69.63516462173847 task: type: Classification - dataset: config: fa name: MTEB MassiveScenarioClassification (fa) revision: 7d571f92784cd94a019292a1f45445077d0ef634 split: test type: mteb/amazon_massive_scenario metrics: - type: accuracy value: 68.70880968392737 - type: f1 value: 67.45420662567926 task: type: Classification - dataset: config: fi name: MTEB MassiveScenarioClassification (fi) revision: 7d571f92784cd94a019292a1f45445077d0ef634 split: test type: mteb/amazon_massive_scenario metrics: - type: accuracy value: 65.95494283792871 - type: f1 value: 65.06191009049222 task: type: Classification - dataset: config: fr name: MTEB MassiveScenarioClassification (fr) revision: 7d571f92784cd94a019292a1f45445077d0ef634 split: test type: mteb/amazon_massive_scenario metrics: - type: accuracy value: 68.75924680564896 - type: f1 value: 68.30833379585945 task: type: Classification - dataset: config: he name: MTEB MassiveScenarioClassification (he) revision: 7d571f92784cd94a019292a1f45445077d0ef634 split: test type: mteb/amazon_massive_scenario 
    metrics:
    - type: accuracy
      value: 63.806321452589096
    - type: f1
      value: 63.273048243765054
    task:
      type: Classification
  - dataset:
      config: hi
      name: MTEB MassiveScenarioClassification (hi)
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
      split: test
      type: mteb/amazon_massive_scenario
    metrics:
    - type: accuracy
      value: 67.68997982515133
    - type: f1
      value: 66.54703855381324
    task:
      type: Classification
  - dataset:
      config: hu
      name: MTEB MassiveScenarioClassification (hu)
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
      split: test
      type: mteb/amazon_massive_scenario
    metrics:
    - type: accuracy
      value: 66.46940147948891
    - type: f1
      value: 65.91017343463396
    task:
      type: Classification
  - dataset:
      config: hy
      name: MTEB MassiveScenarioClassification (hy)
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
      split: test
      type: mteb/amazon_massive_scenario
    metrics:
    - type: accuracy
      value: 59.49899125756556
    - type: f1
      value: 57.90333469917769
    task:
      type: Classification
  - dataset:
      config: id
      name: MTEB MassiveScenarioClassification (id)
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
      split: test
      type: mteb/amazon_massive_scenario
    metrics:
    - type: accuracy
      value: 67.9219905850706
    - type: f1
      value: 67.23169403762938
    task:
      type: Classification
  - dataset:
      config: is
      name: MTEB MassiveScenarioClassification (is)
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
      split: test
      type: mteb/amazon_massive_scenario
    metrics:
    - type: accuracy
      value: 56.486213853396094
    - type: f1
      value: 54.85282355583758
    task:
      type: Classification
  - dataset:
      config: it
      name: MTEB MassiveScenarioClassification (it)
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
      split: test
      type: mteb/amazon_massive_scenario
    metrics:
    - type: accuracy
      value: 69.04169468728985
    - type: f1
      value: 68.83833333320462
    task:
      type: Classification
  - dataset:
      config: ja
      name: MTEB MassiveScenarioClassification (ja)
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
      split: test
      type: mteb/amazon_massive_scenario
    metrics:
    - type: accuracy
      value: 73.88702084734365
    - type: f1
      value: 74.04474735232299
    task:
      type: Classification
  - dataset:
      config: jv
      name: MTEB MassiveScenarioClassification (jv)
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
      split: test
      type: mteb/amazon_massive_scenario
    metrics:
    - type: accuracy
      value: 56.63416274377943
    - type: f1
      value: 55.11332211687954
    task:
      type: Classification
  - dataset:
      config: ka
      name: MTEB MassiveScenarioClassification (ka)
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
      split: test
      type: mteb/amazon_massive_scenario
    metrics:
    - type: accuracy
      value: 52.23604572965702
    - type: f1
      value: 50.86529813991055
    task:
      type: Classification
  - dataset:
      config: km
      name: MTEB MassiveScenarioClassification (km)
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
      split: test
      type: mteb/amazon_massive_scenario
    metrics:
    - type: accuracy
      value: 46.62407531943511
    - type: f1
      value: 43.63485467164535
    task:
      type: Classification
  - dataset:
      config: kn
      name: MTEB MassiveScenarioClassification (kn)
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
      split: test
      type: mteb/amazon_massive_scenario
    metrics:
    - type: accuracy
      value: 59.15601882985878
    - type: f1
      value: 57.522837510959924
    task:
      type: Classification
  - dataset:
      config: ko
      name: MTEB MassiveScenarioClassification (ko)
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
      split: test
      type: mteb/amazon_massive_scenario
    metrics:
    - type: accuracy
      value: 69.84532616005382
    - type: f1
      value: 69.60021127179697
    task:
      type: Classification
  - dataset:
      config: lv
      name: MTEB MassiveScenarioClassification (lv)
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
      split: test
      type: mteb/amazon_massive_scenario
    metrics:
    - type: accuracy
      value: 56.65770006724949
    - type: f1
      value: 55.84219135523227
    task:
      type: Classification
  - dataset:
      config: ml
      name: MTEB MassiveScenarioClassification (ml)
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
      split: test
      type: mteb/amazon_massive_scenario
    metrics:
    - type: accuracy
      value: 66.53665097511768
    - type: f1
      value: 65.09087787792639
    task:
      type: Classification
  - dataset:
      config: mn
      name: MTEB MassiveScenarioClassification (mn)
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
      split: test
      type: mteb/amazon_massive_scenario
    metrics:
    - type: accuracy
      value: 59.31405514458642
    - type: f1
      value: 58.06135303831491
    task:
      type: Classification
  - dataset:
      config: ms
      name: MTEB MassiveScenarioClassification (ms)
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
      split: test
      type: mteb/amazon_massive_scenario
    metrics:
    - type: accuracy
      value: 64.88231338264964
    - type: f1
      value: 62.751099407787926
    task:
      type: Classification
  - dataset:
      config: my
      name: MTEB MassiveScenarioClassification (my)
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
      split: test
      type: mteb/amazon_massive_scenario
    metrics:
    - type: accuracy
      value: 58.86012104909213
    - type: f1
      value: 56.29118323058282
    task:
      type: Classification
  - dataset:
      config: nb
      name: MTEB MassiveScenarioClassification (nb)
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
      split: test
      type: mteb/amazon_massive_scenario
    metrics:
    - type: accuracy
      value: 67.37390719569602
    - type: f1
      value: 66.27922244885102
    task:
      type: Classification
  - dataset:
      config: nl
      name: MTEB MassiveScenarioClassification (nl)
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
      split: test
      type: mteb/amazon_massive_scenario
    metrics:
    - type: accuracy
      value: 70.8675184936113
    - type: f1
      value: 70.22146529932019
    task:
      type: Classification
  - dataset:
      config: pl
      name: MTEB MassiveScenarioClassification (pl)
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
      split: test
      type: mteb/amazon_massive_scenario
    metrics:
    - type: accuracy
      value: 68.2212508406187
    - type: f1
      value: 67.77454802056282
    task:
      type: Classification
  - dataset:
      config: pt
      name: MTEB MassiveScenarioClassification (pt)
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
      split: test
      type: mteb/amazon_massive_scenario
    metrics:
    - type: accuracy
      value: 68.18090114324143
    - type: f1
      value: 68.03737625431621
    task:
      type: Classification
  - dataset:
      config: ro
      name: MTEB MassiveScenarioClassification (ro)
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
      split: test
      type: mteb/amazon_massive_scenario
    metrics:
    - type: accuracy
      value: 64.65030262273034
    - type: f1
      value: 63.792945486912856
    task:
      type: Classification
  - dataset:
      config: ru
      name: MTEB MassiveScenarioClassification (ru)
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
      split: test
      type: mteb/amazon_massive_scenario
    metrics:
    - type: accuracy
      value: 63.772749631087066
    - type: f1
      value: 63.4539101720024
    - type: f1_weighted
      value: 62.778603897469566
    - type: main_score
      value: 63.772749631087066
    task:
      type: Classification
  - dataset:
      config: sl
      name: MTEB MassiveScenarioClassification (sl)
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
      split: test
      type: mteb/amazon_massive_scenario
    metrics:
    - type: accuracy
      value: 60.17821116341627
    - type: f1
      value: 59.3935969827171
    task:
      type: Classification
  - dataset:
      config: sq
      name: MTEB MassiveScenarioClassification (sq)
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
      split: test
      type: mteb/amazon_massive_scenario
    metrics:
    - type: accuracy
      value: 62.86146603900471
    - type: f1
      value: 60.133692735032376
    task:
      type: Classification
  - dataset:
      config: sv
      name: MTEB MassiveScenarioClassification (sv)
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
      split: test
      type: mteb/amazon_massive_scenario
    metrics:
    - type: accuracy
      value: 70.89441829186282
    - type: f1
      value: 70.03064076194089
    task:
      type: Classification
  - dataset:
      config: sw
      name: MTEB MassiveScenarioClassification (sw)
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
      split: test
      type: mteb/amazon_massive_scenario
    metrics:
    - type: accuracy
      value: 58.15063887020847
    - type: f1
      value: 56.23326278499678
    task:
      type: Classification
  - dataset:
      config: ta
      name: MTEB MassiveScenarioClassification (ta)
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
      split: test
      type: mteb/amazon_massive_scenario
    metrics:
    - type: accuracy
      value: 59.43846671149966
    - type: f1
      value: 57.70440450281974
    task:
      type: Classification
  - dataset:
      config: te
      name: MTEB MassiveScenarioClassification (te)
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
      split: test
      type: mteb/amazon_massive_scenario
    metrics:
    - type: accuracy
      value: 60.8507061197041
    - type: f1
      value: 59.22916396061171
    task:
      type: Classification
  - dataset:
      config: th
      name: MTEB MassiveScenarioClassification (th)
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
      split: test
      type: mteb/amazon_massive_scenario
    metrics:
    - type: accuracy
      value: 70.65568258238063
    - type: f1
      value: 69.90736239440633
    task:
      type: Classification
  - dataset:
      config: tl
      name: MTEB MassiveScenarioClassification (tl)
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
      split: test
      type: mteb/amazon_massive_scenario
    metrics:
    - type: accuracy
      value: 60.8843308675185
    - type: f1
      value: 59.30332663713599
    task:
      type: Classification
  - dataset:
      config: tr
      name: MTEB MassiveScenarioClassification (tr)
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
      split: test
      type: mteb/amazon_massive_scenario
    metrics:
    - type: accuracy
      value: 68.05312710154674
    - type: f1
      value: 67.44024062594775
    task:
      type: Classification
  - dataset:
      config: ur
      name: MTEB MassiveScenarioClassification (ur)
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
      split: test
      type: mteb/amazon_massive_scenario
    metrics:
    - type: accuracy
      value: 62.111634162743776
    - type: f1
      value: 60.89083013084519
    task:
      type: Classification
  - dataset:
      config: vi
      name: MTEB MassiveScenarioClassification (vi)
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
      split: test
      type: mteb/amazon_massive_scenario
    metrics:
    - type: accuracy
      value: 67.44115669132482
    - type: f1
      value: 67.92227541674552
    task:
      type: Classification
  - dataset:
      config: zh-CN
      name: MTEB MassiveScenarioClassification (zh-CN)
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
      split: test
      type: mteb/amazon_massive_scenario
    metrics:
    - type: accuracy
      value: 74.4687289845326
    - type: f1
      value: 74.16376793486025
    task:
      type: Classification
  - dataset:
      config: zh-TW
      name: MTEB MassiveScenarioClassification (zh-TW)
      revision: 7d571f92784cd94a019292a1f45445077d0ef634
      split: test
      type: mteb/amazon_massive_scenario
    metrics:
    - type: accuracy
      value: 68.31876260928043
    - type: f1
      value: 68.5246745215607
    task:
      type: Classification
  - dataset:
      config: default
      name: MTEB MedrxivClusteringP2P
      revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
      split: test
      type: mteb/medrxiv-clustering-p2p
    metrics:
    - type: v_measure
      value: 30.90431696479766
    task:
      type: Clustering
  - dataset:
      config: default
      name: MTEB MedrxivClusteringS2S
      revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
      split: test
      type:
mteb/medrxiv-clustering-s2s metrics: - type: v_measure value: 27.259158476693774 task: type: Clustering - dataset: config: default name: MTEB MindSmallReranking revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 split: test type: mteb/mind_small metrics: - type: map value: 30.28445330838555 - type: mrr value: 31.15758529581164 task: type: Reranking - dataset: config: default name: MTEB NFCorpus revision: None split: test type: nfcorpus metrics: - type: map_at_1 value: 5.353 - type: map_at_10 value: 11.565 - type: map_at_100 value: 14.097000000000001 - type: map_at_1000 value: 15.354999999999999 - type: map_at_3 value: 8.749 - type: map_at_5 value: 9.974 - type: mrr_at_1 value: 42.105 - type: mrr_at_10 value: 50.589 - type: mrr_at_100 value: 51.187000000000005 - type: mrr_at_1000 value: 51.233 - type: mrr_at_3 value: 48.246 - type: mrr_at_5 value: 49.546 - type: ndcg_at_1 value: 40.402 - type: ndcg_at_10 value: 31.009999999999998 - type: ndcg_at_100 value: 28.026 - type: ndcg_at_1000 value: 36.905 - type: ndcg_at_3 value: 35.983 - type: ndcg_at_5 value: 33.764 - type: precision_at_1 value: 42.105 - type: precision_at_10 value: 22.786 - type: precision_at_100 value: 6.916 - type: precision_at_1000 value: 1.981 - type: precision_at_3 value: 33.333 - type: precision_at_5 value: 28.731 - type: recall_at_1 value: 5.353 - type: recall_at_10 value: 15.039 - type: recall_at_100 value: 27.348 - type: recall_at_1000 value: 59.453 - type: recall_at_3 value: 9.792 - type: recall_at_5 value: 11.882 task: type: Retrieval - dataset: config: default name: MTEB NQ revision: None split: test type: nq metrics: - type: map_at_1 value: 33.852 - type: map_at_10 value: 48.924 - type: map_at_100 value: 49.854 - type: map_at_1000 value: 49.886 - type: map_at_3 value: 44.9 - type: map_at_5 value: 47.387 - type: mrr_at_1 value: 38.035999999999994 - type: mrr_at_10 value: 51.644 - type: mrr_at_100 value: 52.339 - type: mrr_at_1000 value: 52.35999999999999 - type: mrr_at_3 value: 48.421 - type: mrr_at_5 value: 50.468999999999994 - type: ndcg_at_1 value: 38.007000000000005 - type: ndcg_at_10 value: 56.293000000000006 - type: ndcg_at_100 value: 60.167 - type: ndcg_at_1000 value: 60.916000000000004 - type: ndcg_at_3 value: 48.903999999999996 - type: ndcg_at_5 value: 52.978 - type: precision_at_1 value: 38.007000000000005 - type: precision_at_10 value: 9.041 - type: precision_at_100 value: 1.1199999999999999 - type: precision_at_1000 value: 0.11900000000000001 - type: precision_at_3 value: 22.084 - type: precision_at_5 value: 15.608 - type: recall_at_1 value: 33.852 - type: recall_at_10 value: 75.893 - type: recall_at_100 value: 92.589 - type: recall_at_1000 value: 98.153 - type: recall_at_3 value: 56.969 - type: recall_at_5 value: 66.283 task: type: Retrieval - dataset: config: default name: MTEB QuoraRetrieval revision: None split: test type: quora metrics: - type: map_at_1 value: 69.174 - type: map_at_10 value: 82.891 - type: map_at_100 value: 83.545 - type: map_at_1000 value: 83.56700000000001 - type: map_at_3 value: 79.944 - type: map_at_5 value: 81.812 - type: mrr_at_1 value: 79.67999999999999 - type: mrr_at_10 value: 86.279 - type: mrr_at_100 value: 86.39 - type: mrr_at_1000 value: 86.392 - type: mrr_at_3 value: 85.21 - type: mrr_at_5 value: 85.92999999999999 - type: ndcg_at_1 value: 79.69000000000001 - type: ndcg_at_10 value: 86.929 - type: ndcg_at_100 value: 88.266 - type: ndcg_at_1000 value: 88.428 - type: ndcg_at_3 value: 83.899 - type: ndcg_at_5 value: 85.56700000000001 - type: precision_at_1 value: 
79.69000000000001 - type: precision_at_10 value: 13.161000000000001 - type: precision_at_100 value: 1.513 - type: precision_at_1000 value: 0.156 - type: precision_at_3 value: 36.603 - type: precision_at_5 value: 24.138 - type: recall_at_1 value: 69.174 - type: recall_at_10 value: 94.529 - type: recall_at_100 value: 99.15 - type: recall_at_1000 value: 99.925 - type: recall_at_3 value: 85.86200000000001 - type: recall_at_5 value: 90.501 task: type: Retrieval - dataset: config: default name: MTEB RedditClustering revision: 24640382cdbf8abc73003fb0fa6d111a705499eb split: test type: mteb/reddit-clustering metrics: - type: v_measure value: 39.13064340585255 task: type: Clustering - dataset: config: default name: MTEB RedditClusteringP2P revision: 282350215ef01743dc01b456c7f5241fa8937f16 split: test type: mteb/reddit-clustering-p2p metrics: - type: v_measure value: 58.97884249325877 task: type: Clustering - dataset: config: default name: MTEB SCIDOCS revision: None split: test type: scidocs metrics: - type: map_at_1 value: 3.4680000000000004 - type: map_at_10 value: 7.865 - type: map_at_100 value: 9.332 - type: map_at_1000 value: 9.587 - type: map_at_3 value: 5.800000000000001 - type: map_at_5 value: 6.8790000000000004 - type: mrr_at_1 value: 17.0 - type: mrr_at_10 value: 25.629 - type: mrr_at_100 value: 26.806 - type: mrr_at_1000 value: 26.889000000000003 - type: mrr_at_3 value: 22.8 - type: mrr_at_5 value: 24.26 - type: ndcg_at_1 value: 17.0 - type: ndcg_at_10 value: 13.895 - type: ndcg_at_100 value: 20.491999999999997 - type: ndcg_at_1000 value: 25.759999999999998 - type: ndcg_at_3 value: 13.347999999999999 - type: ndcg_at_5 value: 11.61 - type: precision_at_1 value: 17.0 - type: precision_at_10 value: 7.090000000000001 - type: precision_at_100 value: 1.669 - type: precision_at_1000 value: 0.294 - type: precision_at_3 value: 12.3 - type: precision_at_5 value: 10.02 - type: recall_at_1 value: 3.4680000000000004 - type: recall_at_10 value: 14.363000000000001 - type: recall_at_100 value: 33.875 - type: recall_at_1000 value: 59.711999999999996 - type: recall_at_3 value: 7.483 - type: recall_at_5 value: 10.173 task: type: Retrieval - dataset: config: default name: MTEB SICK-R revision: a6ea5a8cab320b040a23452cc28066d9beae2cee split: test type: mteb/sickr-sts metrics: - type: cos_sim_pearson value: 83.04084311714061 - type: cos_sim_spearman value: 77.51342467443078 - type: euclidean_pearson value: 80.0321166028479 - type: euclidean_spearman value: 77.29249114733226 - type: manhattan_pearson value: 80.03105964262431 - type: manhattan_spearman value: 77.22373689514794 task: type: STS - dataset: config: default name: MTEB STS12 revision: a0d554a64d88156834ff5ae9920b964011b16384 split: test type: mteb/sts12-sts metrics: - type: cos_sim_pearson value: 84.1680158034387 - type: cos_sim_spearman value: 76.55983344071117 - type: euclidean_pearson value: 79.75266678300143 - type: euclidean_spearman value: 75.34516823467025 - type: manhattan_pearson value: 79.75959151517357 - type: manhattan_spearman value: 75.42330344141912 task: type: STS - dataset: config: default name: MTEB STS13 revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca split: test type: mteb/sts13-sts metrics: - type: cos_sim_pearson value: 76.48898993209346 - type: cos_sim_spearman value: 76.96954120323366 - type: euclidean_pearson value: 76.94139109279668 - type: euclidean_spearman value: 76.85860283201711 - type: manhattan_pearson value: 76.6944095091912 - type: manhattan_spearman value: 76.61096912972553 task: type: STS - dataset: config: 
default name: MTEB STS14 revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 split: test type: mteb/sts14-sts metrics: - type: cos_sim_pearson value: 77.85082366246944 - type: cos_sim_spearman value: 75.52053350101731 - type: euclidean_pearson value: 77.1165845070926 - type: euclidean_spearman value: 75.31216065884388 - type: manhattan_pearson value: 77.06193941833494 - type: manhattan_spearman value: 75.31003701700112 task: type: STS - dataset: config: default name: MTEB STS15 revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 split: test type: mteb/sts15-sts metrics: - type: cos_sim_pearson value: 86.36305246526497 - type: cos_sim_spearman value: 87.11704613927415 - type: euclidean_pearson value: 86.04199125810939 - type: euclidean_spearman value: 86.51117572414263 - type: manhattan_pearson value: 86.0805106816633 - type: manhattan_spearman value: 86.52798366512229 task: type: STS - dataset: config: default name: MTEB STS16 revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 split: test type: mteb/sts16-sts metrics: - type: cos_sim_pearson value: 82.18536255599724 - type: cos_sim_spearman value: 83.63377151025418 - type: euclidean_pearson value: 83.24657467993141 - type: euclidean_spearman value: 84.02751481993825 - type: manhattan_pearson value: 83.11941806582371 - type: manhattan_spearman value: 83.84251281019304 task: type: STS - dataset: config: ko-ko name: MTEB STS17 (ko-ko) revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d split: test type: mteb/sts17-crosslingual-sts metrics: - type: cos_sim_pearson value: 78.95816528475514 - type: cos_sim_spearman value: 78.86607380120462 - type: euclidean_pearson value: 78.51268699230545 - type: euclidean_spearman value: 79.11649316502229 - type: manhattan_pearson value: 78.32367302808157 - type: manhattan_spearman value: 78.90277699624637 task: type: STS - dataset: config: ar-ar name: MTEB STS17 (ar-ar) revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d split: test type: mteb/sts17-crosslingual-sts metrics: - type: cos_sim_pearson value: 72.89126914997624 - type: cos_sim_spearman value: 73.0296921832678 - type: euclidean_pearson value: 71.50385903677738 - type: euclidean_spearman value: 73.13368899716289 - type: manhattan_pearson value: 71.47421463379519 - type: manhattan_spearman value: 73.03383242946575 task: type: STS - dataset: config: en-ar name: MTEB STS17 (en-ar) revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d split: test type: mteb/sts17-crosslingual-sts metrics: - type: cos_sim_pearson value: 59.22923684492637 - type: cos_sim_spearman value: 57.41013211368396 - type: euclidean_pearson value: 61.21107388080905 - type: euclidean_spearman value: 60.07620768697254 - type: manhattan_pearson value: 59.60157142786555 - type: manhattan_spearman value: 59.14069604103739 task: type: STS - dataset: config: en-de name: MTEB STS17 (en-de) revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d split: test type: mteb/sts17-crosslingual-sts metrics: - type: cos_sim_pearson value: 76.24345978774299 - type: cos_sim_spearman value: 77.24225743830719 - type: euclidean_pearson value: 76.66226095469165 - type: euclidean_spearman value: 77.60708820493146 - type: manhattan_pearson value: 76.05303324760429 - type: manhattan_spearman value: 76.96353149912348 task: type: STS - dataset: config: en-en name: MTEB STS17 (en-en) revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d split: test type: mteb/sts17-crosslingual-sts metrics: - type: cos_sim_pearson value: 85.50879160160852 - type: cos_sim_spearman value: 86.43594662965224 - type: euclidean_pearson value: 
86.06846012826577 - type: euclidean_spearman value: 86.02041395794136 - type: manhattan_pearson value: 86.10916255616904 - type: manhattan_spearman value: 86.07346068198953 task: type: STS - dataset: config: en-tr name: MTEB STS17 (en-tr) revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d split: test type: mteb/sts17-crosslingual-sts metrics: - type: cos_sim_pearson value: 58.39803698977196 - type: cos_sim_spearman value: 55.96910950423142 - type: euclidean_pearson value: 58.17941175613059 - type: euclidean_spearman value: 55.03019330522745 - type: manhattan_pearson value: 57.333358138183286 - type: manhattan_spearman value: 54.04614023149965 task: type: STS - dataset: config: es-en name: MTEB STS17 (es-en) revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d split: test type: mteb/sts17-crosslingual-sts metrics: - type: cos_sim_pearson value: 70.98304089637197 - type: cos_sim_spearman value: 72.44071656215888 - type: euclidean_pearson value: 72.19224359033983 - type: euclidean_spearman value: 73.89871188913025 - type: manhattan_pearson value: 71.21098311547406 - type: manhattan_spearman value: 72.93405764824821 task: type: STS - dataset: config: es-es name: MTEB STS17 (es-es) revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d split: test type: mteb/sts17-crosslingual-sts metrics: - type: cos_sim_pearson value: 85.99792397466308 - type: cos_sim_spearman value: 84.83824377879495 - type: euclidean_pearson value: 85.70043288694438 - type: euclidean_spearman value: 84.70627558703686 - type: manhattan_pearson value: 85.89570850150801 - type: manhattan_spearman value: 84.95806105313007 task: type: STS - dataset: config: fr-en name: MTEB STS17 (fr-en) revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d split: test type: mteb/sts17-crosslingual-sts metrics: - type: cos_sim_pearson value: 72.21850322994712 - type: cos_sim_spearman value: 72.28669398117248 - type: euclidean_pearson value: 73.40082510412948 - type: euclidean_spearman value: 73.0326539281865 - type: manhattan_pearson value: 71.8659633964841 - type: manhattan_spearman value: 71.57817425823303 task: type: STS - dataset: config: it-en name: MTEB STS17 (it-en) revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d split: test type: mteb/sts17-crosslingual-sts metrics: - type: cos_sim_pearson value: 75.80921368595645 - type: cos_sim_spearman value: 77.33209091229315 - type: euclidean_pearson value: 76.53159540154829 - type: euclidean_spearman value: 78.17960842810093 - type: manhattan_pearson value: 76.13530186637601 - type: manhattan_spearman value: 78.00701437666875 task: type: STS - dataset: config: nl-en name: MTEB STS17 (nl-en) revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d split: test type: mteb/sts17-crosslingual-sts metrics: - type: cos_sim_pearson value: 74.74980608267349 - type: cos_sim_spearman value: 75.37597374318821 - type: euclidean_pearson value: 74.90506081911661 - type: euclidean_spearman value: 75.30151613124521 - type: manhattan_pearson value: 74.62642745918002 - type: manhattan_spearman value: 75.18619716592303 task: type: STS - dataset: config: en name: MTEB STS22 (en) revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 split: test type: mteb/sts22-crosslingual-sts metrics: - type: cos_sim_pearson value: 59.632662289205584 - type: cos_sim_spearman value: 60.938543391610914 - type: euclidean_pearson value: 62.113200529767056 - type: euclidean_spearman value: 61.410312633261164 - type: manhattan_pearson value: 61.75494698945686 - type: manhattan_spearman value: 60.92726195322362 task: type: STS - dataset: config: de name: 
MTEB STS22 (de) revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 split: test type: mteb/sts22-crosslingual-sts metrics: - type: cos_sim_pearson value: 45.283470551557244 - type: cos_sim_spearman value: 53.44833015864201 - type: euclidean_pearson value: 41.17892011120893 - type: euclidean_spearman value: 53.81441383126767 - type: manhattan_pearson value: 41.17482200420659 - type: manhattan_spearman value: 53.82180269276363 task: type: STS - dataset: config: es name: MTEB STS22 (es) revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 split: test type: mteb/sts22-crosslingual-sts metrics: - type: cos_sim_pearson value: 60.5069165306236 - type: cos_sim_spearman value: 66.87803259033826 - type: euclidean_pearson value: 63.5428979418236 - type: euclidean_spearman value: 66.9293576586897 - type: manhattan_pearson value: 63.59789526178922 - type: manhattan_spearman value: 66.86555009875066 task: type: STS - dataset: config: pl name: MTEB STS22 (pl) revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 split: test type: mteb/sts22-crosslingual-sts metrics: - type: cos_sim_pearson value: 28.23026196280264 - type: cos_sim_spearman value: 35.79397812652861 - type: euclidean_pearson value: 17.828102102767353 - type: euclidean_spearman value: 35.721501145568894 - type: manhattan_pearson value: 17.77134274219677 - type: manhattan_spearman value: 35.98107902846267 task: type: STS - dataset: config: tr name: MTEB STS22 (tr) revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 split: test type: mteb/sts22-crosslingual-sts metrics: - type: cos_sim_pearson value: 56.51946541393812 - type: cos_sim_spearman value: 63.714686006214485 - type: euclidean_pearson value: 58.32104651305898 - type: euclidean_spearman value: 62.237110895702216 - type: manhattan_pearson value: 58.579416468759185 - type: manhattan_spearman value: 62.459738981727 task: type: STS - dataset: config: ar name: MTEB STS22 (ar) revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 split: test type: mteb/sts22-crosslingual-sts metrics: - type: cos_sim_pearson value: 48.76009839569795 - type: cos_sim_spearman value: 56.65188431953149 - type: euclidean_pearson value: 50.997682160915595 - type: euclidean_spearman value: 55.99910008818135 - type: manhattan_pearson value: 50.76220659606342 - type: manhattan_spearman value: 55.517347595391456 task: type: STS - dataset: config: ru name: MTEB STS22 (ru) revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 split: test type: mteb/sts22-crosslingual-sts metrics: - type: cosine_pearson value: 50.724322379215934 - type: cosine_spearman value: 59.90449732164651 - type: euclidean_pearson value: 50.227545226784024 - type: euclidean_spearman value: 59.898906527601085 - type: main_score value: 59.90449732164651 - type: manhattan_pearson value: 50.21762139819405 - type: manhattan_spearman value: 59.761039813759 - type: pearson value: 50.724322379215934 - type: spearman value: 59.90449732164651 task: type: STS - dataset: config: zh name: MTEB STS22 (zh) revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 split: test type: mteb/sts22-crosslingual-sts metrics: - type: cos_sim_pearson value: 54.717524559088005 - type: cos_sim_spearman value: 66.83570886252286 - type: euclidean_pearson value: 58.41338625505467 - type: euclidean_spearman value: 66.68991427704938 - type: manhattan_pearson value: 58.78638572916807 - type: manhattan_spearman value: 66.58684161046335 task: type: STS - dataset: config: fr name: MTEB STS22 (fr) revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 split: test type: mteb/sts22-crosslingual-sts metrics: - 
type: cos_sim_pearson value: 73.2962042954962 - type: cos_sim_spearman value: 76.58255504852025 - type: euclidean_pearson value: 75.70983192778257 - type: euclidean_spearman value: 77.4547684870542 - type: manhattan_pearson value: 75.75565853870485 - type: manhattan_spearman value: 76.90208974949428 task: type: STS - dataset: config: de-en name: MTEB STS22 (de-en) revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 split: test type: mteb/sts22-crosslingual-sts metrics: - type: cos_sim_pearson value: 54.47396266924846 - type: cos_sim_spearman value: 56.492267162048606 - type: euclidean_pearson value: 55.998505203070195 - type: euclidean_spearman value: 56.46447012960222 - type: manhattan_pearson value: 54.873172394430995 - type: manhattan_spearman value: 56.58111534551218 task: type: STS - dataset: config: es-en name: MTEB STS22 (es-en) revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 split: test type: mteb/sts22-crosslingual-sts metrics: - type: cos_sim_pearson value: 69.87177267688686 - type: cos_sim_spearman value: 74.57160943395763 - type: euclidean_pearson value: 70.88330406826788 - type: euclidean_spearman value: 74.29767636038422 - type: manhattan_pearson value: 71.38245248369536 - type: manhattan_spearman value: 74.53102232732175 task: type: STS - dataset: config: it name: MTEB STS22 (it) revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 split: test type: mteb/sts22-crosslingual-sts metrics: - type: cos_sim_pearson value: 72.80225656959544 - type: cos_sim_spearman value: 76.52646173725735 - type: euclidean_pearson value: 73.95710720200799 - type: euclidean_spearman value: 76.54040031984111 - type: manhattan_pearson value: 73.89679971946774 - type: manhattan_spearman value: 76.60886958161574 task: type: STS - dataset: config: pl-en name: MTEB STS22 (pl-en) revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 split: test type: mteb/sts22-crosslingual-sts metrics: - type: cos_sim_pearson value: 70.70844249898789 - type: cos_sim_spearman value: 72.68571783670241 - type: euclidean_pearson value: 72.38800772441031 - type: euclidean_spearman value: 72.86804422703312 - type: manhattan_pearson value: 71.29840508203515 - type: manhattan_spearman value: 71.86264441749513 task: type: STS - dataset: config: zh-en name: MTEB STS22 (zh-en) revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 split: test type: mteb/sts22-crosslingual-sts metrics: - type: cos_sim_pearson value: 58.647478923935694 - type: cos_sim_spearman value: 63.74453623540931 - type: euclidean_pearson value: 59.60138032437505 - type: euclidean_spearman value: 63.947930832166065 - type: manhattan_pearson value: 58.59735509491861 - type: manhattan_spearman value: 62.082503844627404 task: type: STS - dataset: config: es-it name: MTEB STS22 (es-it) revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 split: test type: mteb/sts22-crosslingual-sts metrics: - type: cos_sim_pearson value: 65.8722516867162 - type: cos_sim_spearman value: 71.81208592523012 - type: euclidean_pearson value: 67.95315252165956 - type: euclidean_spearman value: 73.00749822046009 - type: manhattan_pearson value: 68.07884688638924 - type: manhattan_spearman value: 72.34210325803069 task: type: STS - dataset: config: de-fr name: MTEB STS22 (de-fr) revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 split: test type: mteb/sts22-crosslingual-sts metrics: - type: cos_sim_pearson value: 54.5405814240949 - type: cos_sim_spearman value: 60.56838649023775 - type: euclidean_pearson value: 53.011731611314104 - type: euclidean_spearman value: 58.533194841668426 - type: 
manhattan_pearson value: 53.623067729338494 - type: manhattan_spearman value: 58.018756154446926 task: type: STS - dataset: config: de-pl name: MTEB STS22 (de-pl) revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 split: test type: mteb/sts22-crosslingual-sts metrics: - type: cos_sim_pearson value: 13.611046866216112 - type: cos_sim_spearman value: 28.238192909158492 - type: euclidean_pearson value: 22.16189199885129 - type: euclidean_spearman value: 35.012895679076564 - type: manhattan_pearson value: 21.969771178698387 - type: manhattan_spearman value: 32.456985088607475 task: type: STS - dataset: config: fr-pl name: MTEB STS22 (fr-pl) revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 split: test type: mteb/sts22-crosslingual-sts metrics: - type: cos_sim_pearson value: 74.58077407011655 - type: cos_sim_spearman value: 84.51542547285167 - type: euclidean_pearson value: 74.64613843596234 - type: euclidean_spearman value: 84.51542547285167 - type: manhattan_pearson value: 75.15335973101396 - type: manhattan_spearman value: 84.51542547285167 task: type: STS - dataset: config: default name: MTEB STSBenchmark revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 split: test type: mteb/stsbenchmark-sts metrics: - type: cos_sim_pearson value: 82.0739825531578 - type: cos_sim_spearman value: 84.01057479311115 - type: euclidean_pearson value: 83.85453227433344 - type: euclidean_spearman value: 84.01630226898655 - type: manhattan_pearson value: 83.75323603028978 - type: manhattan_spearman value: 83.89677983727685 task: type: STS - dataset: config: default name: MTEB SciDocsRR revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab split: test type: mteb/scidocs-reranking metrics: - type: map value: 78.12945623123957 - type: mrr value: 93.87738713719106 task: type: Reranking - dataset: config: default name: MTEB SciFact revision: None split: test type: scifact metrics: - type: map_at_1 value: 52.983000000000004 - type: map_at_10 value: 62.946000000000005 - type: map_at_100 value: 63.514 - type: map_at_1000 value: 63.554 - type: map_at_3 value: 60.183 - type: map_at_5 value: 61.672000000000004 - type: mrr_at_1 value: 55.667 - type: mrr_at_10 value: 64.522 - type: mrr_at_100 value: 64.957 - type: mrr_at_1000 value: 64.995 - type: mrr_at_3 value: 62.388999999999996 - type: mrr_at_5 value: 63.639 - type: ndcg_at_1 value: 55.667 - type: ndcg_at_10 value: 67.704 - type: ndcg_at_100 value: 70.299 - type: ndcg_at_1000 value: 71.241 - type: ndcg_at_3 value: 62.866 - type: ndcg_at_5 value: 65.16999999999999 - type: precision_at_1 value: 55.667 - type: precision_at_10 value: 9.033 - type: precision_at_100 value: 1.053 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 24.444 - type: precision_at_5 value: 16.133 - type: recall_at_1 value: 52.983000000000004 - type: recall_at_10 value: 80.656 - type: recall_at_100 value: 92.5 - type: recall_at_1000 value: 99.667 - type: recall_at_3 value: 67.744 - type: recall_at_5 value: 73.433 task: type: Retrieval - dataset: config: default name: MTEB SprintDuplicateQuestions revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 split: test type: mteb/sprintduplicatequestions-pairclassification metrics: - type: cos_sim_accuracy value: 99.72772277227723 - type: cos_sim_ap value: 92.17845897992215 - type: cos_sim_f1 value: 85.9746835443038 - type: cos_sim_precision value: 87.07692307692308 - type: cos_sim_recall value: 84.89999999999999 - type: dot_accuracy value: 99.3039603960396 - type: dot_ap value: 60.70244020124878 - type: dot_f1 value: 
59.92742353551063 - type: dot_precision value: 62.21743810548978 - type: dot_recall value: 57.8 - type: euclidean_accuracy value: 99.71683168316832 - type: euclidean_ap value: 91.53997039964659 - type: euclidean_f1 value: 84.88372093023257 - type: euclidean_precision value: 90.02242152466367 - type: euclidean_recall value: 80.30000000000001 - type: manhattan_accuracy value: 99.72376237623763 - type: manhattan_ap value: 91.80756777790289 - type: manhattan_f1 value: 85.48468106479157 - type: manhattan_precision value: 85.8728557013118 - type: manhattan_recall value: 85.1 - type: max_accuracy value: 99.72772277227723 - type: max_ap value: 92.17845897992215 - type: max_f1 value: 85.9746835443038 task: type: PairClassification - dataset: config: default name: MTEB StackExchangeClustering revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 split: test type: mteb/stackexchange-clustering metrics: - type: v_measure value: 53.52464042600003 task: type: Clustering - dataset: config: default name: MTEB StackExchangeClusteringP2P revision: 815ca46b2622cec33ccafc3735d572c266efdb44 split: test type: mteb/stackexchange-clustering-p2p metrics: - type: v_measure value: 32.071631948736 task: type: Clustering - dataset: config: default name: MTEB StackOverflowDupQuestions revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 split: test type: mteb/stackoverflowdupquestions-reranking metrics: - type: map value: 49.19552407604654 - type: mrr value: 49.95269130379425 task: type: Reranking - dataset: config: default name: MTEB SummEval revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c split: test type: mteb/summeval metrics: - type: cos_sim_pearson value: 29.345293033095427 - type: cos_sim_spearman value: 29.976931423258403 - type: dot_pearson value: 27.047078008958408 - type: dot_spearman value: 27.75894368380218 task: type: Summarization - dataset: config: default name: MTEB TRECCOVID revision: None split: test type: trec-covid metrics: - type: map_at_1 value: 0.22 - type: map_at_10 value: 1.706 - type: map_at_100 value: 9.634 - type: map_at_1000 value: 23.665 - type: map_at_3 value: 0.5950000000000001 - type: map_at_5 value: 0.95 - type: mrr_at_1 value: 86.0 - type: mrr_at_10 value: 91.8 - type: mrr_at_100 value: 91.8 - type: mrr_at_1000 value: 91.8 - type: mrr_at_3 value: 91.0 - type: mrr_at_5 value: 91.8 - type: ndcg_at_1 value: 80.0 - type: ndcg_at_10 value: 72.573 - type: ndcg_at_100 value: 53.954 - type: ndcg_at_1000 value: 47.760999999999996 - type: ndcg_at_3 value: 76.173 - type: ndcg_at_5 value: 75.264 - type: precision_at_1 value: 86.0 - type: precision_at_10 value: 76.4 - type: precision_at_100 value: 55.50000000000001 - type: precision_at_1000 value: 21.802 - type: precision_at_3 value: 81.333 - type: precision_at_5 value: 80.4 - type: recall_at_1 value: 0.22 - type: recall_at_10 value: 1.925 - type: recall_at_100 value: 12.762 - type: recall_at_1000 value: 44.946000000000005 - type: recall_at_3 value: 0.634 - type: recall_at_5 value: 1.051 task: type: Retrieval - dataset: config: sqi-eng name: MTEB Tatoeba (sqi-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 91.0 - type: f1 value: 88.55666666666666 - type: precision value: 87.46166666666667 - type: recall value: 91.0 task: type: BitextMining - dataset: config: fry-eng name: MTEB Tatoeba (fry-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 57.22543352601156 - type: f1 value: 
51.03220478943021 - type: precision value: 48.8150289017341 - type: recall value: 57.22543352601156 task: type: BitextMining - dataset: config: kur-eng name: MTEB Tatoeba (kur-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 46.58536585365854 - type: f1 value: 39.66870798578116 - type: precision value: 37.416085946573745 - type: recall value: 46.58536585365854 task: type: BitextMining - dataset: config: tur-eng name: MTEB Tatoeba (tur-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 89.7 - type: f1 value: 86.77999999999999 - type: precision value: 85.45333333333332 - type: recall value: 89.7 task: type: BitextMining - dataset: config: deu-eng name: MTEB Tatoeba (deu-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 97.39999999999999 - type: f1 value: 96.58333333333331 - type: precision value: 96.2 - type: recall value: 97.39999999999999 task: type: BitextMining - dataset: config: nld-eng name: MTEB Tatoeba (nld-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 92.4 - type: f1 value: 90.3 - type: precision value: 89.31666666666668 - type: recall value: 92.4 task: type: BitextMining - dataset: config: ron-eng name: MTEB Tatoeba (ron-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 86.9 - type: f1 value: 83.67190476190476 - type: precision value: 82.23333333333332 - type: recall value: 86.9 task: type: BitextMining - dataset: config: ang-eng name: MTEB Tatoeba (ang-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 50.0 - type: f1 value: 42.23229092632078 - type: precision value: 39.851634683724235 - type: recall value: 50.0 task: type: BitextMining - dataset: config: ido-eng name: MTEB Tatoeba (ido-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 76.3 - type: f1 value: 70.86190476190477 - type: precision value: 68.68777777777777 - type: recall value: 76.3 task: type: BitextMining - dataset: config: jav-eng name: MTEB Tatoeba (jav-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 57.073170731707314 - type: f1 value: 50.658958927251604 - type: precision value: 48.26480836236933 - type: recall value: 57.073170731707314 task: type: BitextMining - dataset: config: isl-eng name: MTEB Tatoeba (isl-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 68.2 - type: f1 value: 62.156507936507936 - type: precision value: 59.84964285714286 - type: recall value: 68.2 task: type: BitextMining - dataset: config: slv-eng name: MTEB Tatoeba (slv-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 77.52126366950182 - type: f1 value: 72.8496210148701 - type: precision value: 70.92171498003819 - type: recall value: 77.52126366950182 task: type: BitextMining - dataset: config: cym-eng name: MTEB Tatoeba (cym-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: 
mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 70.78260869565217 - type: f1 value: 65.32422360248447 - type: precision value: 63.063067367415194 - type: recall value: 70.78260869565217 task: type: BitextMining - dataset: config: kaz-eng name: MTEB Tatoeba (kaz-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 78.43478260869566 - type: f1 value: 73.02608695652172 - type: precision value: 70.63768115942028 - type: recall value: 78.43478260869566 task: type: BitextMining - dataset: config: est-eng name: MTEB Tatoeba (est-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 60.9 - type: f1 value: 55.309753694581275 - type: precision value: 53.130476190476195 - type: recall value: 60.9 task: type: BitextMining - dataset: config: heb-eng name: MTEB Tatoeba (heb-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 72.89999999999999 - type: f1 value: 67.92023809523809 - type: precision value: 65.82595238095237 - type: recall value: 72.89999999999999 task: type: BitextMining - dataset: config: gla-eng name: MTEB Tatoeba (gla-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 46.80337756332931 - type: f1 value: 39.42174900558496 - type: precision value: 36.97101116280851 - type: recall value: 46.80337756332931 task: type: BitextMining - dataset: config: mar-eng name: MTEB Tatoeba (mar-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 89.8 - type: f1 value: 86.79 - type: precision value: 85.375 - type: recall value: 89.8 task: type: BitextMining - dataset: config: lat-eng name: MTEB Tatoeba (lat-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 47.199999999999996 - type: f1 value: 39.95484348984349 - type: precision value: 37.561071428571424 - type: recall value: 47.199999999999996 task: type: BitextMining - dataset: config: bel-eng name: MTEB Tatoeba (bel-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 87.8 - type: f1 value: 84.68190476190475 - type: precision value: 83.275 - type: recall value: 87.8 task: type: BitextMining - dataset: config: pms-eng name: MTEB Tatoeba (pms-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 48.76190476190476 - type: f1 value: 42.14965986394558 - type: precision value: 39.96743626743626 - type: recall value: 48.76190476190476 task: type: BitextMining - dataset: config: gle-eng name: MTEB Tatoeba (gle-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 66.10000000000001 - type: f1 value: 59.58580086580086 - type: precision value: 57.150238095238095 - type: recall value: 66.10000000000001 task: type: BitextMining - dataset: config: pes-eng name: MTEB Tatoeba (pes-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 87.3 - type: f1 value: 84.0 - type: precision value: 82.48666666666666 - type: recall value: 87.3 task: type: BitextMining - 
dataset: config: nob-eng name: MTEB Tatoeba (nob-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 90.4 - type: f1 value: 87.79523809523809 - type: precision value: 86.6 - type: recall value: 90.4 task: type: BitextMining - dataset: config: bul-eng name: MTEB Tatoeba (bul-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 87.0 - type: f1 value: 83.81 - type: precision value: 82.36666666666666 - type: recall value: 87.0 task: type: BitextMining - dataset: config: cbk-eng name: MTEB Tatoeba (cbk-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 63.9 - type: f1 value: 57.76533189033189 - type: precision value: 55.50595238095239 - type: recall value: 63.9 task: type: BitextMining - dataset: config: hun-eng name: MTEB Tatoeba (hun-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 76.1 - type: f1 value: 71.83690476190478 - type: precision value: 70.04928571428573 - type: recall value: 76.1 task: type: BitextMining - dataset: config: uig-eng name: MTEB Tatoeba (uig-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 66.3 - type: f1 value: 59.32626984126984 - type: precision value: 56.62535714285713 - type: recall value: 66.3 task: type: BitextMining - dataset: config: rus-eng name: MTEB Tatoeba (rus-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 92.10000000000001 - type: f1 value: 89.76666666666667 - type: main_score value: 89.76666666666667 - type: precision value: 88.64999999999999 - type: recall value: 92.10000000000001 task: type: BitextMining - dataset: config: spa-eng name: MTEB Tatoeba (spa-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 93.10000000000001 - type: f1 value: 91.10000000000001 - type: precision value: 90.16666666666666 - type: recall value: 93.10000000000001 task: type: BitextMining - dataset: config: hye-eng name: MTEB Tatoeba (hye-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 85.71428571428571 - type: f1 value: 82.29142600436403 - type: precision value: 80.8076626877166 - type: recall value: 85.71428571428571 task: type: BitextMining - dataset: config: tel-eng name: MTEB Tatoeba (tel-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 88.88888888888889 - type: f1 value: 85.7834757834758 - type: precision value: 84.43732193732193 - type: recall value: 88.88888888888889 task: type: BitextMining - dataset: config: afr-eng name: MTEB Tatoeba (afr-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 88.5 - type: f1 value: 85.67190476190476 - type: precision value: 84.43333333333332 - type: recall value: 88.5 task: type: BitextMining - dataset: config: mon-eng name: MTEB Tatoeba (mon-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 82.72727272727273 - type: f1 value: 
78.21969696969695 - type: precision value: 76.18181818181819 - type: recall value: 82.72727272727273 task: type: BitextMining - dataset: config: arz-eng name: MTEB Tatoeba (arz-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 61.0062893081761 - type: f1 value: 55.13976240391334 - type: precision value: 52.92112499659669 - type: recall value: 61.0062893081761 task: type: BitextMining - dataset: config: hrv-eng name: MTEB Tatoeba (hrv-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 89.5 - type: f1 value: 86.86666666666666 - type: precision value: 85.69166666666668 - type: recall value: 89.5 task: type: BitextMining - dataset: config: nov-eng name: MTEB Tatoeba (nov-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 73.54085603112841 - type: f1 value: 68.56031128404669 - type: precision value: 66.53047989623866 - type: recall value: 73.54085603112841 task: type: BitextMining - dataset: config: gsw-eng name: MTEB Tatoeba (gsw-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 43.58974358974359 - type: f1 value: 36.45299145299145 - type: precision value: 33.81155881155882 - type: recall value: 43.58974358974359 task: type: BitextMining - dataset: config: nds-eng name: MTEB Tatoeba (nds-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 59.599999999999994 - type: f1 value: 53.264689754689755 - type: precision value: 50.869166666666665 - type: recall value: 59.599999999999994 task: type: BitextMining - dataset: config: ukr-eng name: MTEB Tatoeba (ukr-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 85.2 - type: f1 value: 81.61666666666665 - type: precision value: 80.02833333333335 - type: recall value: 85.2 task: type: BitextMining - dataset: config: uzb-eng name: MTEB Tatoeba (uzb-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 63.78504672897196 - type: f1 value: 58.00029669188548 - type: precision value: 55.815809968847354 - type: recall value: 63.78504672897196 task: type: BitextMining - dataset: config: lit-eng name: MTEB Tatoeba (lit-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 66.5 - type: f1 value: 61.518333333333345 - type: precision value: 59.622363699102834 - type: recall value: 66.5 task: type: BitextMining - dataset: config: ina-eng name: MTEB Tatoeba (ina-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 88.6 - type: f1 value: 85.60222222222221 - type: precision value: 84.27916666666665 - type: recall value: 88.6 task: type: BitextMining - dataset: config: lfn-eng name: MTEB Tatoeba (lfn-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 58.699999999999996 - type: f1 value: 52.732375957375965 - type: precision value: 50.63214035964035 - type: recall value: 58.699999999999996 task: type: BitextMining - dataset: config: zsm-eng name: MTEB Tatoeba 
(zsm-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 92.10000000000001 - type: f1 value: 89.99666666666667 - type: precision value: 89.03333333333333 - type: recall value: 92.10000000000001 task: type: BitextMining - dataset: config: ita-eng name: MTEB Tatoeba (ita-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 90.10000000000001 - type: f1 value: 87.55666666666667 - type: precision value: 86.36166666666668 - type: recall value: 90.10000000000001 task: type: BitextMining - dataset: config: cmn-eng name: MTEB Tatoeba (cmn-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 91.4 - type: f1 value: 88.89000000000001 - type: precision value: 87.71166666666666 - type: recall value: 91.4 task: type: BitextMining - dataset: config: lvs-eng name: MTEB Tatoeba (lvs-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 65.7 - type: f1 value: 60.67427750410509 - type: precision value: 58.71785714285714 - type: recall value: 65.7 task: type: BitextMining - dataset: config: glg-eng name: MTEB Tatoeba (glg-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 85.39999999999999 - type: f1 value: 81.93190476190475 - type: precision value: 80.37833333333333 - type: recall value: 85.39999999999999 task: type: BitextMining - dataset: config: ceb-eng name: MTEB Tatoeba (ceb-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 47.833333333333336 - type: f1 value: 42.006625781625786 - type: precision value: 40.077380952380956 - type: recall value: 47.833333333333336 task: type: BitextMining - dataset: config: bre-eng name: MTEB Tatoeba (bre-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 10.4 - type: f1 value: 8.24465007215007 - type: precision value: 7.664597069597071 - type: recall value: 10.4 task: type: BitextMining - dataset: config: ben-eng name: MTEB Tatoeba (ben-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 82.6 - type: f1 value: 77.76333333333334 - type: precision value: 75.57833333333332 - type: recall value: 82.6 task: type: BitextMining - dataset: config: swg-eng name: MTEB Tatoeba (swg-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 52.67857142857143 - type: f1 value: 44.302721088435376 - type: precision value: 41.49801587301587 - type: recall value: 52.67857142857143 task: type: BitextMining - dataset: config: arq-eng name: MTEB Tatoeba (arq-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 28.3205268935236 - type: f1 value: 22.426666605171157 - type: precision value: 20.685900116470915 - type: recall value: 28.3205268935236 task: type: BitextMining - dataset: config: kab-eng name: MTEB Tatoeba (kab-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 22.7 - type: f1 value: 17.833970473970474 - 
type: precision value: 16.407335164835164 - type: recall value: 22.7 task: type: BitextMining - dataset: config: fra-eng name: MTEB Tatoeba (fra-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 92.2 - type: f1 value: 89.92999999999999 - type: precision value: 88.87 - type: recall value: 92.2 task: type: BitextMining - dataset: config: por-eng name: MTEB Tatoeba (por-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 91.4 - type: f1 value: 89.25 - type: precision value: 88.21666666666667 - type: recall value: 91.4 task: type: BitextMining - dataset: config: tat-eng name: MTEB Tatoeba (tat-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 69.19999999999999 - type: f1 value: 63.38269841269841 - type: precision value: 61.14773809523809 - type: recall value: 69.19999999999999 task: type: BitextMining - dataset: config: oci-eng name: MTEB Tatoeba (oci-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 48.8 - type: f1 value: 42.839915639915645 - type: precision value: 40.770287114845935 - type: recall value: 48.8 task: type: BitextMining - dataset: config: pol-eng name: MTEB Tatoeba (pol-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 88.8 - type: f1 value: 85.90666666666668 - type: precision value: 84.54166666666666 - type: recall value: 88.8 task: type: BitextMining - dataset: config: war-eng name: MTEB Tatoeba (war-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 46.6 - type: f1 value: 40.85892920804686 - type: precision value: 38.838223114604695 - type: recall value: 46.6 task: type: BitextMining - dataset: config: aze-eng name: MTEB Tatoeba (aze-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 84.0 - type: f1 value: 80.14190476190475 - type: precision value: 78.45333333333333 - type: recall value: 84.0 task: type: BitextMining - dataset: config: vie-eng name: MTEB Tatoeba (vie-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 90.5 - type: f1 value: 87.78333333333333 - type: precision value: 86.5 - type: recall value: 90.5 task: type: BitextMining - dataset: config: nno-eng name: MTEB Tatoeba (nno-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 74.5 - type: f1 value: 69.48397546897547 - type: precision value: 67.51869047619049 - type: recall value: 74.5 task: type: BitextMining - dataset: config: cha-eng name: MTEB Tatoeba (cha-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 32.846715328467155 - type: f1 value: 27.828177499710343 - type: precision value: 26.63451511991658 - type: recall value: 32.846715328467155 task: type: BitextMining - dataset: config: mhr-eng name: MTEB Tatoeba (mhr-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 8.0 - type: f1 value: 6.07664116764988 - 
type: precision value: 5.544177607179943 - type: recall value: 8.0 task: type: BitextMining - dataset: config: dan-eng name: MTEB Tatoeba (dan-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 87.6 - type: f1 value: 84.38555555555554 - type: precision value: 82.91583333333334 - type: recall value: 87.6 task: type: BitextMining - dataset: config: ell-eng name: MTEB Tatoeba (ell-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 87.5 - type: f1 value: 84.08333333333331 - type: precision value: 82.47333333333333 - type: recall value: 87.5 task: type: BitextMining - dataset: config: amh-eng name: MTEB Tatoeba (amh-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 80.95238095238095 - type: f1 value: 76.13095238095238 - type: precision value: 74.05753968253967 - type: recall value: 80.95238095238095 task: type: BitextMining - dataset: config: pam-eng name: MTEB Tatoeba (pam-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 8.799999999999999 - type: f1 value: 6.971422975172975 - type: precision value: 6.557814916172301 - type: recall value: 8.799999999999999 task: type: BitextMining - dataset: config: hsb-eng name: MTEB Tatoeba (hsb-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 44.099378881987576 - type: f1 value: 37.01649742022413 - type: precision value: 34.69420618488942 - type: recall value: 44.099378881987576 task: type: BitextMining - dataset: config: srp-eng name: MTEB Tatoeba (srp-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 84.3 - type: f1 value: 80.32666666666667 - type: precision value: 78.60666666666665 - type: recall value: 84.3 task: type: BitextMining - dataset: config: epo-eng name: MTEB Tatoeba (epo-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 92.5 - type: f1 value: 90.49666666666666 - type: precision value: 89.56666666666668 - type: recall value: 92.5 task: type: BitextMining - dataset: config: kzj-eng name: MTEB Tatoeba (kzj-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 10.0 - type: f1 value: 8.268423529875141 - type: precision value: 7.878118605532398 - type: recall value: 10.0 task: type: BitextMining - dataset: config: awa-eng name: MTEB Tatoeba (awa-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 79.22077922077922 - type: f1 value: 74.27128427128426 - type: precision value: 72.28715728715729 - type: recall value: 79.22077922077922 task: type: BitextMining - dataset: config: fao-eng name: MTEB Tatoeba (fao-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 65.64885496183206 - type: f1 value: 58.87495456197747 - type: precision value: 55.992366412213734 - type: recall value: 65.64885496183206 task: type: BitextMining - dataset: config: mal-eng name: MTEB Tatoeba (mal-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test 
type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 96.06986899563319 - type: f1 value: 94.78408539543909 - type: precision value: 94.15332362930616 - type: recall value: 96.06986899563319 task: type: BitextMining - dataset: config: ile-eng name: MTEB Tatoeba (ile-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 77.2 - type: f1 value: 71.72571428571428 - type: precision value: 69.41000000000001 - type: recall value: 77.2 task: type: BitextMining - dataset: config: bos-eng name: MTEB Tatoeba (bos-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 86.4406779661017 - type: f1 value: 83.2391713747646 - type: precision value: 81.74199623352166 - type: recall value: 86.4406779661017 task: type: BitextMining - dataset: config: cor-eng name: MTEB Tatoeba (cor-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 8.4 - type: f1 value: 6.017828743398003 - type: precision value: 5.4829865484756795 - type: recall value: 8.4 task: type: BitextMining - dataset: config: cat-eng name: MTEB Tatoeba (cat-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 83.5 - type: f1 value: 79.74833333333333 - type: precision value: 78.04837662337664 - type: recall value: 83.5 task: type: BitextMining - dataset: config: eus-eng name: MTEB Tatoeba (eus-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 60.4 - type: f1 value: 54.467301587301584 - type: precision value: 52.23242424242424 - type: recall value: 60.4 task: type: BitextMining - dataset: config: yue-eng name: MTEB Tatoeba (yue-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 74.9 - type: f1 value: 69.68699134199134 - type: precision value: 67.59873015873016 - type: recall value: 74.9 task: type: BitextMining - dataset: config: swe-eng name: MTEB Tatoeba (swe-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 88.0 - type: f1 value: 84.9652380952381 - type: precision value: 83.66166666666666 - type: recall value: 88.0 task: type: BitextMining - dataset: config: dtp-eng name: MTEB Tatoeba (dtp-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 9.1 - type: f1 value: 7.681244588744588 - type: precision value: 7.370043290043291 - type: recall value: 9.1 task: type: BitextMining - dataset: config: kat-eng name: MTEB Tatoeba (kat-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 80.9651474530831 - type: f1 value: 76.84220605132133 - type: precision value: 75.19606398962966 - type: recall value: 80.9651474530831 task: type: BitextMining - dataset: config: jpn-eng name: MTEB Tatoeba (jpn-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 86.9 - type: f1 value: 83.705 - type: precision value: 82.3120634920635 - type: recall value: 86.9 task: type: BitextMining - dataset: config: csb-eng name: MTEB Tatoeba (csb-eng) revision: 
9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 29.64426877470356 - type: f1 value: 23.98763072676116 - type: precision value: 22.506399397703746 - type: recall value: 29.64426877470356 task: type: BitextMining - dataset: config: xho-eng name: MTEB Tatoeba (xho-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 70.4225352112676 - type: f1 value: 62.84037558685445 - type: precision value: 59.56572769953053 - type: recall value: 70.4225352112676 task: type: BitextMining - dataset: config: orv-eng name: MTEB Tatoeba (orv-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 19.64071856287425 - type: f1 value: 15.125271011207756 - type: precision value: 13.865019261197494 - type: recall value: 19.64071856287425 task: type: BitextMining - dataset: config: ind-eng name: MTEB Tatoeba (ind-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 90.2 - type: f1 value: 87.80666666666666 - type: precision value: 86.70833333333331 - type: recall value: 90.2 task: type: BitextMining - dataset: config: tuk-eng name: MTEB Tatoeba (tuk-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 23.15270935960591 - type: f1 value: 18.407224958949097 - type: precision value: 16.982385430661292 - type: recall value: 23.15270935960591 task: type: BitextMining - dataset: config: max-eng name: MTEB Tatoeba (max-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 55.98591549295775 - type: f1 value: 49.94718309859154 - type: precision value: 47.77864154624717 - type: recall value: 55.98591549295775 task: type: BitextMining - dataset: config: swh-eng name: MTEB Tatoeba (swh-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 73.07692307692307 - type: f1 value: 66.74358974358974 - type: precision value: 64.06837606837607 - type: recall value: 73.07692307692307 task: type: BitextMining - dataset: config: hin-eng name: MTEB Tatoeba (hin-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 94.89999999999999 - type: f1 value: 93.25 - type: precision value: 92.43333333333332 - type: recall value: 94.89999999999999 task: type: BitextMining - dataset: config: dsb-eng name: MTEB Tatoeba (dsb-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 37.78705636743215 - type: f1 value: 31.63899658680452 - type: precision value: 29.72264397629742 - type: recall value: 37.78705636743215 task: type: BitextMining - dataset: config: ber-eng name: MTEB Tatoeba (ber-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 21.6 - type: f1 value: 16.91697302697303 - type: precision value: 15.71225147075147 - type: recall value: 21.6 task: type: BitextMining - dataset: config: tam-eng name: MTEB Tatoeba (tam-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 85.01628664495115 - type: 
f1 value: 81.38514037536838 - type: precision value: 79.83170466883823 - type: recall value: 85.01628664495115 task: type: BitextMining - dataset: config: slk-eng name: MTEB Tatoeba (slk-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 83.39999999999999 - type: f1 value: 79.96380952380952 - type: precision value: 78.48333333333333 - type: recall value: 83.39999999999999 task: type: BitextMining - dataset: config: tgl-eng name: MTEB Tatoeba (tgl-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 83.2 - type: f1 value: 79.26190476190476 - type: precision value: 77.58833333333334 - type: recall value: 83.2 task: type: BitextMining - dataset: config: ast-eng name: MTEB Tatoeba (ast-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 75.59055118110236 - type: f1 value: 71.66854143232096 - type: precision value: 70.30183727034121 - type: recall value: 75.59055118110236 task: type: BitextMining - dataset: config: mkd-eng name: MTEB Tatoeba (mkd-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 65.5 - type: f1 value: 59.26095238095238 - type: precision value: 56.81909090909092 - type: recall value: 65.5 task: type: BitextMining - dataset: config: khm-eng name: MTEB Tatoeba (khm-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 55.26315789473685 - type: f1 value: 47.986523325858506 - type: precision value: 45.33950006595436 - type: recall value: 55.26315789473685 task: type: BitextMining - dataset: config: ces-eng name: MTEB Tatoeba (ces-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 82.89999999999999 - type: f1 value: 78.835 - type: precision value: 77.04761904761905 - type: recall value: 82.89999999999999 task: type: BitextMining - dataset: config: tzl-eng name: MTEB Tatoeba (tzl-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 43.269230769230774 - type: f1 value: 36.20421245421245 - type: precision value: 33.57371794871795 - type: recall value: 43.269230769230774 task: type: BitextMining - dataset: config: urd-eng name: MTEB Tatoeba (urd-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 88.0 - type: f1 value: 84.70666666666666 - type: precision value: 83.23166666666665 - type: recall value: 88.0 task: type: BitextMining - dataset: config: ara-eng name: MTEB Tatoeba (ara-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 77.4 - type: f1 value: 72.54666666666667 - type: precision value: 70.54318181818181 - type: recall value: 77.4 task: type: BitextMining - dataset: config: kor-eng name: MTEB Tatoeba (kor-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 78.60000000000001 - type: f1 value: 74.1588888888889 - type: precision value: 72.30250000000001 - type: recall value: 78.60000000000001 task: type: BitextMining - dataset: config: yid-eng name: MTEB Tatoeba (yid-eng) 
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 72.40566037735849 - type: f1 value: 66.82587328813744 - type: precision value: 64.75039308176099 - type: recall value: 72.40566037735849 task: type: BitextMining - dataset: config: fin-eng name: MTEB Tatoeba (fin-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 73.8 - type: f1 value: 68.56357142857144 - type: precision value: 66.3178822055138 - type: recall value: 73.8 task: type: BitextMining - dataset: config: tha-eng name: MTEB Tatoeba (tha-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 91.78832116788321 - type: f1 value: 89.3552311435523 - type: precision value: 88.20559610705597 - type: recall value: 91.78832116788321 task: type: BitextMining - dataset: config: wuu-eng name: MTEB Tatoeba (wuu-eng) revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 74.3 - type: f1 value: 69.05085581085581 - type: precision value: 66.955 - type: recall value: 74.3 task: type: BitextMining - dataset: config: default name: MTEB Touche2020 revision: None split: test type: webis-touche2020 metrics: - type: map_at_1 value: 2.896 - type: map_at_10 value: 8.993 - type: map_at_100 value: 14.133999999999999 - type: map_at_1000 value: 15.668000000000001 - type: map_at_3 value: 5.862 - type: map_at_5 value: 7.17 - type: mrr_at_1 value: 34.694 - type: mrr_at_10 value: 42.931000000000004 - type: mrr_at_100 value: 44.81 - type: mrr_at_1000 value: 44.81 - type: mrr_at_3 value: 38.435 - type: mrr_at_5 value: 41.701 - type: ndcg_at_1 value: 31.633 - type: ndcg_at_10 value: 21.163 - type: ndcg_at_100 value: 33.306000000000004 - type: ndcg_at_1000 value: 45.275999999999996 - type: ndcg_at_3 value: 25.685999999999996 - type: ndcg_at_5 value: 23.732 - type: precision_at_1 value: 34.694 - type: precision_at_10 value: 17.755000000000003 - type: precision_at_100 value: 6.938999999999999 - type: precision_at_1000 value: 1.48 - type: precision_at_3 value: 25.85 - type: precision_at_5 value: 23.265 - type: recall_at_1 value: 2.896 - type: recall_at_10 value: 13.333999999999998 - type: recall_at_100 value: 43.517 - type: recall_at_1000 value: 79.836 - type: recall_at_3 value: 6.306000000000001 - type: recall_at_5 value: 8.825 task: type: Retrieval - dataset: config: default name: MTEB ToxicConversationsClassification revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c split: test type: mteb/toxic_conversations_50k metrics: - type: accuracy value: 69.3874 - type: ap value: 13.829909072469423 - type: f1 value: 53.54534203543492 task: type: Classification - dataset: config: default name: MTEB TweetSentimentExtractionClassification revision: d604517c81ca91fe16a244d1248fc021f9ecee7a split: test type: mteb/tweet_sentiment_extraction metrics: - type: accuracy value: 62.62026032823995 - type: f1 value: 62.85251350485221 task: type: Classification - dataset: config: default name: MTEB TwentyNewsgroupsClustering revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 split: test type: mteb/twentynewsgroups-clustering metrics: - type: v_measure value: 33.21527881409797 task: type: Clustering - dataset: config: default name: MTEB TwitterSemEval2015 revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 split: test type: mteb/twittersemeval2015-pairclassification metrics: - type: 
cos_sim_accuracy value: 84.97943613280086 - type: cos_sim_ap value: 70.75454316885921 - type: cos_sim_f1 value: 65.38274012676743 - type: cos_sim_precision value: 60.761214318078835 - type: cos_sim_recall value: 70.76517150395777 - type: dot_accuracy value: 79.0546581629612 - type: dot_ap value: 47.3197121792147 - type: dot_f1 value: 49.20106524633821 - type: dot_precision value: 42.45499808502489 - type: dot_recall value: 58.49604221635884 - type: euclidean_accuracy value: 85.08076533349228 - type: euclidean_ap value: 70.95016106374474 - type: euclidean_f1 value: 65.43987900176455 - type: euclidean_precision value: 62.64478764478765 - type: euclidean_recall value: 68.49604221635884 - type: manhattan_accuracy value: 84.93771234428085 - type: manhattan_ap value: 70.63668388755362 - type: manhattan_f1 value: 65.23895401262398 - type: manhattan_precision value: 56.946084218811485 - type: manhattan_recall value: 76.35883905013192 - type: max_accuracy value: 85.08076533349228 - type: max_ap value: 70.95016106374474 - type: max_f1 value: 65.43987900176455 task: type: PairClassification - dataset: config: default name: MTEB TwitterURLCorpus revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf split: test type: mteb/twitterurlcorpus-pairclassification metrics: - type: cos_sim_accuracy value: 88.69096130709822 - type: cos_sim_ap value: 84.82526278228542 - type: cos_sim_f1 value: 77.65485060585536 - type: cos_sim_precision value: 75.94582658619167 - type: cos_sim_recall value: 79.44256236526024 - type: dot_accuracy value: 80.97954748321496 - type: dot_ap value: 64.81642914145866 - type: dot_f1 value: 60.631996987229975 - type: dot_precision value: 54.5897293631712 - type: dot_recall value: 68.17831844779796 - type: euclidean_accuracy value: 88.6987231730508 - type: euclidean_ap value: 84.80003825477253 - type: euclidean_f1 value: 77.67194179854496 - type: euclidean_precision value: 75.7128235122094 - type: euclidean_recall value: 79.73514012935017 - type: manhattan_accuracy value: 88.62692591298949 - type: manhattan_ap value: 84.80451408255276 - type: manhattan_f1 value: 77.69888949572183 - type: manhattan_precision value: 73.70311528631622 - type: manhattan_recall value: 82.15275639051433 - type: max_accuracy value: 88.6987231730508 - type: max_ap value: 84.82526278228542 - type: max_f1 value: 77.69888949572183 task: type: PairClassification - dataset: config: ru-en name: MTEB BUCC.v2 (ru-en) revision: 1739dc11ffe9b7bfccd7f3d585aeb4c544fc6677 split: test type: mteb/bucc-bitext-mining metrics: - type: accuracy value: 95.72566678212678 - type: f1 value: 94.42443135896548 - type: main_score value: 94.42443135896548 - type: precision value: 93.80868260016165 - type: recall value: 95.72566678212678 task: type: BitextMining - dataset: config: rus_Cyrl-rus_Cyrl name: MTEB BelebeleRetrieval (rus_Cyrl-rus_Cyrl) revision: 75b399394a9803252cfec289d103de462763db7c split: test type: facebook/belebele metrics: - type: main_score value: 92.23599999999999 - type: map_at_1 value: 87.111 - type: map_at_10 value: 90.717 - type: map_at_100 value: 90.879 - type: map_at_1000 value: 90.881 - type: map_at_20 value: 90.849 - type: map_at_3 value: 90.074 - type: map_at_5 value: 90.535 - type: mrr_at_1 value: 87.1111111111111 - type: mrr_at_10 value: 90.7173721340388 - type: mrr_at_100 value: 90.87859682638407 - type: mrr_at_1000 value: 90.88093553612326 - type: mrr_at_20 value: 90.84863516113515 - type: mrr_at_3 value: 90.07407407407409 - type: mrr_at_5 value: 90.53518518518521 - type: nauc_map_at_1000_diff1 value: 
92.37373187280554 - type: nauc_map_at_1000_max value: 79.90465445423249 - type: nauc_map_at_1000_std value: -0.6220290556185463 - type: nauc_map_at_100_diff1 value: 92.37386697345335 - type: nauc_map_at_100_max value: 79.90991577223959 - type: nauc_map_at_100_std value: -0.602247514642845 - type: nauc_map_at_10_diff1 value: 92.30907447072467 - type: nauc_map_at_10_max value: 79.86831935337598 - type: nauc_map_at_10_std value: -0.7455191860719699 - type: nauc_map_at_1_diff1 value: 93.29828518358822 - type: nauc_map_at_1_max value: 78.69539619887887 - type: nauc_map_at_1_std value: -4.097150817605763 - type: nauc_map_at_20_diff1 value: 92.38414149703077 - type: nauc_map_at_20_max value: 79.94789814504661 - type: nauc_map_at_20_std value: -0.3928031130400773 - type: nauc_map_at_3_diff1 value: 92.21688899306734 - type: nauc_map_at_3_max value: 80.34586671780885 - type: nauc_map_at_3_std value: 0.24088319695435909 - type: nauc_map_at_5_diff1 value: 92.27931726042982 - type: nauc_map_at_5_max value: 79.99198834003367 - type: nauc_map_at_5_std value: -0.6296366922840796 - type: nauc_mrr_at_1000_diff1 value: 92.37373187280554 - type: nauc_mrr_at_1000_max value: 79.90465445423249 - type: nauc_mrr_at_1000_std value: -0.6220290556185463 - type: nauc_mrr_at_100_diff1 value: 92.37386697345335 - type: nauc_mrr_at_100_max value: 79.90991577223959 - type: nauc_mrr_at_100_std value: -0.602247514642845 - type: nauc_mrr_at_10_diff1 value: 92.30907447072467 - type: nauc_mrr_at_10_max value: 79.86831935337598 - type: nauc_mrr_at_10_std value: -0.7455191860719699 - type: nauc_mrr_at_1_diff1 value: 93.29828518358822 - type: nauc_mrr_at_1_max value: 78.69539619887887 - type: nauc_mrr_at_1_std value: -4.097150817605763 - type: nauc_mrr_at_20_diff1 value: 92.38414149703077 - type: nauc_mrr_at_20_max value: 79.94789814504661 - type: nauc_mrr_at_20_std value: -0.3928031130400773 - type: nauc_mrr_at_3_diff1 value: 92.21688899306734 - type: nauc_mrr_at_3_max value: 80.34586671780885 - type: nauc_mrr_at_3_std value: 0.24088319695435909 - type: nauc_mrr_at_5_diff1 value: 92.27931726042982 - type: nauc_mrr_at_5_max value: 79.99198834003367 - type: nauc_mrr_at_5_std value: -0.6296366922840796 - type: nauc_ndcg_at_1000_diff1 value: 92.30526497646306 - type: nauc_ndcg_at_1000_max value: 80.12734537480418 - type: nauc_ndcg_at_1000_std value: 0.22849408935578744 - type: nauc_ndcg_at_100_diff1 value: 92.31347123202318 - type: nauc_ndcg_at_100_max value: 80.29207038703142 - type: nauc_ndcg_at_100_std value: 0.816825944406239 - type: nauc_ndcg_at_10_diff1 value: 92.05430189845808 - type: nauc_ndcg_at_10_max value: 80.16515667442968 - type: nauc_ndcg_at_10_std value: 0.7486447532544893 - type: nauc_ndcg_at_1_diff1 value: 93.29828518358822 - type: nauc_ndcg_at_1_max value: 78.69539619887887 - type: nauc_ndcg_at_1_std value: -4.097150817605763 - type: nauc_ndcg_at_20_diff1 value: 92.40147868825079 - type: nauc_ndcg_at_20_max value: 80.5117307181802 - type: nauc_ndcg_at_20_std value: 2.0431351539517033 - type: nauc_ndcg_at_3_diff1 value: 91.88894444422789 - type: nauc_ndcg_at_3_max value: 81.09256084196045 - type: nauc_ndcg_at_3_std value: 2.422705909643621 - type: nauc_ndcg_at_5_diff1 value: 91.99711052955728 - type: nauc_ndcg_at_5_max value: 80.46996334573979 - type: nauc_ndcg_at_5_std value: 0.9086986899040708 - type: nauc_precision_at_1000_diff1 value: .nan - type: nauc_precision_at_1000_max value: .nan - type: nauc_precision_at_1000_std value: .nan - type: nauc_precision_at_100_diff1 value: 93.46405228758012 - type: 
nauc_precision_at_100_max value: 100.0 - type: nauc_precision_at_100_std value: 70.71661998132774 - type: nauc_precision_at_10_diff1 value: 90.13938908896874 - type: nauc_precision_at_10_max value: 82.21121782046167 - type: nauc_precision_at_10_std value: 13.075230092036083 - type: nauc_precision_at_1_diff1 value: 93.29828518358822 - type: nauc_precision_at_1_max value: 78.69539619887887 - type: nauc_precision_at_1_std value: -4.097150817605763 - type: nauc_precision_at_20_diff1 value: 94.9723479135242 - type: nauc_precision_at_20_max value: 91.04000574588684 - type: nauc_precision_at_20_std value: 48.764634058749586 - type: nauc_precision_at_3_diff1 value: 90.52690041533852 - type: nauc_precision_at_3_max value: 84.35075179497126 - type: nauc_precision_at_3_std value: 12.036768730480507 - type: nauc_precision_at_5_diff1 value: 90.44234360410769 - type: nauc_precision_at_5_max value: 83.21895424836558 - type: nauc_precision_at_5_std value: 9.974323062558037 - type: nauc_recall_at_1000_diff1 value: .nan - type: nauc_recall_at_1000_max value: .nan - type: nauc_recall_at_1000_std value: .nan - type: nauc_recall_at_100_diff1 value: 93.46405228758294 - type: nauc_recall_at_100_max value: 100.0 - type: nauc_recall_at_100_std value: 70.71661998132666 - type: nauc_recall_at_10_diff1 value: 90.13938908896864 - type: nauc_recall_at_10_max value: 82.21121782046124 - type: nauc_recall_at_10_std value: 13.075230092036506 - type: nauc_recall_at_1_diff1 value: 93.29828518358822 - type: nauc_recall_at_1_max value: 78.69539619887887 - type: nauc_recall_at_1_std value: -4.097150817605763 - type: nauc_recall_at_20_diff1 value: 94.97234791352489 - type: nauc_recall_at_20_max value: 91.04000574588774 - type: nauc_recall_at_20_std value: 48.764634058752065 - type: nauc_recall_at_3_diff1 value: 90.52690041533845 - type: nauc_recall_at_3_max value: 84.35075179497079 - type: nauc_recall_at_3_std value: 12.036768730480583 - type: nauc_recall_at_5_diff1 value: 90.44234360410861 - type: nauc_recall_at_5_max value: 83.21895424836595 - type: nauc_recall_at_5_std value: 9.974323062558147 - type: ndcg_at_1 value: 87.111 - type: ndcg_at_10 value: 92.23599999999999 - type: ndcg_at_100 value: 92.87100000000001 - type: ndcg_at_1000 value: 92.928 - type: ndcg_at_20 value: 92.67699999999999 - type: ndcg_at_3 value: 90.973 - type: ndcg_at_5 value: 91.801 - type: precision_at_1 value: 87.111 - type: precision_at_10 value: 9.689 - type: precision_at_100 value: 0.996 - type: precision_at_1000 value: 0.1 - type: precision_at_20 value: 4.928 - type: precision_at_3 value: 31.185000000000002 - type: precision_at_5 value: 19.111 - type: recall_at_1 value: 87.111 - type: recall_at_10 value: 96.88900000000001 - type: recall_at_100 value: 99.556 - type: recall_at_1000 value: 100.0 - type: recall_at_20 value: 98.556 - type: recall_at_3 value: 93.556 - type: recall_at_5 value: 95.556 task: type: Retrieval - dataset: config: rus_Cyrl-eng_Latn name: MTEB BelebeleRetrieval (rus_Cyrl-eng_Latn) revision: 75b399394a9803252cfec289d103de462763db7c split: test type: facebook/belebele metrics: - type: main_score value: 86.615 - type: map_at_1 value: 78.0 - type: map_at_10 value: 83.822 - type: map_at_100 value: 84.033 - type: map_at_1000 value: 84.03500000000001 - type: map_at_20 value: 83.967 - type: map_at_3 value: 82.315 - type: map_at_5 value: 83.337 - type: mrr_at_1 value: 78.0 - type: mrr_at_10 value: 83.82213403880073 - type: mrr_at_100 value: 84.03281327810801 - type: mrr_at_1000 value: 84.03460051000452 - type: mrr_at_20 value: 
83.9673773122303 - type: mrr_at_3 value: 82.31481481481484 - type: mrr_at_5 value: 83.33703703703708 - type: nauc_map_at_1000_diff1 value: 80.78467576987832 - type: nauc_map_at_1000_max value: 51.41718334647604 - type: nauc_map_at_1000_std value: -16.23873782768812 - type: nauc_map_at_100_diff1 value: 80.78490931240695 - type: nauc_map_at_100_max value: 51.41504597713061 - type: nauc_map_at_100_std value: -16.23538559475366 - type: nauc_map_at_10_diff1 value: 80.73989245374868 - type: nauc_map_at_10_max value: 51.43026079433827 - type: nauc_map_at_10_std value: -16.13414330905897 - type: nauc_map_at_1_diff1 value: 82.36966971144186 - type: nauc_map_at_1_max value: 52.988877039509916 - type: nauc_map_at_1_std value: -15.145824639495546 - type: nauc_map_at_20_diff1 value: 80.75923781626145 - type: nauc_map_at_20_max value: 51.40181079374639 - type: nauc_map_at_20_std value: -16.260566097377165 - type: nauc_map_at_3_diff1 value: 80.65242627065471 - type: nauc_map_at_3_max value: 50.623980338841214 - type: nauc_map_at_3_std value: -16.818343442794294 - type: nauc_map_at_5_diff1 value: 80.45976387021862 - type: nauc_map_at_5_max value: 51.533621728445866 - type: nauc_map_at_5_std value: -16.279891536945815 - type: nauc_mrr_at_1000_diff1 value: 80.78467576987832 - type: nauc_mrr_at_1000_max value: 51.41718334647604 - type: nauc_mrr_at_1000_std value: -16.23873782768812 - type: nauc_mrr_at_100_diff1 value: 80.78490931240695 - type: nauc_mrr_at_100_max value: 51.41504597713061 - type: nauc_mrr_at_100_std value: -16.23538559475366 - type: nauc_mrr_at_10_diff1 value: 80.73989245374868 - type: nauc_mrr_at_10_max value: 51.43026079433827 - type: nauc_mrr_at_10_std value: -16.13414330905897 - type: nauc_mrr_at_1_diff1 value: 82.36966971144186 - type: nauc_mrr_at_1_max value: 52.988877039509916 - type: nauc_mrr_at_1_std value: -15.145824639495546 - type: nauc_mrr_at_20_diff1 value: 80.75923781626145 - type: nauc_mrr_at_20_max value: 51.40181079374639 - type: nauc_mrr_at_20_std value: -16.260566097377165 - type: nauc_mrr_at_3_diff1 value: 80.65242627065471 - type: nauc_mrr_at_3_max value: 50.623980338841214 - type: nauc_mrr_at_3_std value: -16.818343442794294 - type: nauc_mrr_at_5_diff1 value: 80.45976387021862 - type: nauc_mrr_at_5_max value: 51.533621728445866 - type: nauc_mrr_at_5_std value: -16.279891536945815 - type: nauc_ndcg_at_1000_diff1 value: 80.60009446938174 - type: nauc_ndcg_at_1000_max value: 51.381708043594166 - type: nauc_ndcg_at_1000_std value: -16.054256944160848 - type: nauc_ndcg_at_100_diff1 value: 80.58971462930421 - type: nauc_ndcg_at_100_max value: 51.25436917735444 - type: nauc_ndcg_at_100_std value: -15.862944972269894 - type: nauc_ndcg_at_10_diff1 value: 80.37967179454489 - type: nauc_ndcg_at_10_max value: 51.590394257251006 - type: nauc_ndcg_at_10_std value: -15.489799384799591 - type: nauc_ndcg_at_1_diff1 value: 82.36966971144186 - type: nauc_ndcg_at_1_max value: 52.988877039509916 - type: nauc_ndcg_at_1_std value: -15.145824639495546 - type: nauc_ndcg_at_20_diff1 value: 80.40299527470081 - type: nauc_ndcg_at_20_max value: 51.395132284307074 - type: nauc_ndcg_at_20_std value: -15.906165526937203 - type: nauc_ndcg_at_3_diff1 value: 80.10347913649302 - type: nauc_ndcg_at_3_max value: 50.018431855573844 - type: nauc_ndcg_at_3_std value: -17.12743750163884 - type: nauc_ndcg_at_5_diff1 value: 79.65918647776613 - type: nauc_ndcg_at_5_max value: 51.76710880330806 - type: nauc_ndcg_at_5_std value: -16.071901882035945 - type: nauc_precision_at_1000_diff1 value: .nan - type: 
nauc_precision_at_1000_max value: .nan - type: nauc_precision_at_1000_std value: .nan - type: nauc_precision_at_100_diff1 value: 77.41596638655459 - type: nauc_precision_at_100_max value: 22.572362278246565 - type: nauc_precision_at_100_std value: 26.890756302525716 - type: nauc_precision_at_10_diff1 value: 77.82112845138009 - type: nauc_precision_at_10_max value: 54.2550353474723 - type: nauc_precision_at_10_std value: -7.492997198879646 - type: nauc_precision_at_1_diff1 value: 82.36966971144186 - type: nauc_precision_at_1_max value: 52.988877039509916 - type: nauc_precision_at_1_std value: -15.145824639495546 - type: nauc_precision_at_20_diff1 value: 75.89091192032318 - type: nauc_precision_at_20_max value: 52.03275754746293 - type: nauc_precision_at_20_std value: -7.8411920323686175 - type: nauc_precision_at_3_diff1 value: 78.0256020644638 - type: nauc_precision_at_3_max value: 47.80353641248523 - type: nauc_precision_at_3_std value: -18.181625255723503 - type: nauc_precision_at_5_diff1 value: 75.21583976056174 - type: nauc_precision_at_5_max value: 53.716281032960765 - type: nauc_precision_at_5_std value: -14.411700753360812 - type: nauc_recall_at_1000_diff1 value: .nan - type: nauc_recall_at_1000_max value: .nan - type: nauc_recall_at_1000_std value: .nan - type: nauc_recall_at_100_diff1 value: 77.4159663865523 - type: nauc_recall_at_100_max value: 22.57236227824646 - type: nauc_recall_at_100_std value: 26.89075630252133 - type: nauc_recall_at_10_diff1 value: 77.82112845138037 - type: nauc_recall_at_10_max value: 54.25503534747204 - type: nauc_recall_at_10_std value: -7.492997198879666 - type: nauc_recall_at_1_diff1 value: 82.36966971144186 - type: nauc_recall_at_1_max value: 52.988877039509916 - type: nauc_recall_at_1_std value: -15.145824639495546 - type: nauc_recall_at_20_diff1 value: 75.89091192032362 - type: nauc_recall_at_20_max value: 52.032757547463184 - type: nauc_recall_at_20_std value: -7.84119203236888 - type: nauc_recall_at_3_diff1 value: 78.02560206446354 - type: nauc_recall_at_3_max value: 47.80353641248526 - type: nauc_recall_at_3_std value: -18.181625255723656 - type: nauc_recall_at_5_diff1 value: 75.21583976056185 - type: nauc_recall_at_5_max value: 53.71628103296118 - type: nauc_recall_at_5_std value: -14.411700753360634 - type: ndcg_at_1 value: 78.0 - type: ndcg_at_10 value: 86.615 - type: ndcg_at_100 value: 87.558 - type: ndcg_at_1000 value: 87.613 - type: ndcg_at_20 value: 87.128 - type: ndcg_at_3 value: 83.639 - type: ndcg_at_5 value: 85.475 - type: precision_at_1 value: 78.0 - type: precision_at_10 value: 9.533 - type: precision_at_100 value: 0.996 - type: precision_at_1000 value: 0.1 - type: precision_at_20 value: 4.867 - type: precision_at_3 value: 29.148000000000003 - type: precision_at_5 value: 18.378 - type: recall_at_1 value: 78.0 - type: recall_at_10 value: 95.333 - type: recall_at_100 value: 99.556 - type: recall_at_1000 value: 100.0 - type: recall_at_20 value: 97.333 - type: recall_at_3 value: 87.444 - type: recall_at_5 value: 91.889 task: type: Retrieval - dataset: config: eng_Latn-rus_Cyrl name: MTEB BelebeleRetrieval (eng_Latn-rus_Cyrl) revision: 75b399394a9803252cfec289d103de462763db7c split: test type: facebook/belebele metrics: - type: main_score value: 82.748 - type: map_at_1 value: 73.444 - type: map_at_10 value: 79.857 - type: map_at_100 value: 80.219 - type: map_at_1000 value: 80.22500000000001 - type: map_at_20 value: 80.10300000000001 - type: map_at_3 value: 78.593 - type: map_at_5 value: 79.515 - type: mrr_at_1 value: 73.44444444444444 - 
type: mrr_at_10 value: 79.85705467372136 - type: mrr_at_100 value: 80.21942320422542 - type: mrr_at_1000 value: 80.2245364027152 - type: mrr_at_20 value: 80.10273201266493 - type: mrr_at_3 value: 78.59259259259258 - type: mrr_at_5 value: 79.51481481481483 - type: nauc_map_at_1000_diff1 value: 83.69682652271125 - type: nauc_map_at_1000_max value: 61.70131708044767 - type: nauc_map_at_1000_std value: 9.345825405274955 - type: nauc_map_at_100_diff1 value: 83.68924820523492 - type: nauc_map_at_100_max value: 61.6965735573098 - type: nauc_map_at_100_std value: 9.366132859525775 - type: nauc_map_at_10_diff1 value: 83.61802964269985 - type: nauc_map_at_10_max value: 61.74274476167882 - type: nauc_map_at_10_std value: 9.504060995819101 - type: nauc_map_at_1_diff1 value: 86.37079221403225 - type: nauc_map_at_1_max value: 61.856861655370686 - type: nauc_map_at_1_std value: 4.708911881992707 - type: nauc_map_at_20_diff1 value: 83.62920965453047 - type: nauc_map_at_20_max value: 61.761029350326965 - type: nauc_map_at_20_std value: 9.572978651118351 - type: nauc_map_at_3_diff1 value: 83.66665673154306 - type: nauc_map_at_3_max value: 61.13597610587937 - type: nauc_map_at_3_std value: 9.309596395240598 - type: nauc_map_at_5_diff1 value: 83.52307226455358 - type: nauc_map_at_5_max value: 61.59405758027573 - type: nauc_map_at_5_std value: 9.320025423287671 - type: nauc_mrr_at_1000_diff1 value: 83.69682652271125 - type: nauc_mrr_at_1000_max value: 61.70131708044767 - type: nauc_mrr_at_1000_std value: 9.345825405274955 - type: nauc_mrr_at_100_diff1 value: 83.68924820523492 - type: nauc_mrr_at_100_max value: 61.6965735573098 - type: nauc_mrr_at_100_std value: 9.366132859525775 - type: nauc_mrr_at_10_diff1 value: 83.61802964269985 - type: nauc_mrr_at_10_max value: 61.74274476167882 - type: nauc_mrr_at_10_std value: 9.504060995819101 - type: nauc_mrr_at_1_diff1 value: 86.37079221403225 - type: nauc_mrr_at_1_max value: 61.856861655370686 - type: nauc_mrr_at_1_std value: 4.708911881992707 - type: nauc_mrr_at_20_diff1 value: 83.62920965453047 - type: nauc_mrr_at_20_max value: 61.761029350326965 - type: nauc_mrr_at_20_std value: 9.572978651118351 - type: nauc_mrr_at_3_diff1 value: 83.66665673154306 - type: nauc_mrr_at_3_max value: 61.13597610587937 - type: nauc_mrr_at_3_std value: 9.309596395240598 - type: nauc_mrr_at_5_diff1 value: 83.52307226455358 - type: nauc_mrr_at_5_max value: 61.59405758027573 - type: nauc_mrr_at_5_std value: 9.320025423287671 - type: nauc_ndcg_at_1000_diff1 value: 83.24213186482201 - type: nauc_ndcg_at_1000_max value: 61.77629841787496 - type: nauc_ndcg_at_1000_std value: 10.332527869705851 - type: nauc_ndcg_at_100_diff1 value: 83.06815820441027 - type: nauc_ndcg_at_100_max value: 61.6947181864579 - type: nauc_ndcg_at_100_std value: 10.888922975877316 - type: nauc_ndcg_at_10_diff1 value: 82.58238431386295 - type: nauc_ndcg_at_10_max value: 62.10333663935709 - type: nauc_ndcg_at_10_std value: 11.746030330958174 - type: nauc_ndcg_at_1_diff1 value: 86.37079221403225 - type: nauc_ndcg_at_1_max value: 61.856861655370686 - type: nauc_ndcg_at_1_std value: 4.708911881992707 - type: nauc_ndcg_at_20_diff1 value: 82.67888324480154 - type: nauc_ndcg_at_20_max value: 62.28124917486516 - type: nauc_ndcg_at_20_std value: 12.343058917563914 - type: nauc_ndcg_at_3_diff1 value: 82.71277373710663 - type: nauc_ndcg_at_3_max value: 60.66677922989939 - type: nauc_ndcg_at_3_std value: 10.843633736296528 - type: nauc_ndcg_at_5_diff1 value: 82.34691124846786 - type: nauc_ndcg_at_5_max value: 61.605961382062716 - 
type: nauc_ndcg_at_5_std value: 11.129011077702602 - type: nauc_precision_at_1000_diff1 value: .nan - type: nauc_precision_at_1000_max value: .nan - type: nauc_precision_at_1000_std value: .nan - type: nauc_precision_at_100_diff1 value: 60.93103908230194 - type: nauc_precision_at_100_max value: 52.621048419370695 - type: nauc_precision_at_100_std value: 85.60090702947922 - type: nauc_precision_at_10_diff1 value: 76.26517273576093 - type: nauc_precision_at_10_max value: 65.2013694366636 - type: nauc_precision_at_10_std value: 26.50357920946173 - type: nauc_precision_at_1_diff1 value: 86.37079221403225 - type: nauc_precision_at_1_max value: 61.856861655370686 - type: nauc_precision_at_1_std value: 4.708911881992707 - type: nauc_precision_at_20_diff1 value: 73.47946930710295 - type: nauc_precision_at_20_max value: 70.19520986689217 - type: nauc_precision_at_20_std value: 45.93186111653967 - type: nauc_precision_at_3_diff1 value: 79.02026879450186 - type: nauc_precision_at_3_max value: 58.75074624692399 - type: nauc_precision_at_3_std value: 16.740684654251037 - type: nauc_precision_at_5_diff1 value: 76.47585662281637 - type: nauc_precision_at_5_max value: 61.86270922013127 - type: nauc_precision_at_5_std value: 20.1833625455035 - type: nauc_recall_at_1000_diff1 value: .nan - type: nauc_recall_at_1000_max value: .nan - type: nauc_recall_at_1000_std value: .nan - type: nauc_recall_at_100_diff1 value: 60.93103908229921 - type: nauc_recall_at_100_max value: 52.62104841936668 - type: nauc_recall_at_100_std value: 85.60090702947748 - type: nauc_recall_at_10_diff1 value: 76.26517273576097 - type: nauc_recall_at_10_max value: 65.20136943666347 - type: nauc_recall_at_10_std value: 26.50357920946174 - type: nauc_recall_at_1_diff1 value: 86.37079221403225 - type: nauc_recall_at_1_max value: 61.856861655370686 - type: nauc_recall_at_1_std value: 4.708911881992707 - type: nauc_recall_at_20_diff1 value: 73.47946930710269 - type: nauc_recall_at_20_max value: 70.19520986689254 - type: nauc_recall_at_20_std value: 45.93186111653943 - type: nauc_recall_at_3_diff1 value: 79.02026879450173 - type: nauc_recall_at_3_max value: 58.750746246923924 - type: nauc_recall_at_3_std value: 16.740684654251076 - type: nauc_recall_at_5_diff1 value: 76.4758566228162 - type: nauc_recall_at_5_max value: 61.862709220131386 - type: nauc_recall_at_5_std value: 20.18336254550361 - type: ndcg_at_1 value: 73.444 - type: ndcg_at_10 value: 82.748 - type: ndcg_at_100 value: 84.416 - type: ndcg_at_1000 value: 84.52300000000001 - type: ndcg_at_20 value: 83.646 - type: ndcg_at_3 value: 80.267 - type: ndcg_at_5 value: 81.922 - type: precision_at_1 value: 73.444 - type: precision_at_10 value: 9.167 - type: precision_at_100 value: 0.992 - type: precision_at_1000 value: 0.1 - type: precision_at_20 value: 4.761 - type: precision_at_3 value: 28.37 - type: precision_at_5 value: 17.822 - type: recall_at_1 value: 73.444 - type: recall_at_10 value: 91.667 - type: recall_at_100 value: 99.222 - type: recall_at_1000 value: 100.0 - type: recall_at_20 value: 95.222 - type: recall_at_3 value: 85.111 - type: recall_at_5 value: 89.11099999999999 task: type: Retrieval - dataset: config: eng_Latn-rus_Cyrl name: MTEB BibleNLPBitextMining (eng_Latn-rus_Cyrl) revision: 264a18480c529d9e922483839b4b9758e690b762 split: train type: davidstap/biblenlp-corpus-mmteb metrics: - type: accuracy value: 96.875 - type: f1 value: 95.83333333333333 - type: main_score value: 95.83333333333333 - type: precision value: 95.3125 - type: recall value: 96.875 task: type: BitextMining 
- dataset: config: rus_Cyrl-eng_Latn name: MTEB BibleNLPBitextMining (rus_Cyrl-eng_Latn) revision: 264a18480c529d9e922483839b4b9758e690b762 split: train type: davidstap/biblenlp-corpus-mmteb metrics: - type: accuracy value: 88.671875 - type: f1 value: 85.3515625 - type: main_score value: 85.3515625 - type: precision value: 83.85416666666667 - type: recall value: 88.671875 task: type: BitextMining - dataset: config: default name: MTEB CEDRClassification (default) revision: c0ba03d058e3e1b2f3fd20518875a4563dd12db4 split: test type: ai-forever/cedr-classification metrics: - type: accuracy value: 40.06907545164719 - type: f1 value: 26.285000550712407 - type: lrap value: 64.4280021253997 - type: main_score value: 40.06907545164719 task: type: MultilabelClassification - dataset: config: default name: MTEB CyrillicTurkicLangClassification (default) revision: e42d330f33d65b7b72dfd408883daf1661f06f18 split: test type: tatiana-merz/cyrillic_turkic_langs metrics: - type: accuracy value: 43.3447265625 - type: f1 value: 40.08400146827895 - type: f1_weighted value: 40.08499428040896 - type: main_score value: 43.3447265625 task: type: Classification - dataset: config: ace_Arab-rus_Cyrl name: MTEB FloresBitextMining (ace_Arab-rus_Cyrl) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 6.225296442687747 - type: f1 value: 5.5190958860075 - type: main_score value: 5.5190958860075 - type: precision value: 5.3752643758000005 - type: recall value: 6.225296442687747 task: type: BitextMining - dataset: config: bam_Latn-rus_Cyrl name: MTEB FloresBitextMining (bam_Latn-rus_Cyrl) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 68.37944664031622 - type: f1 value: 64.54819836666252 - type: main_score value: 64.54819836666252 - type: precision value: 63.07479233454916 - type: recall value: 68.37944664031622 task: type: BitextMining - dataset: config: dzo_Tibt-rus_Cyrl name: MTEB FloresBitextMining (dzo_Tibt-rus_Cyrl) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 0.09881422924901186 - type: f1 value: 0.00019509225912934226 - type: main_score value: 0.00019509225912934226 - type: precision value: 9.76425190207627e-05 - type: recall value: 0.09881422924901186 task: type: BitextMining - dataset: config: hin_Deva-rus_Cyrl name: MTEB FloresBitextMining (hin_Deva-rus_Cyrl) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 99.60474308300395 - type: f1 value: 99.47299077733861 - type: main_score value: 99.47299077733861 - type: precision value: 99.40711462450594 - type: recall value: 99.60474308300395 task: type: BitextMining - dataset: config: khm_Khmr-rus_Cyrl name: MTEB FloresBitextMining (khm_Khmr-rus_Cyrl) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 88.83399209486166 - type: f1 value: 87.71151056318254 - type: main_score value: 87.71151056318254 - type: precision value: 87.32012500709193 - type: recall value: 88.83399209486166 task: type: BitextMining - dataset: config: mag_Deva-rus_Cyrl name: MTEB FloresBitextMining (mag_Deva-rus_Cyrl) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 98.02371541501977 - type: f1 value: 97.7239789196311 - type: main_score value: 97.7239789196311 - type: precision value: 
97.61904761904762 - type: recall value: 98.02371541501977 task: type: BitextMining - dataset: config: pap_Latn-rus_Cyrl name: MTEB FloresBitextMining (pap_Latn-rus_Cyrl) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 94.0711462450593 - type: f1 value: 93.68187806922984 - type: main_score value: 93.68187806922984 - type: precision value: 93.58925452707051 - type: recall value: 94.0711462450593 task: type: BitextMining - dataset: config: sot_Latn-rus_Cyrl name: MTEB FloresBitextMining (sot_Latn-rus_Cyrl) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 90.9090909090909 - type: f1 value: 89.23171936758892 - type: main_score value: 89.23171936758892 - type: precision value: 88.51790014083866 - type: recall value: 90.9090909090909 task: type: BitextMining - dataset: config: tur_Latn-rus_Cyrl name: MTEB FloresBitextMining (tur_Latn-rus_Cyrl) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 99.2094861660079 - type: f1 value: 98.9459815546772 - type: main_score value: 98.9459815546772 - type: precision value: 98.81422924901186 - type: recall value: 99.2094861660079 task: type: BitextMining - dataset: config: ace_Latn-rus_Cyrl name: MTEB FloresBitextMining (ace_Latn-rus_Cyrl) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 66.10671936758892 - type: f1 value: 63.81888256297873 - type: main_score value: 63.81888256297873 - type: precision value: 63.01614067933451 - type: recall value: 66.10671936758892 task: type: BitextMining - dataset: config: ban_Latn-rus_Cyrl name: MTEB FloresBitextMining (ban_Latn-rus_Cyrl) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 79.44664031620553 - type: f1 value: 77.6311962082713 - type: main_score value: 77.6311962082713 - type: precision value: 76.93977931929739 - type: recall value: 79.44664031620553 task: type: BitextMining - dataset: config: ell_Grek-rus_Cyrl name: MTEB FloresBitextMining (ell_Grek-rus_Cyrl) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 99.40711462450594 - type: f1 value: 99.2094861660079 - type: main_score value: 99.2094861660079 - type: precision value: 99.1106719367589 - type: recall value: 99.40711462450594 task: type: BitextMining - dataset: config: hne_Deva-rus_Cyrl name: MTEB FloresBitextMining (hne_Deva-rus_Cyrl) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 96.83794466403161 - type: f1 value: 96.25352907961603 - type: main_score value: 96.25352907961603 - type: precision value: 96.02155091285526 - type: recall value: 96.83794466403161 task: type: BitextMining - dataset: config: kik_Latn-rus_Cyrl name: MTEB FloresBitextMining (kik_Latn-rus_Cyrl) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 76.28458498023716 - type: f1 value: 73.5596919895859 - type: main_score value: 73.5596919895859 - type: precision value: 72.40900759055246 - type: recall value: 76.28458498023716 task: type: BitextMining - dataset: config: mai_Deva-rus_Cyrl name: MTEB FloresBitextMining (mai_Deva-rus_Cyrl) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 
97.72727272727273 - type: f1 value: 97.37812911725956 - type: main_score value: 97.37812911725956 - type: precision value: 97.26002258610953 - type: recall value: 97.72727272727273 task: type: BitextMining - dataset: config: pbt_Arab-rus_Cyrl name: MTEB FloresBitextMining (pbt_Arab-rus_Cyrl) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 94.0711462450593 - type: f1 value: 93.34700387331966 - type: main_score value: 93.34700387331966 - type: precision value: 93.06920556920556 - type: recall value: 94.0711462450593 task: type: BitextMining - dataset: config: spa_Latn-rus_Cyrl name: MTEB FloresBitextMining (spa_Latn-rus_Cyrl) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 99.2094861660079 - type: f1 value: 98.9459815546772 - type: main_score value: 98.9459815546772 - type: precision value: 98.81422924901186 - type: recall value: 99.2094861660079 task: type: BitextMining - dataset: config: twi_Latn-rus_Cyrl name: MTEB FloresBitextMining (twi_Latn-rus_Cyrl) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 80.73122529644269 - type: f1 value: 77.77434363246721 - type: main_score value: 77.77434363246721 - type: precision value: 76.54444287596462 - type: recall value: 80.73122529644269 task: type: BitextMining - dataset: config: acm_Arab-rus_Cyrl name: MTEB FloresBitextMining (acm_Arab-rus_Cyrl) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 94.56521739130434 - type: f1 value: 92.92490118577075 - type: main_score value: 92.92490118577075 - type: precision value: 92.16897233201581 - type: recall value: 94.56521739130434 task: type: BitextMining - dataset: config: bel_Cyrl-rus_Cyrl name: MTEB FloresBitextMining (bel_Cyrl-rus_Cyrl) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 99.2094861660079 - type: f1 value: 98.98550724637681 - type: main_score value: 98.98550724637681 - type: precision value: 98.88833992094862 - type: recall value: 99.2094861660079 task: type: BitextMining - dataset: config: eng_Latn-rus_Cyrl name: MTEB FloresBitextMining (eng_Latn-rus_Cyrl) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 99.60474308300395 - type: f1 value: 99.4729907773386 - type: main_score value: 99.4729907773386 - type: precision value: 99.40711462450594 - type: recall value: 99.60474308300395 task: type: BitextMining - dataset: config: hrv_Latn-rus_Cyrl name: MTEB FloresBitextMining (hrv_Latn-rus_Cyrl) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 99.2094861660079 - type: f1 value: 99.05138339920948 - type: main_score value: 99.05138339920948 - type: precision value: 99.00691699604744 - type: recall value: 99.2094861660079 task: type: BitextMining - dataset: config: kin_Latn-rus_Cyrl name: MTEB FloresBitextMining (kin_Latn-rus_Cyrl) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 88.2411067193676 - type: f1 value: 86.5485246227658 - type: main_score value: 86.5485246227658 - type: precision value: 85.90652101521667 - type: recall value: 88.2411067193676 task: type: BitextMining - dataset: config: mal_Mlym-rus_Cyrl name: MTEB FloresBitextMining 
(mal_Mlym-rus_Cyrl) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 98.51778656126481 - type: f1 value: 98.07971014492753 - type: main_score value: 98.07971014492753 - type: precision value: 97.88372859025033 - type: recall value: 98.51778656126481 task: type: BitextMining - dataset: config: pes_Arab-rus_Cyrl name: MTEB FloresBitextMining (pes_Arab-rus_Cyrl) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 98.51778656126481 - type: f1 value: 98.0566534914361 - type: main_score value: 98.0566534914361 - type: precision value: 97.82608695652173 - type: recall value: 98.51778656126481 task: type: BitextMining - dataset: config: srd_Latn-rus_Cyrl name: MTEB FloresBitextMining (srd_Latn-rus_Cyrl) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 82.6086956521739 - type: f1 value: 80.9173470979821 - type: main_score value: 80.9173470979821 - type: precision value: 80.24468672882627 - type: recall value: 82.6086956521739 task: type: BitextMining - dataset: config: tzm_Tfng-rus_Cyrl name: MTEB FloresBitextMining (tzm_Tfng-rus_Cyrl) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 7.41106719367589 - type: f1 value: 6.363562740945329 - type: main_score value: 6.363562740945329 - type: precision value: 6.090373175353411 - type: recall value: 7.41106719367589 task: type: BitextMining - dataset: config: acq_Arab-rus_Cyrl name: MTEB FloresBitextMining (acq_Arab-rus_Cyrl) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 95.25691699604744 - type: f1 value: 93.81422924901187 - type: main_score value: 93.81422924901187 - type: precision value: 93.14064558629775 - type: recall value: 95.25691699604744 task: type: BitextMining - dataset: config: bem_Latn-rus_Cyrl name: MTEB FloresBitextMining (bem_Latn-rus_Cyrl) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 68.08300395256917 - type: f1 value: 65.01368772860867 - type: main_score value: 65.01368772860867 - type: precision value: 63.91052337510628 - type: recall value: 68.08300395256917 task: type: BitextMining - dataset: config: epo_Latn-rus_Cyrl name: MTEB FloresBitextMining (epo_Latn-rus_Cyrl) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 98.41897233201581 - type: f1 value: 98.17193675889328 - type: main_score value: 98.17193675889328 - type: precision value: 98.08210564139418 - type: recall value: 98.41897233201581 task: type: BitextMining - dataset: config: hun_Latn-rus_Cyrl name: MTEB FloresBitextMining (hun_Latn-rus_Cyrl) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 99.30830039525692 - type: f1 value: 99.1106719367589 - type: main_score value: 99.1106719367589 - type: precision value: 99.01185770750988 - type: recall value: 99.30830039525692 task: type: BitextMining - dataset: config: kir_Cyrl-rus_Cyrl name: MTEB FloresBitextMining (kir_Cyrl-rus_Cyrl) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 97.5296442687747 - type: f1 value: 97.07549806364035 - type: main_score value: 97.07549806364035 - type: precision value: 96.90958498023716 - type: 
    - type: recall
      value: 97.5296442687747
    task:
      type: BitextMining

MTEB FloresBitextMining results on the `devtest` split. All pairs below share the same evaluation setup: dataset `mteb/flores`, revision `e6b647fcb6299a2f686f742f4d4c023e553ea67e`, task type BitextMining. For every pair, `recall` equals `accuracy` and `main_score` equals `f1`, so only accuracy, F1, and precision are tabulated.

| Config (pair) | Accuracy | F1 (main_score) | Precision |
|---|---:|---:|---:|
| mar_Deva-rus_Cyrl | 97.82608695652173 | 97.44400527009222 | 97.28966685488425 |
| plt_Latn-rus_Cyrl | 79.9407114624506 | 78.3154177760691 | 77.69877344877344 |
| srp_Cyrl-rus_Cyrl | 99.70355731225297 | 99.60474308300395 | 99.55533596837944 |
| uig_Arab-rus_Cyrl | 83.20158102766798 | 81.44381923034585 | 80.78813411582477 |
| aeb_Arab-rus_Cyrl | 91.20553359683794 | 88.75352907961603 | 87.64328063241106 |
| ben_Beng-rus_Cyrl | 98.91304347826086 | 98.60671936758894 | 98.4766139657444 |
| est_Latn-rus_Cyrl | 96.24505928853755 | 95.27417027417027 | 94.84107378129117 |
| hye_Armn-rus_Cyrl | 98.02371541501977 | 97.67786561264822 | 97.55839022637441 |
| kmb_Latn-rus_Cyrl | 46.047430830039524 | 42.94464804804471 | 41.9851895607238 |
| min_Arab-rus_Cyrl | 3.9525691699604746 | 3.402665192725756 | 3.303787557740127 |
| pol_Latn-rus_Cyrl | 99.60474308300395 | 99.4729907773386 | 99.40711462450594 |
| ssw_Latn-rus_Cyrl | 73.22134387351778 | 70.43086049508975 | 69.35312022355656 |
| ukr_Cyrl-rus_Cyrl | 99.90118577075098 | 99.86824769433464 | 99.85177865612648 |
| afr_Latn-rus_Cyrl | 99.2094861660079 | 98.9459815546772 | 98.81422924901186 |
| bho_Deva-rus_Cyrl | 94.0711462450593 | 93.12182382834557 | 92.7523453232338 |
| eus_Latn-rus_Cyrl | 92.19367588932806 | 91.23604975587072 | 90.86697443588663 |
| ibo_Latn-rus_Cyrl | 82.21343873517787 | 80.17901604858126 | 79.3792284780028 |
| kmr_Latn-rus_Cyrl | 68.67588932806325 | 66.72311714750278 | 66.00178401554004 |
| min_Latn-rus_Cyrl | 78.65612648221344 | 76.26592719972166 | 75.39980459997484 |
| por_Latn-rus_Cyrl | 96.83794466403161 | 95.9669678147939 | 95.59453227931488 |
| sun_Latn-rus_Cyrl | 92.4901185770751 | 91.66553983773662 | 91.34530928009188 |
| umb_Latn-rus_Cyrl | 41.00790513833992 | 38.21319326004483 | 37.200655467675546 |
| ajp_Arab-rus_Cyrl | 95.35573122529645 | 93.97233201581028 | 93.33333333333333 |
| bjn_Arab-rus_Cyrl | 3.6561264822134385 | 3.1071978056336484 | 3.0039741229718215 |
| ewe_Latn-rus_Cyrl | 62.845849802371546 | 59.82201175670472 | 58.72629236362003 |
| ilo_Latn-rus_Cyrl | 83.10276679841897 | 80.75065288987582 | 79.80726451662179 |
| knc_Arab-rus_Cyrl | 10.079051383399209 | 8.759282456080921 | 8.474735138956142 |
| mkd_Cyrl-rus_Cyrl | 98.91304347826086 | 98.55072463768116 | 98.36956521739131 |
| prs_Arab-rus_Cyrl | 99.01185770750988 | 98.68247694334651 | 98.51778656126481 |
| swe_Latn-rus_Cyrl | 99.40711462450594 | 99.22595520421606 | 99.14361001317523 |
| urd_Arab-rus_Cyrl | 97.82608695652173 | 97.25625823451911 | 97.03063241106719 |
| aka_Latn-rus_Cyrl | 81.22529644268775 | 77.94307687941227 | 76.58782793293665 |
| bjn_Latn-rus_Cyrl | 85.27667984189723 | 83.6869192829922 | 83.08670670691656 |
| fao_Latn-rus_Cyrl | 80.9288537549407 | 79.29806087454745 | 78.71445871526987 |
| ind_Latn-rus_Cyrl | 98.12252964426878 | 97.5296442687747 | 97.23320158102767 |
| knc_Latn-rus_Cyrl | 33.49802371541502 | 32.02378215033989 | 31.511356103747406 |
| mlt_Latn-rus_Cyrl | 91.40316205533597 | 90.35317684386006 | 89.94845939633488 |
| quy_Latn-rus_Cyrl | 40.612648221343875 | 38.74337544712602 | 38.133716022178575 |
| swh_Latn-rus_Cyrl | 97.13438735177866 | 96.47435897435898 | 96.18741765480895 |
| uzn_Latn-rus_Cyrl | 96.83794466403161 | 96.26355528529442 | 96.0501756697409 |
| als_Latn-rus_Cyrl | 98.91304347826086 | 98.6907114624506 | 98.6142480707698 |
| bod_Tibt-rus_Cyrl | 1.0869565217391304 | 0.9224649610442628 | 0.8894275740459898 |
| fij_Latn-rus_Cyrl | 63.24110671936759 | 60.373189068189525 | 59.32326368115546 |
| isl_Latn-rus_Cyrl | 89.03162055335969 | 87.3102634715907 | 86.65991814698712 |
| kon_Latn-rus_Cyrl | 73.91304347826086 | 71.518235523573 | 70.58714102449801 |
| mni_Beng-rus_Cyrl | 29.545454545454547 | 27.59513619889114 | 26.983849851025344 |
| ron_Latn-rus_Cyrl | 99.40711462450594 | 99.2094861660079 | 99.1106719367589 |
| szl_Latn-rus_Cyrl | 86.26482213438736 | 85.18912031587512 | 84.77199409959775 |
| vec_Latn-rus_Cyrl | 85.67193675889328 | 84.62529734716581 | 84.2611422440705 |
| amh_Ethi-rus_Cyrl | 94.76284584980237 | 93.91735076517685 | 93.57553798858147 |
| bos_Latn-rus_Cyrl | 99.2094861660079 | 99.05655938264634 | 99.01185770750988 |
| fin_Latn-rus_Cyrl | 98.02371541501977 | 97.43741765480895 | 97.1590909090909 |
| ita_Latn-rus_Cyrl | 99.70355731225297 | 99.60474308300395 | 99.55533596837944 |
| kor_Hang-rus_Cyrl | 97.33201581027669 | 96.49868247694334 | 96.10507246376811 |
| mos_Latn-rus_Cyrl | 34.683794466403164 | 32.766819308009076 | 32.1637493670237 |
| run_Latn-rus_Cyrl | 83.399209486166 | 81.10578750604326 | 80.16763162673529 |
| tam_Taml-rus_Cyrl | 98.41897233201581 | 98.01548089591567 | 97.84020327498588 |
| vie_Latn-rus_Cyrl | 99.1106719367589 | 98.81422924901186 | 98.66600790513834 |
| apc_Arab-rus_Cyrl | 93.87351778656127 | 92.10803689064558 | 91.30434782608695 |
| bug_Latn-rus_Cyrl | 57.608695652173914 | 54.95878654927162 | 54.067987427805654 |
| fon_Latn-rus_Cyrl | 61.95652173913043 | 58.06537275812945 | 56.554057596959204 |
| jav_Latn-rus_Cyrl | 93.47826086956522 | 92.4784405318002 | 92.09168143201127 |
| lao_Laoo-rus_Cyrl | 91.10671936758892 | 89.76104922745239 | 89.24754593232855 |
| mri_Latn-rus_Cyrl | 71.14624505928853 | 68.26947125119062 | 67.15942311051006 |
| rus_Cyrl-ace_Arab | 19.565217391304348 | 16.321465000323805 | 15.478527409347508 |
| rus_Cyrl-bam_Latn | 73.41897233201581 | 68.77366228182746 | 66.96012924273795 |
| rus_Cyrl-dzo_Tibt | 0.592885375494071 | 0.02458062426370458 | 0.012824114724683876 |
| rus_Cyrl-hin_Deva | 99.90118577075098 | 99.86824769433464 | 99.85177865612648 |
| rus_Cyrl-khm_Khmr | 97.13438735177866 | 96.24505928853755 | 95.81686429512516 |
| rus_Cyrl-mag_Deva | 99.50592885375494 | 99.35770750988142 | 99.29183135704875 |
| rus_Cyrl-pap_Latn | 96.93675889328063 | 96.05072463768116 | 95.66040843214758 |
| rus_Cyrl-sot_Latn | 93.67588932806325 | 91.7786561264822 | 90.91238471673255 |
| rus_Cyrl-tur_Latn | 99.01185770750988 | 98.68247694334651 | 98.51778656126481 |
| rus_Cyrl-ace_Latn | 74.1106719367589 | 70.21737923911836 | 68.7068791410511 |
| rus_Cyrl-ban_Latn | 81.7193675889328 | 78.76470334510617 | 77.76208475761422 |
| rus_Cyrl-ell_Grek | 98.3201581027668 | 97.76021080368908 | 97.48023715415019 |
| rus_Cyrl-hne_Deva | 98.51778656126481 | 98.0566534914361 | 97.82608695652173 |
| rus_Cyrl-kik_Latn | 80.73122529644269 | 76.42689244220864 | 74.63877909530083 |
| rus_Cyrl-mai_Deva | 98.91304347826086 | 98.56719367588933 | 98.40250329380763 |
| rus_Cyrl-pbt_Arab | 97.5296442687747 | 96.73913043478261 | 96.36034255599473 |
| rus_Cyrl-spa_Latn | 99.40711462450594 | 99.20948616600789 | 99.1106719367589 |
| rus_Cyrl-twi_Latn | 82.01581027667984 | 78.064787822953 | 76.43272186750448 |
| rus_Cyrl-acm_Arab | 98.3201581027668 | 97.76021080368908 | 97.48023715415019 |
| rus_Cyrl-bel_Cyrl | 98.22134387351778 | 97.67786561264822 | 97.4308300395257 |
| rus_Cyrl-eng_Latn | 99.70355731225297 | 99.60474308300395 | 99.55533596837944 |
| rus_Cyrl-hrv_Latn | 99.1106719367589 | 98.83069828722002 | 98.69894598155466 |
| rus_Cyrl-kin_Latn | 93.37944664031622 | 91.53162055335969 | 90.71475625823452 |
| rus_Cyrl-mal_Mlym | 99.30830039525692 | 99.07773386034255 | 98.96245059288538 |
| rus_Cyrl-pes_Arab | 98.71541501976284 | 98.30368906455863 | 98.10606060606061 |
| rus_Cyrl-srd_Latn | 89.03162055335969 | 86.11048371917937 | 84.86001317523056 |
| rus_Cyrl-tzm_Tfng | 12.351778656126482 | 10.112177999067715 | 9.53495885438645 |
| rus_Cyrl-acq_Arab | 98.91304347826086 | 98.55072463768116 | 98.36956521739131 |
| rus_Cyrl-bem_Latn | 73.22134387351778 | 68.30479412989295 | 66.40073447632736 |
| rus_Cyrl-epo_Latn | 99.1106719367589 | 98.81422924901186 | 98.66600790513834 |
| rus_Cyrl-hun_Latn | 96.83794466403161 | 95.88274044795784 | 95.45454545454545 |
| rus_Cyrl-kir_Cyrl | 96.34387351778656 | 95.49280429715212 | 95.14163372859026 |
| rus_Cyrl-mar_Deva | 98.71541501976284 | 98.28722002635047 | 98.07312252964427 |
| rus_Cyrl-plt_Latn | 88.04347826086956 | 85.14328063241106 | 83.96339168078298 |
| rus_Cyrl-srp_Cyrl | 99.40711462450594 | 99.2094861660079 | 99.1106719367589 |
| rus_Cyrl-uig_Arab | 92.19367588932806 | 89.98541313758706 | 89.01021080368906 |
| rus_Cyrl-aeb_Arab | 95.8498023715415 | 94.63109354413703 | 94.05467720685111 |
| rus_Cyrl-ben_Beng | 99.40711462450594 | 99.2094861660079 | 99.1106719367589 |
| rus_Cyrl-est_Latn | 95.55335968379447 | 94.2588932806324 | 93.65118577075098 |
| rus_Cyrl-hye_Armn | 98.71541501976284 | 98.28722002635045 | 98.07312252964427 |
| rus_Cyrl-kmb_Latn | 54.24901185770751 | 49.46146674116913 | 47.81033799314432 |
| rus_Cyrl-min_Arab | 15.810276679841898 | 13.271207641419332 | 12.510673148766033 |
| rus_Cyrl-pol_Latn | 98.71541501976284 | 98.32674571805006 | 98.14723320158103 |
| rus_Cyrl-ssw_Latn | 80.8300395256917 | 76.51717847370023 | 74.74143610013175 |
| rus_Cyrl-ukr_Cyrl | 99.60474308300395 | 99.4729907773386 | 99.40711462450594 |
| rus_Cyrl-afr_Latn | 99.1106719367589 | 98.81422924901186 | 98.66600790513834 |
| rus_Cyrl-bho_Deva | 96.6403162055336 | 95.56982872200265 | 95.0592885375494 |
| rus_Cyrl-eus_Latn | 97.62845849802372 | 96.9038208168643 | 96.55797101449275 |
| rus_Cyrl-ibo_Latn | 89.2292490118577 | 86.35234330886506 | 85.09881422924902 |
| rus_Cyrl-kmr_Latn | 83.49802371541502 | 79.23630717108978 | 77.48188405797102 |
| rus_Cyrl-min_Latn | 79.34782608695652 | 75.31689928429059 | 73.91519410541149 |
| rus_Cyrl-por_Latn | 96.54150197628458 | 95.53218520609825 | 95.07575757575756 |
| rus_Cyrl-sun_Latn | 93.2806324110672 | 91.56973461321287 | 90.84396334890405 |
| rus_Cyrl-umb_Latn | 51.87747035573123 | 46.36591778884269 | 44.57730391234227 |
| rus_Cyrl-ajp_Arab | 98.71541501976284 | 98.30368906455863 | 98.10606060606061 |
| rus_Cyrl-bjn_Arab | 14.82213438735178 | 12.365434276616856 | 11.802079517180589 |
| rus_Cyrl-ewe_Latn | 71.44268774703558 | 66.74603174603175 | 64.99933339607253 |
| rus_Cyrl-ilo_Latn | 85.86956521739131 | 83.00139015960917 | 81.91411396574439 |
| rus_Cyrl-knc_Arab | 14.525691699604742 | 12.618283715726806 | 12.048458493742352 |
| rus_Cyrl-mkd_Cyrl | 99.40711462450594 | 99.22595520421606 | 99.14361001317523 |
| rus_Cyrl-prs_Arab | 99.30830039525692 | 99.07773386034255 | 98.96245059288538 |
| rus_Cyrl-swe_Latn | 99.30830039525692 | 99.07773386034256 | 98.96245059288538 |
| rus_Cyrl-urd_Arab | 98.61660079051383 | 98.15546772068511 | 97.92490118577075 |
| rus_Cyrl-aka_Latn | 81.02766798418972 | 76.73277809147375 | 74.97404165882426 |
| rus_Cyrl-bjn_Latn | 86.7588932806324 | 83.92064566965753 | 82.83734079929732 |
| rus_Cyrl-fao_Latn | 88.43873517786561 | 85.48136645962732 | 84.23418972332016 |
| rus_Cyrl-ind_Latn | 99.01185770750988 | 98.68247694334651 | 98.51778656126481 |
| rus_Cyrl-knc_Latn | 45.8498023715415 | 40.112030865489366 | 38.28262440050776 |
| rus_Cyrl-mlt_Latn | 93.18181818181817 | 91.30787690570298 | 90.4983060417843 |
| rus_Cyrl-quy_Latn | 62.450592885375485 | 57.28742975628178 | 55.56854987623269 |
| rus_Cyrl-swh_Latn | 98.3201581027668 | 97.77667984189723 | 97.51317523056655 |
| rus_Cyrl-uzn_Latn | 98.12252964426878 | 97.59081498211933 | 97.34848484848484 |
| rus_Cyrl-als_Latn | 99.30830039525692 | 99.09420289855073 | 98.99538866930172 |
| rus_Cyrl-bod_Tibt | 11.561264822134387 | 8.121312045385636 | 7.350577020893972 |
| rus_Cyrl-fij_Latn | 72.23320158102767 | 67.21000233846082 | 65.3869439739005 |
| rus_Cyrl-isl_Latn | 91.99604743083005 | 89.75955204216073 | 88.7598814229249 |
| rus_Cyrl-kon_Latn | 81.81818181818183 | 77.77800098452272 | 76.1521268586486 |
| rus_Cyrl-mni_Beng | 54.74308300395256 | 48.97285299254615 | 46.95125742968299 |
| rus_Cyrl-ron_Latn | 98.22134387351778 | 97.64492753623189 | 97.36495388669302 |
| rus_Cyrl-szl_Latn | 92.09486166007905 | 90.10375494071147 | 89.29606625258798 |
| rus_Cyrl-vec_Latn | 92.4901185770751 | 90.51430453604365 | 89.69367588932808 |
| rus_Cyrl-amh_Ethi | 97.82608695652173 | 97.11791831357048 | 96.77206851119894 |
| rus_Cyrl-bos_Latn | 98.91304347826086 | 98.55072463768116 | 98.36956521739131 |
| rus_Cyrl-fin_Latn | 95.65217391304348 | 94.4235836627141 | 93.84881422924902 |
| rus_Cyrl-ita_Latn | 98.91304347826086 | 98.55072463768117 | 98.36956521739131 |
| rus_Cyrl-kor_Hang | 95.55335968379447 | 94.15349143610013 | 93.49472990777339 |
| rus_Cyrl-mos_Latn | 43.67588932806324 | 38.84849721190082 | 37.43294462099682 |
| rus_Cyrl-run_Latn | 90.21739130434783 | 87.37483530961792 | 86.07872200263506 |
| rus_Cyrl-tam_Taml | 99.40711462450594 | 99.2094861660079 | 99.1106719367589 |
| rus_Cyrl-vie_Latn | 97.03557312252964 | 96.13636363636364 | 95.70981554677206 |
| rus_Cyrl-apc_Arab | 98.12252964426878 | 97.49670619235836 | 97.18379446640316 |
| rus_Cyrl-bug_Latn | 67.29249011857708 | 62.09268717667927 | 60.28554009748714 |
| rus_Cyrl-fon_Latn | 63.43873517786561 | 57.66660107569199 | 55.66676396919363 |
| rus_Cyrl-jav_Latn | 94.46640316205533 | 92.89384528514964 | 92.19367588932806 |
| rus_Cyrl-lao_Laoo | 97.23320158102767 | 96.40974967061922 | 96.034255599473 |
| rus_Cyrl-mri_Latn | 76.77865612648222 | 73.11286539547409 | 71.78177214337046 |
| rus_Cyrl-taq_Latn | 41.99604743083004 | 37.25127063318763 | 35.718929186985726 |
| rus_Cyrl-war_Latn | 95.55335968379447 | 94.1699604743083 | 93.52766798418972 |
| rus_Cyrl-arb_Arab | 99.60474308300395 | 99.4729907773386 | 99.40711462450594 |
| rus_Cyrl-bul_Cyrl | 99.70355731225297 | 99.60474308300395 | 99.55533596837944 |
| rus_Cyrl-fra_Latn | 99.60474308300395 | 99.47299077733861 | 99.40711462450594 |
| rus_Cyrl-jpn_Jpan | 96.44268774703558 | 95.30632411067194 | 94.76284584980237 |
| rus_Cyrl-lij_Latn | 90.21739130434783 | 87.4703557312253 | 86.29611330698287 |
| rus_Cyrl-mya_Mymr | 98.02371541501977 | 97.364953886693 | 97.03557312252964 |
| rus_Cyrl-sag_Latn | 54.841897233201585 | 49.61882037503349 | 47.831968755881796 |
| rus_Cyrl-taq_Tfng | 15.316205533596838 | 11.614836360389717 | 10.741446193235223 |
| rus_Cyrl-wol_Latn | 67.88537549407114 | 62.2536417249856 | 60.27629128666678 |
| rus_Cyrl-arb_Latn | 27.766798418972332 | 23.39674889624077 | 22.28521155585345 |
| rus_Cyrl-cat_Latn | 97.23320158102767 | 96.42151326933936 | 96.04743083003953 |
| rus_Cyrl-fur_Latn | 88.63636363636364 | 85.80792396009788 | 84.61508901726293 |
| rus_Cyrl-kab_Latn | 48.12252964426877 | 43.05387582971066 | 41.44165117538212 |
| rus_Cyrl-lim_Latn | 81.81818181818183 | 77.81676163099087 | 76.19565217391305 |
| rus_Cyrl-nld_Latn | 97.33201581027669 | 96.4756258234519 | 96.06389986824769 |
| rus_Cyrl-san_Deva | 93.47826086956522 | 91.70289855072463 | 90.9370882740448 |
| rus_Cyrl-tat_Cyrl | 97.72727272727273 | 97.00263504611331 | 96.65678524374177 |
| rus_Cyrl-xho_Latn | 93.08300395256917 | 91.12977602108036 | 90.22562582345192 |
| rus_Cyrl-ars_Arab | 99.40711462450594 | 99.2094861660079 | 99.1106719367589 |

  - dataset:
      config: rus_Cyrl-ceb_Latn
      name: MTEB FloresBitextMining (rus_Cyrl-ceb_Latn)
      revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
      split: devtest
      type: mteb/flores
    metrics:
    - type: accuracy
      value: 95.65217391304348
    - type: f1
      value: 94.3544137022398
    - type: main_score
      value: 94.3544137022398
    - type: precision
      value:
93.76646903820817 - type: recall value: 95.65217391304348 task: type: BitextMining - dataset: config: rus_Cyrl-fuv_Latn name: MTEB FloresBitextMining (rus_Cyrl-fuv_Latn) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 51.18577075098815 - type: f1 value: 44.5990252610806 - type: main_score value: 44.5990252610806 - type: precision value: 42.34331599450177 - type: recall value: 51.18577075098815 task: type: BitextMining - dataset: config: rus_Cyrl-kac_Latn name: MTEB FloresBitextMining (rus_Cyrl-kac_Latn) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 46.93675889328063 - type: f1 value: 41.79004018701787 - type: main_score value: 41.79004018701787 - type: precision value: 40.243355662392624 - type: recall value: 46.93675889328063 task: type: BitextMining - dataset: config: rus_Cyrl-lin_Latn name: MTEB FloresBitextMining (rus_Cyrl-lin_Latn) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 91.50197628458498 - type: f1 value: 89.1205533596838 - type: main_score value: 89.1205533596838 - type: precision value: 88.07147562582345 - type: recall value: 91.50197628458498 task: type: BitextMining - dataset: config: rus_Cyrl-nno_Latn name: MTEB FloresBitextMining (rus_Cyrl-nno_Latn) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 98.81422924901186 - type: f1 value: 98.41897233201581 - type: main_score value: 98.41897233201581 - type: precision value: 98.22134387351778 - type: recall value: 98.81422924901186 task: type: BitextMining - dataset: config: rus_Cyrl-sat_Olck name: MTEB FloresBitextMining (rus_Cyrl-sat_Olck) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 2.371541501976284 - type: f1 value: 1.0726274943087382 - type: main_score value: 1.0726274943087382 - type: precision value: 0.875279634748803 - type: recall value: 2.371541501976284 task: type: BitextMining - dataset: config: rus_Cyrl-tel_Telu name: MTEB FloresBitextMining (rus_Cyrl-tel_Telu) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 99.01185770750988 - type: f1 value: 98.68247694334651 - type: main_score value: 98.68247694334651 - type: precision value: 98.51778656126481 - type: recall value: 99.01185770750988 task: type: BitextMining - dataset: config: rus_Cyrl-ydd_Hebr name: MTEB FloresBitextMining (rus_Cyrl-ydd_Hebr) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 89.42687747035573 - type: f1 value: 86.47609636740073 - type: main_score value: 86.47609636740073 - type: precision value: 85.13669301712781 - type: recall value: 89.42687747035573 task: type: BitextMining - dataset: config: rus_Cyrl-ary_Arab name: MTEB FloresBitextMining (rus_Cyrl-ary_Arab) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 89.82213438735178 - type: f1 value: 87.04545454545456 - type: main_score value: 87.04545454545456 - type: precision value: 85.76910408432148 - type: recall value: 89.82213438735178 task: type: BitextMining - dataset: config: rus_Cyrl-ces_Latn name: MTEB FloresBitextMining (rus_Cyrl-ces_Latn) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: 
accuracy value: 99.2094861660079 - type: f1 value: 98.9459815546772 - type: main_score value: 98.9459815546772 - type: precision value: 98.81422924901186 - type: recall value: 99.2094861660079 task: type: BitextMining - dataset: config: rus_Cyrl-gaz_Latn name: MTEB FloresBitextMining (rus_Cyrl-gaz_Latn) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 64.9209486166008 - type: f1 value: 58.697458119394874 - type: main_score value: 58.697458119394874 - type: precision value: 56.43402189597842 - type: recall value: 64.9209486166008 task: type: BitextMining - dataset: config: rus_Cyrl-kam_Latn name: MTEB FloresBitextMining (rus_Cyrl-kam_Latn) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 59.18972332015811 - type: f1 value: 53.19031511966295 - type: main_score value: 53.19031511966295 - type: precision value: 51.08128357343655 - type: recall value: 59.18972332015811 task: type: BitextMining - dataset: config: rus_Cyrl-lit_Latn name: MTEB FloresBitextMining (rus_Cyrl-lit_Latn) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 96.54150197628458 - type: f1 value: 95.5368906455863 - type: main_score value: 95.5368906455863 - type: precision value: 95.0592885375494 - type: recall value: 96.54150197628458 task: type: BitextMining - dataset: config: rus_Cyrl-nob_Latn name: MTEB FloresBitextMining (rus_Cyrl-nob_Latn) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 98.12252964426878 - type: f1 value: 97.51317523056655 - type: main_score value: 97.51317523056655 - type: precision value: 97.2167325428195 - type: recall value: 98.12252964426878 task: type: BitextMining - dataset: config: rus_Cyrl-scn_Latn name: MTEB FloresBitextMining (rus_Cyrl-scn_Latn) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 84.0909090909091 - type: f1 value: 80.37000439174352 - type: main_score value: 80.37000439174352 - type: precision value: 78.83994628559846 - type: recall value: 84.0909090909091 task: type: BitextMining - dataset: config: rus_Cyrl-tgk_Cyrl name: MTEB FloresBitextMining (rus_Cyrl-tgk_Cyrl) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 92.68774703557312 - type: f1 value: 90.86344814605684 - type: main_score value: 90.86344814605684 - type: precision value: 90.12516469038208 - type: recall value: 92.68774703557312 task: type: BitextMining - dataset: config: rus_Cyrl-yor_Latn name: MTEB FloresBitextMining (rus_Cyrl-yor_Latn) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 72.13438735177866 - type: f1 value: 66.78759646150951 - type: main_score value: 66.78759646150951 - type: precision value: 64.85080192096002 - type: recall value: 72.13438735177866 task: type: BitextMining - dataset: config: rus_Cyrl-arz_Arab name: MTEB FloresBitextMining (rus_Cyrl-arz_Arab) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 98.02371541501977 - type: f1 value: 97.364953886693 - type: main_score value: 97.364953886693 - type: precision value: 97.03557312252964 - type: recall value: 98.02371541501977 task: type: BitextMining - dataset: config: rus_Cyrl-cjk_Latn name: MTEB FloresBitextMining 
(rus_Cyrl-cjk_Latn) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 51.976284584980235 - type: f1 value: 46.468762353149714 - type: main_score value: 46.468762353149714 - type: precision value: 44.64073366247278 - type: recall value: 51.976284584980235 task: type: BitextMining - dataset: config: rus_Cyrl-gla_Latn name: MTEB FloresBitextMining (rus_Cyrl-gla_Latn) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 79.74308300395256 - type: f1 value: 75.55611165294958 - type: main_score value: 75.55611165294958 - type: precision value: 73.95033408620365 - type: recall value: 79.74308300395256 task: type: BitextMining - dataset: config: rus_Cyrl-kan_Knda name: MTEB FloresBitextMining (rus_Cyrl-kan_Knda) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 99.2094861660079 - type: f1 value: 98.96245059288538 - type: main_score value: 98.96245059288538 - type: precision value: 98.84716732542819 - type: recall value: 99.2094861660079 task: type: BitextMining - dataset: config: rus_Cyrl-lmo_Latn name: MTEB FloresBitextMining (rus_Cyrl-lmo_Latn) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 82.41106719367589 - type: f1 value: 78.56413514022209 - type: main_score value: 78.56413514022209 - type: precision value: 77.15313068573938 - type: recall value: 82.41106719367589 task: type: BitextMining - dataset: config: rus_Cyrl-npi_Deva name: MTEB FloresBitextMining (rus_Cyrl-npi_Deva) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 98.71541501976284 - type: f1 value: 98.3201581027668 - type: main_score value: 98.3201581027668 - type: precision value: 98.12252964426878 - type: recall value: 98.71541501976284 task: type: BitextMining - dataset: config: rus_Cyrl-shn_Mymr name: MTEB FloresBitextMining (rus_Cyrl-shn_Mymr) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 57.11462450592886 - type: f1 value: 51.51361369197337 - type: main_score value: 51.51361369197337 - type: precision value: 49.71860043649573 - type: recall value: 57.11462450592886 task: type: BitextMining - dataset: config: rus_Cyrl-tgl_Latn name: MTEB FloresBitextMining (rus_Cyrl-tgl_Latn) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 97.82608695652173 - type: f1 value: 97.18379446640316 - type: main_score value: 97.18379446640316 - type: precision value: 96.88735177865613 - type: recall value: 97.82608695652173 task: type: BitextMining - dataset: config: rus_Cyrl-yue_Hant name: MTEB FloresBitextMining (rus_Cyrl-yue_Hant) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 99.30830039525692 - type: f1 value: 99.09420289855072 - type: main_score value: 99.09420289855072 - type: precision value: 98.9953886693017 - type: recall value: 99.30830039525692 task: type: BitextMining - dataset: config: rus_Cyrl-asm_Beng name: MTEB FloresBitextMining (rus_Cyrl-asm_Beng) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 95.55335968379447 - type: f1 value: 94.16007905138339 - type: main_score value: 94.16007905138339 - type: precision value: 
93.50296442687747 - type: recall value: 95.55335968379447 task: type: BitextMining - dataset: config: rus_Cyrl-ckb_Arab name: MTEB FloresBitextMining (rus_Cyrl-ckb_Arab) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 92.88537549407114 - type: f1 value: 90.76745718050066 - type: main_score value: 90.76745718050066 - type: precision value: 89.80072463768116 - type: recall value: 92.88537549407114 task: type: BitextMining - dataset: config: rus_Cyrl-gle_Latn name: MTEB FloresBitextMining (rus_Cyrl-gle_Latn) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 91.699604743083 - type: f1 value: 89.40899680030115 - type: main_score value: 89.40899680030115 - type: precision value: 88.40085638998683 - type: recall value: 91.699604743083 task: type: BitextMining - dataset: config: rus_Cyrl-kas_Arab name: MTEB FloresBitextMining (rus_Cyrl-kas_Arab) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 88.3399209486166 - type: f1 value: 85.14351590438548 - type: main_score value: 85.14351590438548 - type: precision value: 83.72364953886692 - type: recall value: 88.3399209486166 task: type: BitextMining - dataset: config: rus_Cyrl-ltg_Latn name: MTEB FloresBitextMining (rus_Cyrl-ltg_Latn) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 83.399209486166 - type: f1 value: 79.88408934061107 - type: main_score value: 79.88408934061107 - type: precision value: 78.53794509179885 - type: recall value: 83.399209486166 task: type: BitextMining - dataset: config: rus_Cyrl-nso_Latn name: MTEB FloresBitextMining (rus_Cyrl-nso_Latn) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 91.20553359683794 - type: f1 value: 88.95406635525212 - type: main_score value: 88.95406635525212 - type: precision value: 88.01548089591567 - type: recall value: 91.20553359683794 task: type: BitextMining - dataset: config: rus_Cyrl-sin_Sinh name: MTEB FloresBitextMining (rus_Cyrl-sin_Sinh) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 98.91304347826086 - type: f1 value: 98.56719367588933 - type: main_score value: 98.56719367588933 - type: precision value: 98.40250329380763 - type: recall value: 98.91304347826086 task: type: BitextMining - dataset: config: rus_Cyrl-tha_Thai name: MTEB FloresBitextMining (rus_Cyrl-tha_Thai) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 95.94861660079052 - type: f1 value: 94.66403162055336 - type: main_score value: 94.66403162055336 - type: precision value: 94.03820816864295 - type: recall value: 95.94861660079052 task: type: BitextMining - dataset: config: rus_Cyrl-zho_Hans name: MTEB FloresBitextMining (rus_Cyrl-zho_Hans) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 97.4308300395257 - type: f1 value: 96.5909090909091 - type: main_score value: 96.5909090909091 - type: precision value: 96.17918313570487 - type: recall value: 97.4308300395257 task: type: BitextMining - dataset: config: rus_Cyrl-ast_Latn name: MTEB FloresBitextMining (rus_Cyrl-ast_Latn) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 
94.46640316205533 - type: f1 value: 92.86890645586297 - type: main_score value: 92.86890645586297 - type: precision value: 92.14756258234519 - type: recall value: 94.46640316205533 task: type: BitextMining - dataset: config: rus_Cyrl-crh_Latn name: MTEB FloresBitextMining (rus_Cyrl-crh_Latn) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 94.66403162055336 - type: f1 value: 93.2663592446201 - type: main_score value: 93.2663592446201 - type: precision value: 92.66716073781292 - type: recall value: 94.66403162055336 task: type: BitextMining - dataset: config: rus_Cyrl-glg_Latn name: MTEB FloresBitextMining (rus_Cyrl-glg_Latn) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 98.81422924901186 - type: f1 value: 98.46837944664031 - type: main_score value: 98.46837944664031 - type: precision value: 98.3201581027668 - type: recall value: 98.81422924901186 task: type: BitextMining - dataset: config: rus_Cyrl-kas_Deva name: MTEB FloresBitextMining (rus_Cyrl-kas_Deva) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 69.1699604743083 - type: f1 value: 63.05505292906477 - type: main_score value: 63.05505292906477 - type: precision value: 60.62594108789761 - type: recall value: 69.1699604743083 task: type: BitextMining - dataset: config: rus_Cyrl-ltz_Latn name: MTEB FloresBitextMining (rus_Cyrl-ltz_Latn) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 91.40316205533597 - type: f1 value: 89.26571616789009 - type: main_score value: 89.26571616789009 - type: precision value: 88.40179747788443 - type: recall value: 91.40316205533597 task: type: BitextMining - dataset: config: rus_Cyrl-nus_Latn name: MTEB FloresBitextMining (rus_Cyrl-nus_Latn) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 38.93280632411067 - type: f1 value: 33.98513032905371 - type: main_score value: 33.98513032905371 - type: precision value: 32.56257884802308 - type: recall value: 38.93280632411067 task: type: BitextMining - dataset: config: rus_Cyrl-slk_Latn name: MTEB FloresBitextMining (rus_Cyrl-slk_Latn) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 98.02371541501977 - type: f1 value: 97.42094861660078 - type: main_score value: 97.42094861660078 - type: precision value: 97.14262187088273 - type: recall value: 98.02371541501977 task: type: BitextMining - dataset: config: rus_Cyrl-tir_Ethi name: MTEB FloresBitextMining (rus_Cyrl-tir_Ethi) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 91.30434782608695 - type: f1 value: 88.78129117259552 - type: main_score value: 88.78129117259552 - type: precision value: 87.61528326745717 - type: recall value: 91.30434782608695 task: type: BitextMining - dataset: config: rus_Cyrl-zho_Hant name: MTEB FloresBitextMining (rus_Cyrl-zho_Hant) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 99.1106719367589 - type: f1 value: 98.81422924901186 - type: main_score value: 98.81422924901186 - type: precision value: 98.66600790513834 - type: recall value: 99.1106719367589 task: type: BitextMining - dataset: config: rus_Cyrl-awa_Deva name: MTEB FloresBitextMining 
(rus_Cyrl-awa_Deva) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 98.12252964426878 - type: f1 value: 97.70092226613966 - type: main_score value: 97.70092226613966 - type: precision value: 97.50494071146245 - type: recall value: 98.12252964426878 task: type: BitextMining - dataset: config: rus_Cyrl-cym_Latn name: MTEB FloresBitextMining (rus_Cyrl-cym_Latn) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 95.94861660079052 - type: f1 value: 94.74308300395256 - type: main_score value: 94.74308300395256 - type: precision value: 94.20289855072464 - type: recall value: 95.94861660079052 task: type: BitextMining - dataset: config: rus_Cyrl-grn_Latn name: MTEB FloresBitextMining (rus_Cyrl-grn_Latn) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 77.96442687747036 - type: f1 value: 73.64286789187975 - type: main_score value: 73.64286789187975 - type: precision value: 71.99324893260821 - type: recall value: 77.96442687747036 task: type: BitextMining - dataset: config: rus_Cyrl-kat_Geor name: MTEB FloresBitextMining (rus_Cyrl-kat_Geor) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 98.91304347826086 - type: f1 value: 98.56719367588933 - type: main_score value: 98.56719367588933 - type: precision value: 98.40250329380764 - type: recall value: 98.91304347826086 task: type: BitextMining - dataset: config: rus_Cyrl-lua_Latn name: MTEB FloresBitextMining (rus_Cyrl-lua_Latn) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 72.03557312252964 - type: f1 value: 67.23928163404449 - type: main_score value: 67.23928163404449 - type: precision value: 65.30797101449275 - type: recall value: 72.03557312252964 task: type: BitextMining - dataset: config: rus_Cyrl-nya_Latn name: MTEB FloresBitextMining (rus_Cyrl-nya_Latn) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 92.29249011857708 - type: f1 value: 90.0494071146245 - type: main_score value: 90.0494071146245 - type: precision value: 89.04808959156786 - type: recall value: 92.29249011857708 task: type: BitextMining - dataset: config: rus_Cyrl-slv_Latn name: MTEB FloresBitextMining (rus_Cyrl-slv_Latn) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 98.71541501976284 - type: f1 value: 98.30368906455863 - type: main_score value: 98.30368906455863 - type: precision value: 98.10606060606061 - type: recall value: 98.71541501976284 task: type: BitextMining - dataset: config: rus_Cyrl-tpi_Latn name: MTEB FloresBitextMining (rus_Cyrl-tpi_Latn) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 80.53359683794467 - type: f1 value: 76.59481822525301 - type: main_score value: 76.59481822525301 - type: precision value: 75.12913223140497 - type: recall value: 80.53359683794467 task: type: BitextMining - dataset: config: rus_Cyrl-zsm_Latn name: MTEB FloresBitextMining (rus_Cyrl-zsm_Latn) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 97.33201581027669 - type: f1 value: 96.58620365142104 - type: main_score value: 96.58620365142104 - type: precision value: 
96.26152832674572 - type: recall value: 97.33201581027669 task: type: BitextMining - dataset: config: rus_Cyrl-ayr_Latn name: MTEB FloresBitextMining (rus_Cyrl-ayr_Latn) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 45.55335968379446 - type: f1 value: 40.13076578531388 - type: main_score value: 40.13076578531388 - type: precision value: 38.398064362362355 - type: recall value: 45.55335968379446 task: type: BitextMining - dataset: config: rus_Cyrl-dan_Latn name: MTEB FloresBitextMining (rus_Cyrl-dan_Latn) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 99.01185770750988 - type: f1 value: 98.68247694334651 - type: main_score value: 98.68247694334651 - type: precision value: 98.51778656126481 - type: recall value: 99.01185770750988 task: type: BitextMining - dataset: config: rus_Cyrl-guj_Gujr name: MTEB FloresBitextMining (rus_Cyrl-guj_Gujr) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 99.01185770750988 - type: f1 value: 98.68247694334651 - type: main_score value: 98.68247694334651 - type: precision value: 98.51778656126481 - type: recall value: 99.01185770750988 task: type: BitextMining - dataset: config: rus_Cyrl-kaz_Cyrl name: MTEB FloresBitextMining (rus_Cyrl-kaz_Cyrl) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 98.81422924901186 - type: f1 value: 98.43544137022398 - type: main_score value: 98.43544137022398 - type: precision value: 98.25428194993412 - type: recall value: 98.81422924901186 task: type: BitextMining - dataset: config: rus_Cyrl-lug_Latn name: MTEB FloresBitextMining (rus_Cyrl-lug_Latn) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 82.21343873517787 - type: f1 value: 77.97485726833554 - type: main_score value: 77.97485726833554 - type: precision value: 76.22376717485415 - type: recall value: 82.21343873517787 task: type: BitextMining - dataset: config: rus_Cyrl-oci_Latn name: MTEB FloresBitextMining (rus_Cyrl-oci_Latn) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 93.87351778656127 - type: f1 value: 92.25319969885187 - type: main_score value: 92.25319969885187 - type: precision value: 91.5638528138528 - type: recall value: 93.87351778656127 task: type: BitextMining - dataset: config: rus_Cyrl-smo_Latn name: MTEB FloresBitextMining (rus_Cyrl-smo_Latn) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 84.88142292490119 - type: f1 value: 81.24364765669114 - type: main_score value: 81.24364765669114 - type: precision value: 79.69991416137661 - type: recall value: 84.88142292490119 task: type: BitextMining - dataset: config: rus_Cyrl-tsn_Latn name: MTEB FloresBitextMining (rus_Cyrl-tsn_Latn) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 87.05533596837944 - type: f1 value: 83.90645586297761 - type: main_score value: 83.90645586297761 - type: precision value: 82.56752305665349 - type: recall value: 87.05533596837944 task: type: BitextMining - dataset: config: rus_Cyrl-zul_Latn name: MTEB FloresBitextMining (rus_Cyrl-zul_Latn) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: 
accuracy value: 95.15810276679841 - type: f1 value: 93.77140974967062 - type: main_score value: 93.77140974967062 - type: precision value: 93.16534914361002 - type: recall value: 95.15810276679841 task: type: BitextMining - dataset: config: rus_Cyrl-azb_Arab name: MTEB FloresBitextMining (rus_Cyrl-azb_Arab) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 81.91699604743083 - type: f1 value: 77.18050065876152 - type: main_score value: 77.18050065876152 - type: precision value: 75.21519543258673 - type: recall value: 81.91699604743083 task: type: BitextMining - dataset: config: rus_Cyrl-deu_Latn name: MTEB FloresBitextMining (rus_Cyrl-deu_Latn) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 99.50592885375494 - type: f1 value: 99.34123847167325 - type: main_score value: 99.34123847167325 - type: precision value: 99.2588932806324 - type: recall value: 99.50592885375494 task: type: BitextMining - dataset: config: rus_Cyrl-hat_Latn name: MTEB FloresBitextMining (rus_Cyrl-hat_Latn) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 91.00790513833992 - type: f1 value: 88.69126043039086 - type: main_score value: 88.69126043039086 - type: precision value: 87.75774044795784 - type: recall value: 91.00790513833992 task: type: BitextMining - dataset: config: rus_Cyrl-kbp_Latn name: MTEB FloresBitextMining (rus_Cyrl-kbp_Latn) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 47.233201581027664 - type: f1 value: 43.01118618096943 - type: main_score value: 43.01118618096943 - type: precision value: 41.739069205043556 - type: recall value: 47.233201581027664 task: type: BitextMining - dataset: config: rus_Cyrl-luo_Latn name: MTEB FloresBitextMining (rus_Cyrl-luo_Latn) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 60.47430830039525 - type: f1 value: 54.83210565429816 - type: main_score value: 54.83210565429816 - type: precision value: 52.81630744284779 - type: recall value: 60.47430830039525 task: type: BitextMining - dataset: config: rus_Cyrl-ory_Orya name: MTEB FloresBitextMining (rus_Cyrl-ory_Orya) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 99.1106719367589 - type: f1 value: 98.83069828722003 - type: main_score value: 98.83069828722003 - type: precision value: 98.69894598155467 - type: recall value: 99.1106719367589 task: type: BitextMining - dataset: config: rus_Cyrl-sna_Latn name: MTEB FloresBitextMining (rus_Cyrl-sna_Latn) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 89.72332015810277 - type: f1 value: 87.30013645774514 - type: main_score value: 87.30013645774514 - type: precision value: 86.25329380764163 - type: recall value: 89.72332015810277 task: type: BitextMining - dataset: config: rus_Cyrl-tso_Latn name: MTEB FloresBitextMining (rus_Cyrl-tso_Latn) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 84.38735177865613 - type: f1 value: 80.70424744337788 - type: main_score value: 80.70424744337788 - type: precision value: 79.18560606060606 - type: recall value: 84.38735177865613 task: type: BitextMining - dataset: config: rus_Cyrl-azj_Latn name: MTEB 
FloresBitextMining (rus_Cyrl-azj_Latn) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 97.33201581027669 - type: f1 value: 96.56455862977602 - type: main_score value: 96.56455862977602 - type: precision value: 96.23682476943345 - type: recall value: 97.33201581027669 task: type: BitextMining - dataset: config: rus_Cyrl-dik_Latn name: MTEB FloresBitextMining (rus_Cyrl-dik_Latn) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 46.047430830039524 - type: f1 value: 40.05513069495283 - type: main_score value: 40.05513069495283 - type: precision value: 38.072590197096126 - type: recall value: 46.047430830039524 task: type: BitextMining - dataset: config: rus_Cyrl-hau_Latn name: MTEB FloresBitextMining (rus_Cyrl-hau_Latn) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 87.94466403162056 - type: f1 value: 84.76943346508563 - type: main_score value: 84.76943346508563 - type: precision value: 83.34486166007905 - type: recall value: 87.94466403162056 task: type: BitextMining - dataset: config: rus_Cyrl-kea_Latn name: MTEB FloresBitextMining (rus_Cyrl-kea_Latn) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 89.42687747035573 - type: f1 value: 86.83803021747684 - type: main_score value: 86.83803021747684 - type: precision value: 85.78416149068323 - type: recall value: 89.42687747035573 task: type: BitextMining - dataset: config: rus_Cyrl-lus_Latn name: MTEB FloresBitextMining (rus_Cyrl-lus_Latn) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 68.97233201581028 - type: f1 value: 64.05480726292745 - type: main_score value: 64.05480726292745 - type: precision value: 62.42670749487858 - type: recall value: 68.97233201581028 task: type: BitextMining - dataset: config: rus_Cyrl-pag_Latn name: MTEB FloresBitextMining (rus_Cyrl-pag_Latn) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 78.75494071146245 - type: f1 value: 74.58573558401933 - type: main_score value: 74.58573558401933 - type: precision value: 73.05532028358115 - type: recall value: 78.75494071146245 task: type: BitextMining - dataset: config: rus_Cyrl-snd_Arab name: MTEB FloresBitextMining (rus_Cyrl-snd_Arab) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 95.8498023715415 - type: f1 value: 94.56521739130434 - type: main_score value: 94.56521739130434 - type: precision value: 93.97233201581028 - type: recall value: 95.8498023715415 task: type: BitextMining - dataset: config: rus_Cyrl-tuk_Latn name: MTEB FloresBitextMining (rus_Cyrl-tuk_Latn) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 68.08300395256917 - type: f1 value: 62.93565240205557 - type: main_score value: 62.93565240205557 - type: precision value: 61.191590257043934 - type: recall value: 68.08300395256917 task: type: BitextMining - dataset: config: rus_Cyrl-bak_Cyrl name: MTEB FloresBitextMining (rus_Cyrl-bak_Cyrl) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 96.04743083003953 - type: f1 value: 94.86824769433464 - type: main_score value: 94.86824769433464 - type: precision 
value: 94.34288537549406 - type: recall value: 96.04743083003953 task: type: BitextMining - dataset: config: rus_Cyrl-dyu_Latn name: MTEB FloresBitextMining (rus_Cyrl-dyu_Latn) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 37.45059288537549 - type: f1 value: 31.670482312800807 - type: main_score value: 31.670482312800807 - type: precision value: 29.99928568357422 - type: recall value: 37.45059288537549 task: type: BitextMining - dataset: config: rus_Cyrl-heb_Hebr name: MTEB FloresBitextMining (rus_Cyrl-heb_Hebr) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 97.23320158102767 - type: f1 value: 96.38998682476942 - type: main_score value: 96.38998682476942 - type: precision value: 95.99802371541502 - type: recall value: 97.23320158102767 task: type: BitextMining - dataset: config: rus_Cyrl-khk_Cyrl name: MTEB FloresBitextMining (rus_Cyrl-khk_Cyrl) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 98.41897233201581 - type: f1 value: 98.00724637681158 - type: main_score value: 98.00724637681158 - type: precision value: 97.82938076416336 - type: recall value: 98.41897233201581 task: type: BitextMining - dataset: config: rus_Cyrl-lvs_Latn name: MTEB FloresBitextMining (rus_Cyrl-lvs_Latn) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 97.4308300395257 - type: f1 value: 96.61396574440053 - type: main_score value: 96.61396574440053 - type: precision value: 96.2203557312253 - type: recall value: 97.4308300395257 task: type: BitextMining - dataset: config: rus_Cyrl-pan_Guru name: MTEB FloresBitextMining (rus_Cyrl-pan_Guru) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 99.30830039525692 - type: f1 value: 99.07773386034256 - type: main_score value: 99.07773386034256 - type: precision value: 98.96245059288538 - type: recall value: 99.30830039525692 task: type: BitextMining - dataset: config: rus_Cyrl-som_Latn name: MTEB FloresBitextMining (rus_Cyrl-som_Latn) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 87.74703557312253 - type: f1 value: 84.52898550724638 - type: main_score value: 84.52898550724638 - type: precision value: 83.09288537549409 - type: recall value: 87.74703557312253 task: type: BitextMining - dataset: config: rus_Cyrl-tum_Latn name: MTEB FloresBitextMining (rus_Cyrl-tum_Latn) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 87.15415019762845 - type: f1 value: 83.85069640504425 - type: main_score value: 83.85069640504425 - type: precision value: 82.43671183888576 - type: recall value: 87.15415019762845 task: type: BitextMining - dataset: config: taq_Latn-rus_Cyrl name: MTEB FloresBitextMining (taq_Latn-rus_Cyrl) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 28.55731225296443 - type: f1 value: 26.810726360049568 - type: main_score value: 26.810726360049568 - type: precision value: 26.260342858265577 - type: recall value: 28.55731225296443 task: type: BitextMining - dataset: config: war_Latn-rus_Cyrl name: MTEB FloresBitextMining (war_Latn-rus_Cyrl) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - 
type: accuracy value: 94.86166007905138 - type: f1 value: 94.03147083483051 - type: main_score value: 94.03147083483051 - type: precision value: 93.70653606003322 - type: recall value: 94.86166007905138 task: type: BitextMining - dataset: config: arb_Arab-rus_Cyrl name: MTEB FloresBitextMining (arb_Arab-rus_Cyrl) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 96.34387351778656 - type: f1 value: 95.23056653491436 - type: main_score value: 95.23056653491436 - type: precision value: 94.70520421607378 - type: recall value: 96.34387351778656 task: type: BitextMining - dataset: config: bul_Cyrl-rus_Cyrl name: MTEB FloresBitextMining (bul_Cyrl-rus_Cyrl) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 99.90118577075098 - type: f1 value: 99.86824769433464 - type: main_score value: 99.86824769433464 - type: precision value: 99.85177865612648 - type: recall value: 99.90118577075098 task: type: BitextMining - dataset: config: fra_Latn-rus_Cyrl name: MTEB FloresBitextMining (fra_Latn-rus_Cyrl) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 99.2094861660079 - type: f1 value: 98.9459815546772 - type: main_score value: 98.9459815546772 - type: precision value: 98.81422924901186 - type: recall value: 99.2094861660079 task: type: BitextMining - dataset: config: jpn_Jpan-rus_Cyrl name: MTEB FloresBitextMining (jpn_Jpan-rus_Cyrl) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 98.3201581027668 - type: f1 value: 97.76021080368905 - type: main_score value: 97.76021080368905 - type: precision value: 97.48023715415019 - type: recall value: 98.3201581027668 task: type: BitextMining - dataset: config: lij_Latn-rus_Cyrl name: MTEB FloresBitextMining (lij_Latn-rus_Cyrl) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 83.49802371541502 - type: f1 value: 81.64800059239636 - type: main_score value: 81.64800059239636 - type: precision value: 80.9443055878478 - type: recall value: 83.49802371541502 task: type: BitextMining - dataset: config: mya_Mymr-rus_Cyrl name: MTEB FloresBitextMining (mya_Mymr-rus_Cyrl) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 90.21739130434783 - type: f1 value: 88.76776366313682 - type: main_score value: 88.76776366313682 - type: precision value: 88.18370446119435 - type: recall value: 90.21739130434783 task: type: BitextMining - dataset: config: sag_Latn-rus_Cyrl name: MTEB FloresBitextMining (sag_Latn-rus_Cyrl) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 41.699604743083 - type: f1 value: 39.53066322643847 - type: main_score value: 39.53066322643847 - type: precision value: 38.822876239229274 - type: recall value: 41.699604743083 task: type: BitextMining - dataset: config: taq_Tfng-rus_Cyrl name: MTEB FloresBitextMining (taq_Tfng-rus_Cyrl) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 10.67193675889328 - type: f1 value: 9.205744965817951 - type: main_score value: 9.205744965817951 - type: precision value: 8.85195219073817 - type: recall value: 10.67193675889328 task: type: BitextMining - dataset: config: wol_Latn-rus_Cyrl name: MTEB 
FloresBitextMining (wol_Latn-rus_Cyrl) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 63.537549407114625 - type: f1 value: 60.65190727391827 - type: main_score value: 60.65190727391827 - type: precision value: 59.61144833427442 - type: recall value: 63.537549407114625 task: type: BitextMining - dataset: config: arb_Latn-rus_Cyrl name: MTEB FloresBitextMining (arb_Latn-rus_Cyrl) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 13.142292490118576 - type: f1 value: 12.372910318176764 - type: main_score value: 12.372910318176764 - type: precision value: 12.197580895919188 - type: recall value: 13.142292490118576 task: type: BitextMining - dataset: config: cat_Latn-rus_Cyrl name: MTEB FloresBitextMining (cat_Latn-rus_Cyrl) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 99.01185770750988 - type: f1 value: 98.80599472990777 - type: main_score value: 98.80599472990777 - type: precision value: 98.72953133822698 - type: recall value: 99.01185770750988 task: type: BitextMining - dataset: config: fur_Latn-rus_Cyrl name: MTEB FloresBitextMining (fur_Latn-rus_Cyrl) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 81.02766798418972 - type: f1 value: 79.36184294084613 - type: main_score value: 79.36184294084613 - type: precision value: 78.69187826527705 - type: recall value: 81.02766798418972 task: type: BitextMining - dataset: config: kab_Latn-rus_Cyrl name: MTEB FloresBitextMining (kab_Latn-rus_Cyrl) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 34.387351778656125 - type: f1 value: 32.02306921576947 - type: main_score value: 32.02306921576947 - type: precision value: 31.246670347137467 - type: recall value: 34.387351778656125 task: type: BitextMining - dataset: config: lim_Latn-rus_Cyrl name: MTEB FloresBitextMining (lim_Latn-rus_Cyrl) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 78.26086956521739 - type: f1 value: 75.90239449214359 - type: main_score value: 75.90239449214359 - type: precision value: 75.02211430745493 - type: recall value: 78.26086956521739 task: type: BitextMining - dataset: config: nld_Latn-rus_Cyrl name: MTEB FloresBitextMining (nld_Latn-rus_Cyrl) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 99.2094861660079 - type: f1 value: 98.9459815546772 - type: main_score value: 98.9459815546772 - type: precision value: 98.81422924901186 - type: recall value: 99.2094861660079 task: type: BitextMining - dataset: config: san_Deva-rus_Cyrl name: MTEB FloresBitextMining (san_Deva-rus_Cyrl) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 87.94466403162056 - type: f1 value: 86.68928897189767 - type: main_score value: 86.68928897189767 - type: precision value: 86.23822997079216 - type: recall value: 87.94466403162056 task: type: BitextMining - dataset: config: tat_Cyrl-rus_Cyrl name: MTEB FloresBitextMining (tat_Cyrl-rus_Cyrl) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 97.03557312252964 - type: f1 value: 96.4167365353136 - type: main_score value: 96.4167365353136 - type: 
precision value: 96.16847826086958 - type: recall value: 97.03557312252964 task: type: BitextMining - dataset: config: xho_Latn-rus_Cyrl name: MTEB FloresBitextMining (xho_Latn-rus_Cyrl) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 86.95652173913044 - type: f1 value: 85.5506497283435 - type: main_score value: 85.5506497283435 - type: precision value: 84.95270479733395 - type: recall value: 86.95652173913044 task: type: BitextMining - dataset: config: ars_Arab-rus_Cyrl name: MTEB FloresBitextMining (ars_Arab-rus_Cyrl) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 96.6403162055336 - type: f1 value: 95.60935441370223 - type: main_score value: 95.60935441370223 - type: precision value: 95.13339920948617 - type: recall value: 96.6403162055336 task: type: BitextMining - dataset: config: ceb_Latn-rus_Cyrl name: MTEB FloresBitextMining (ceb_Latn-rus_Cyrl) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 95.7509881422925 - type: f1 value: 95.05209198303827 - type: main_score value: 95.05209198303827 - type: precision value: 94.77662283368805 - type: recall value: 95.7509881422925 task: type: BitextMining - dataset: config: fuv_Latn-rus_Cyrl name: MTEB FloresBitextMining (fuv_Latn-rus_Cyrl) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 45.25691699604743 - type: f1 value: 42.285666666742365 - type: main_score value: 42.285666666742365 - type: precision value: 41.21979853402283 - type: recall value: 45.25691699604743 task: type: BitextMining - dataset: config: kac_Latn-rus_Cyrl name: MTEB FloresBitextMining (kac_Latn-rus_Cyrl) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 34.683794466403164 - type: f1 value: 33.3235346229031 - type: main_score value: 33.3235346229031 - type: precision value: 32.94673924616852 - type: recall value: 34.683794466403164 task: type: BitextMining - dataset: config: lin_Latn-rus_Cyrl name: MTEB FloresBitextMining (lin_Latn-rus_Cyrl) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 86.85770750988142 - type: f1 value: 85.1867110799439 - type: main_score value: 85.1867110799439 - type: precision value: 84.53038212173273 - type: recall value: 86.85770750988142 task: type: BitextMining - dataset: config: nno_Latn-rus_Cyrl name: MTEB FloresBitextMining (nno_Latn-rus_Cyrl) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 97.4308300395257 - type: f1 value: 96.78383210991906 - type: main_score value: 96.78383210991906 - type: precision value: 96.51185770750989 - type: recall value: 97.4308300395257 task: type: BitextMining - dataset: config: sat_Olck-rus_Cyrl name: MTEB FloresBitextMining (sat_Olck-rus_Cyrl) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: - type: accuracy value: 1.185770750988142 - type: f1 value: 1.0279253129117258 - type: main_score value: 1.0279253129117258 - type: precision value: 1.0129746819135175 - type: recall value: 1.185770750988142 task: type: BitextMining - dataset: config: tel_Telu-rus_Cyrl name: MTEB FloresBitextMining (tel_Telu-rus_Cyrl) revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e split: devtest type: mteb/flores metrics: 
  - type: accuracy
    value: 98.12252964426878
  - type: f1
    value: 97.61198945981555
  - type: main_score
    value: 97.61198945981555
  - type: precision
    value: 97.401185770751
  - type: recall
    value: 98.12252964426878
  task:
    type: BitextMining
- dataset:
    config: ydd_Hebr-rus_Cyrl
    name: MTEB FloresBitextMining (ydd_Hebr-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 75.8893280632411
  - type: f1
    value: 74.00244008018511
  - type: main_score
    value: 74.00244008018511
  - type: precision
    value: 73.25683020960382
  - type: recall
    value: 75.8893280632411
  task:
    type: BitextMining
- dataset:
    config: ary_Arab-rus_Cyrl
    name: MTEB FloresBitextMining (ary_Arab-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 86.56126482213439
  - type: f1
    value: 83.72796285839765
  - type: main_score
    value: 83.72796285839765
  - type: precision
    value: 82.65014273166447
  - type: recall
    value: 86.56126482213439
  task:
    type: BitextMining
- dataset:
    config: ces_Latn-rus_Cyrl
    name: MTEB FloresBitextMining (ces_Latn-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 99.60474308300395
  - type: f1
    value: 99.4729907773386
  - type: main_score
    value: 99.4729907773386
  - type: precision
    value: 99.40711462450594
  - type: recall
    value: 99.60474308300395
  task:
    type: BitextMining
- dataset:
    config: gaz_Latn-rus_Cyrl
    name: MTEB FloresBitextMining (gaz_Latn-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 42.58893280632411
  - type: f1
    value: 40.75832866805978
  - type: main_score
    value: 40.75832866805978
  - type: precision
    value: 40.14285046917723
  - type: recall
    value: 42.58893280632411
  task:
    type: BitextMining
- dataset:
    config: kam_Latn-rus_Cyrl
    name: MTEB FloresBitextMining (kam_Latn-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 45.25691699604743
  - type: f1
    value: 42.6975518029456
  - type: main_score
    value: 42.6975518029456
  - type: precision
    value: 41.87472710984596
  - type: recall
    value: 45.25691699604743
  task:
    type: BitextMining
- dataset:
    config: lit_Latn-rus_Cyrl
    name: MTEB FloresBitextMining (lit_Latn-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 97.33201581027669
  - type: f1
    value: 96.62384716732542
  - type: main_score
    value: 96.62384716732542
  - type: precision
    value: 96.3175230566535
  - type: recall
    value: 97.33201581027669
  task:
    type: BitextMining
- dataset:
    config: nob_Latn-rus_Cyrl
    name: MTEB FloresBitextMining (nob_Latn-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 98.71541501976284
  - type: f1
    value: 98.30368906455863
  - type: main_score
    value: 98.30368906455863
  - type: precision
    value: 98.10606060606061
  - type: recall
    value: 98.71541501976284
  task:
    type: BitextMining
- dataset:
    config: scn_Latn-rus_Cyrl
    name: MTEB FloresBitextMining (scn_Latn-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 70.45454545454545
  - type: f1
    value: 68.62561022640075
  - type: main_score
    value: 68.62561022640075
  - type: precision
    value: 67.95229103411222
  - type: recall
    value: 70.45454545454545
  task:
    type: BitextMining
- dataset:
    config: tgk_Cyrl-rus_Cyrl
    name: MTEB FloresBitextMining (tgk_Cyrl-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 92.4901185770751
  - type: f1
    value: 91.58514492753623
  - type: main_score
    value: 91.58514492753623
  - type: precision
    value: 91.24759298672342
  - type: recall
    value: 92.4901185770751
  task:
    type: BitextMining
- dataset:
    config: yor_Latn-rus_Cyrl
    name: MTEB FloresBitextMining (yor_Latn-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 67.98418972332016
  - type: f1
    value: 64.72874247330768
  - type: main_score
    value: 64.72874247330768
  - type: precision
    value: 63.450823399938685
  - type: recall
    value: 67.98418972332016
  task:
    type: BitextMining
- dataset:
    config: arz_Arab-rus_Cyrl
    name: MTEB FloresBitextMining (arz_Arab-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 94.56521739130434
  - type: f1
    value: 93.07971014492755
  - type: main_score
    value: 93.07971014492755
  - type: precision
    value: 92.42753623188406
  - type: recall
    value: 94.56521739130434
  task:
    type: BitextMining
- dataset:
    config: cjk_Latn-rus_Cyrl
    name: MTEB FloresBitextMining (cjk_Latn-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 38.63636363636363
  - type: f1
    value: 36.25747140862938
  - type: main_score
    value: 36.25747140862938
  - type: precision
    value: 35.49101355074723
  - type: recall
    value: 38.63636363636363
  task:
    type: BitextMining
- dataset:
    config: gla_Latn-rus_Cyrl
    name: MTEB FloresBitextMining (gla_Latn-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 69.26877470355731
  - type: f1
    value: 66.11797423328613
  - type: main_score
    value: 66.11797423328613
  - type: precision
    value: 64.89369649409694
  - type: recall
    value: 69.26877470355731
  task:
    type: BitextMining
- dataset:
    config: kan_Knda-rus_Cyrl
    name: MTEB FloresBitextMining (kan_Knda-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 98.02371541501977
  - type: f1
    value: 97.51505740636176
  - type: main_score
    value: 97.51505740636176
  - type: precision
    value: 97.30731225296442
  - type: recall
    value: 98.02371541501977
  task:
    type: BitextMining
- dataset:
    config: lmo_Latn-rus_Cyrl
    name: MTEB FloresBitextMining (lmo_Latn-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 73.3201581027668
  - type: f1
    value: 71.06371608677273
  - type: main_score
    value: 71.06371608677273
  - type: precision
    value: 70.26320288266223
  - type: recall
    value: 73.3201581027668
  task:
    type: BitextMining
- dataset:
    config: npi_Deva-rus_Cyrl
    name: MTEB FloresBitextMining (npi_Deva-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 97.82608695652173
  - type: f1
    value: 97.36645107198466
  - type: main_score
    value: 97.36645107198466
  - type: precision
    value: 97.1772068511199
  - type: recall
    value: 97.82608695652173
  task:
    type: BitextMining
- dataset:
    config: shn_Mymr-rus_Cyrl
    name: MTEB FloresBitextMining (shn_Mymr-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 39.426877470355734
  - type: f1
    value: 37.16728785513024
  - type: main_score
    value: 37.16728785513024
  - type: precision
    value: 36.56918548278505
  - type: recall
    value: 39.426877470355734
  task:
    type: BitextMining
- dataset:
    config: tgl_Latn-rus_Cyrl
    name: MTEB FloresBitextMining (tgl_Latn-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 97.92490118577075
  - type: f1
    value: 97.6378693769998
  - type: main_score
    value: 97.6378693769998
  - type: precision
    value: 97.55371440154047
  - type: recall
    value: 97.92490118577075
  task:
    type: BitextMining
- dataset:
    config: yue_Hant-rus_Cyrl
    name: MTEB FloresBitextMining (yue_Hant-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 97.92490118577075
  - type: f1
    value: 97.3833051006964
  - type: main_score
    value: 97.3833051006964
  - type: precision
    value: 97.1590909090909
  - type: recall
    value: 97.92490118577075
  task:
    type: BitextMining
- dataset:
    config: asm_Beng-rus_Cyrl
    name: MTEB FloresBitextMining (asm_Beng-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 92.78656126482213
  - type: f1
    value: 91.76917395296842
  - type: main_score
    value: 91.76917395296842
  - type: precision
    value: 91.38292866553736
  - type: recall
    value: 92.78656126482213
  task:
    type: BitextMining
- dataset:
    config: ckb_Arab-rus_Cyrl
    name: MTEB FloresBitextMining (ckb_Arab-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 80.8300395256917
  - type: f1
    value: 79.17664345468799
  - type: main_score
    value: 79.17664345468799
  - type: precision
    value: 78.5622171683459
  - type: recall
    value: 80.8300395256917
  task:
    type: BitextMining
- dataset:
    config: gle_Latn-rus_Cyrl
    name: MTEB FloresBitextMining (gle_Latn-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 85.86956521739131
  - type: f1
    value: 84.45408265372492
  - type: main_score
    value: 84.45408265372492
  - type: precision
    value: 83.8774340026703
  - type: recall
    value: 85.86956521739131
  task:
    type: BitextMining
- dataset:
    config: kas_Arab-rus_Cyrl
    name: MTEB FloresBitextMining (kas_Arab-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 76.28458498023716
  - type: f1
    value: 74.11216313578267
  - type: main_score
    value: 74.11216313578267
  - type: precision
    value: 73.2491277759584
  - type: recall
    value: 76.28458498023716
  task:
    type: BitextMining
- dataset:
    config: ltg_Latn-rus_Cyrl
    name: MTEB FloresBitextMining (ltg_Latn-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 71.14624505928853
  - type: f1
    value: 68.69245357723618
  - type: main_score
    value: 68.69245357723618
  - type: precision
    value: 67.8135329666459
  - type: recall
    value: 71.14624505928853
  task:
    type: BitextMining
- dataset:
    config: nso_Latn-rus_Cyrl
    name: MTEB FloresBitextMining (nso_Latn-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 87.64822134387352
  - type: f1
    value: 85.98419219986725
  - type: main_score
    value: 85.98419219986725
  - type: precision
    value: 85.32513873917036
  - type: recall
    value: 87.64822134387352
  task:
    type: BitextMining
- dataset:
    config: sin_Sinh-rus_Cyrl
    name: MTEB FloresBitextMining (sin_Sinh-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 97.62845849802372
  - type: f1
    value: 97.10144927536231
  - type: main_score
    value: 97.10144927536231
  - type: precision
    value: 96.87986585219788
  - type: recall
    value: 97.62845849802372
  task:
    type: BitextMining
- dataset:
    config: tha_Thai-rus_Cyrl
    name: MTEB FloresBitextMining (tha_Thai-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 98.71541501976284
  - type: f1
    value: 98.28722002635045
  - type: main_score
    value: 98.28722002635045
  - type: precision
    value: 98.07312252964427
  - type: recall
    value: 98.71541501976284
  task:
    type: BitextMining
- dataset:
    config: zho_Hans-rus_Cyrl
    name: MTEB FloresBitextMining (zho_Hans-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 99.01185770750988
  - type: f1
    value: 98.68247694334651
  - type: main_score
    value: 98.68247694334651
  - type: precision
    value: 98.51778656126481
  - type: recall
    value: 99.01185770750988
  task:
    type: BitextMining
- dataset:
    config: ast_Latn-rus_Cyrl
    name: MTEB FloresBitextMining (ast_Latn-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 95.65217391304348
  - type: f1
    value: 94.90649683857505
  - type: main_score
    value: 94.90649683857505
  - type: precision
    value: 94.61352657004831
  - type: recall
    value: 95.65217391304348
  task:
    type: BitextMining
- dataset:
    config: crh_Latn-rus_Cyrl
    name: MTEB FloresBitextMining (crh_Latn-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 93.08300395256917
  - type: f1
    value: 92.20988998886428
  - type: main_score
    value: 92.20988998886428
  - type: precision
    value: 91.85631013694254
  - type: recall
    value: 93.08300395256917
  task:
    type: BitextMining
- dataset:
    config: glg_Latn-rus_Cyrl
    name: MTEB FloresBitextMining (glg_Latn-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 95.55335968379447
  - type: f1
    value: 95.18006148440931
  - type: main_score
    value: 95.18006148440931
  - type: precision
    value: 95.06540560888386
  - type: recall
    value: 95.55335968379447
  task:
    type: BitextMining
- dataset:
    config: kas_Deva-rus_Cyrl
    name: MTEB FloresBitextMining (kas_Deva-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 55.03952569169961
  - type: f1
    value: 52.19871938895554
  - type: main_score
    value: 52.19871938895554
  - type: precision
    value: 51.17660971469557
  - type: recall
    value: 55.03952569169961
  task:
    type: BitextMining
- dataset:
    config: ltz_Latn-rus_Cyrl
    name: MTEB FloresBitextMining (ltz_Latn-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 87.64822134387352
  - type: f1
    value: 86.64179841897234
  - type: main_score
    value: 86.64179841897234
  - type: precision
    value: 86.30023235431587
  - type: recall
    value: 87.64822134387352
  task:
    type: BitextMining
- dataset:
    config: nus_Latn-rus_Cyrl
    name: MTEB FloresBitextMining (nus_Latn-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 27.4703557312253
  - type: f1
    value: 25.703014277858088
  - type: main_score
    value: 25.703014277858088
  - type: precision
    value: 25.194105476917315
  - type: recall
    value: 27.4703557312253
  task:
    type: BitextMining
- dataset:
    config: slk_Latn-rus_Cyrl
    name: MTEB FloresBitextMining (slk_Latn-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 99.30830039525692
  - type: f1
    value: 99.1106719367589
  - type: main_score
    value: 99.1106719367589
  - type: precision
    value: 99.02832674571805
  - type: recall
    value: 99.30830039525692
  task:
    type: BitextMining
- dataset:
    config: tir_Ethi-rus_Cyrl
    name: MTEB FloresBitextMining (tir_Ethi-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 80.73122529644269
  - type: f1
    value: 78.66903754775608
  - type: main_score
    value: 78.66903754775608
  - type: precision
    value: 77.86431694163612
  - type: recall
    value: 80.73122529644269
  task:
    type: BitextMining
- dataset:
    config: zho_Hant-rus_Cyrl
    name: MTEB FloresBitextMining (zho_Hant-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 98.22134387351778
  - type: f1
    value: 97.66798418972333
  - type: main_score
    value: 97.66798418972333
  - type: precision
    value: 97.40612648221344
  - type: recall
    value: 98.22134387351778
  task:
    type: BitextMining
- dataset:
    config: awa_Deva-rus_Cyrl
    name: MTEB FloresBitextMining (awa_Deva-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 97.5296442687747
  - type: f1
    value: 96.94224857268335
  - type: main_score
    value: 96.94224857268335
  - type: precision
    value: 96.68560606060606
  - type: recall
    value: 97.5296442687747
  task:
    type: BitextMining
- dataset:
    config: cym_Latn-rus_Cyrl
    name: MTEB FloresBitextMining (cym_Latn-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 92.68774703557312
  - type: f1
    value: 91.69854302097961
  - type: main_score
    value: 91.69854302097961
  - type: precision
    value: 91.31236846157795
  - type: recall
    value: 92.68774703557312
  task:
    type: BitextMining
- dataset:
    config: grn_Latn-rus_Cyrl
    name: MTEB FloresBitextMining (grn_Latn-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 64.13043478260869
  - type: f1
    value: 61.850586118740004
  - type: main_score
    value: 61.850586118740004
  - type: precision
    value: 61.0049495186209
  - type: recall
    value: 64.13043478260869
  task:
    type: BitextMining
- dataset:
    config: kat_Geor-rus_Cyrl
    name: MTEB FloresBitextMining (kat_Geor-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 98.02371541501977
  - type: f1
    value: 97.59881422924902
  - type: main_score
    value: 97.59881422924902
  - type: precision
    value: 97.42534036012296
  - type: recall
    value: 98.02371541501977
  task:
    type: BitextMining
- dataset:
    config: lua_Latn-rus_Cyrl
    name: MTEB FloresBitextMining (lua_Latn-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 63.63636363636363
  - type: f1
    value: 60.9709122526128
  - type: main_score
    value: 60.9709122526128
  - type: precision
    value: 60.03915902282226
  - type: recall
    value: 63.63636363636363
  task:
    type: BitextMining
- dataset:
    config: nya_Latn-rus_Cyrl
    name: MTEB FloresBitextMining (nya_Latn-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 89.2292490118577
  - type: f1
    value: 87.59723824473149
  - type: main_score
    value: 87.59723824473149
  - type: precision
    value: 86.90172707867349
  - type: recall
    value: 89.2292490118577
  task:
    type: BitextMining
- dataset:
    config: slv_Latn-rus_Cyrl
    name: MTEB FloresBitextMining (slv_Latn-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 99.01185770750988
  - type: f1
    value: 98.74835309617917
  - type: main_score
    value: 98.74835309617917
  - type: precision
    value: 98.63636363636364
  - type: recall
    value: 99.01185770750988
  task:
    type: BitextMining
- dataset:
    config: tpi_Latn-rus_Cyrl
    name: MTEB FloresBitextMining (tpi_Latn-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 77.37154150197628
  - type: f1
    value: 75.44251611276084
  - type: main_score
    value: 75.44251611276084
  - type: precision
    value: 74.78103665109595
  - type: recall
    value: 77.37154150197628
  task:
    type: BitextMining
- dataset:
    config: zsm_Latn-rus_Cyrl
    name: MTEB FloresBitextMining (zsm_Latn-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 99.2094861660079
  - type: f1
    value: 98.96245059288538
  - type: main_score
    value: 98.96245059288538
  - type: precision
    value: 98.8471673254282
  - type: recall
    value: 99.2094861660079
  task:
    type: BitextMining
- dataset:
    config: ayr_Latn-rus_Cyrl
    name: MTEB FloresBitextMining (ayr_Latn-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 27.766798418972332
  - type: f1
    value: 26.439103195281312
  - type: main_score
    value: 26.439103195281312
  - type: precision
    value: 26.052655604573964
  - type: recall
    value: 27.766798418972332
  task:
    type: BitextMining
- dataset:
    config: dan_Latn-rus_Cyrl
    name: MTEB FloresBitextMining (dan_Latn-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 99.30830039525692
  - type: f1
    value: 99.07773386034255
  - type: main_score
    value: 99.07773386034255
  - type: precision
    value: 98.96245059288538
  - type: recall
    value: 99.30830039525692
  task:
    type: BitextMining
- dataset:
    config: guj_Gujr-rus_Cyrl
    name: MTEB FloresBitextMining (guj_Gujr-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 97.82608695652173
  - type: f1
    value: 97.26449275362317
  - type: main_score
    value: 97.26449275362317
  - type: precision
    value: 97.02498588368154
  - type: recall
    value: 97.82608695652173
  task:
    type: BitextMining
- dataset:
    config: kaz_Cyrl-rus_Cyrl
    name: MTEB FloresBitextMining (kaz_Cyrl-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 97.5296442687747
  - type: f1
    value: 97.03557312252964
  - type: main_score
    value: 97.03557312252964
  - type: precision
    value: 96.85022158342316
  - type: recall
    value: 97.5296442687747
  task:
    type: BitextMining
- dataset:
    config: lug_Latn-rus_Cyrl
    name: MTEB FloresBitextMining (lug_Latn-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 68.57707509881423
  - type: f1
    value: 65.93361605820395
  - type: main_score
    value: 65.93361605820395
  - type: precision
    value: 64.90348248593789
  - type: recall
    value: 68.57707509881423
  task:
    type: BitextMining
- dataset:
    config: oci_Latn-rus_Cyrl
    name: MTEB FloresBitextMining (oci_Latn-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 86.26482213438736
  - type: f1
    value: 85.33176417155623
  - type: main_score
    value: 85.33176417155623
  - type: precision
    value: 85.00208833384637
  - type: recall
    value: 86.26482213438736
  task:
    type: BitextMining
- dataset:
    config: smo_Latn-rus_Cyrl
    name: MTEB FloresBitextMining (smo_Latn-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 77.96442687747036
  - type: f1
    value: 75.70960450188885
  - type: main_score
    value: 75.70960450188885
  - type: precision
    value: 74.8312632736777
  - type: recall
    value: 77.96442687747036
  task:
    type: BitextMining
- dataset:
    config: tsn_Latn-rus_Cyrl
    name: MTEB FloresBitextMining (tsn_Latn-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 84.38735177865613
  - type: f1
    value: 82.13656376349225
  - type: main_score
    value: 82.13656376349225
  - type: precision
    value: 81.16794543904518
  - type: recall
    value: 84.38735177865613
  task:
    type: BitextMining
- dataset:
    config: zul_Latn-rus_Cyrl
    name: MTEB FloresBitextMining (zul_Latn-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 90.21739130434783
  - type: f1
    value: 88.77570602050753
  - type: main_score
    value: 88.77570602050753
  - type: precision
    value: 88.15978104021582
  - type: recall
    value: 90.21739130434783
  task:
    type: BitextMining
- dataset:
    config: azb_Arab-rus_Cyrl
    name: MTEB FloresBitextMining (azb_Arab-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 65.71146245059289
  - type: f1
    value: 64.18825390221271
  - type: main_score
    value: 64.18825390221271
  - type: precision
    value: 63.66811154793568
  - type: recall
    value: 65.71146245059289
  task:
    type: BitextMining
- dataset:
    config: deu_Latn-rus_Cyrl
    name: MTEB FloresBitextMining (deu_Latn-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 99.70355731225297
  - type: f1
    value: 99.60474308300395
  - type: main_score
    value: 99.60474308300395
  - type: precision
    value: 99.55533596837944
  - type: recall
    value: 99.70355731225297
  task:
    type: BitextMining
- dataset:
    config: hat_Latn-rus_Cyrl
    name: MTEB FloresBitextMining (hat_Latn-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 86.7588932806324
  - type: f1
    value: 85.86738623695146
  - type: main_score
    value: 85.86738623695146
  - type: precision
    value: 85.55235467420822
  - type: recall
    value: 86.7588932806324
  task:
    type: BitextMining
- dataset:
    config: kbp_Latn-rus_Cyrl
    name: MTEB FloresBitextMining (kbp_Latn-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 34.88142292490119
  - type: f1
    value: 32.16511669463015
  - type: main_score
    value: 32.16511669463015
  - type: precision
    value: 31.432098549546318
  - type: recall
    value: 34.88142292490119
  task:
    type: BitextMining
- dataset:
    config: luo_Latn-rus_Cyrl
    name: MTEB FloresBitextMining (luo_Latn-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 52.27272727272727
  - type: f1
    value: 49.60489626836975
  - type: main_score
    value: 49.60489626836975
  - type: precision
    value: 48.69639631803339
  - type: recall
    value: 52.27272727272727
  task:
    type: BitextMining
- dataset:
    config: ory_Orya-rus_Cyrl
    name: MTEB FloresBitextMining (ory_Orya-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 97.82608695652173
  - type: f1
    value: 97.27437417654808
  - type: main_score
    value: 97.27437417654808
  - type: precision
    value: 97.04968944099377
  - type: recall
    value: 97.82608695652173
  task:
    type: BitextMining
- dataset:
    config: sna_Latn-rus_Cyrl
    name: MTEB FloresBitextMining (sna_Latn-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 85.37549407114624
  - type: f1
    value: 83.09911316305177
  - type: main_score
    value: 83.09911316305177
  - type: precision
    value: 82.1284950958864
  - type: recall
    value: 85.37549407114624
  task:
    type: BitextMining
- dataset:
    config: tso_Latn-rus_Cyrl
    name: MTEB FloresBitextMining (tso_Latn-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 82.90513833992095
  - type: f1
    value: 80.28290385503824
  - type: main_score
    value: 80.28290385503824
  - type: precision
    value: 79.23672543237761
  - type: recall
    value: 82.90513833992095
  task:
    type: BitextMining
- dataset:
    config: azj_Latn-rus_Cyrl
    name: MTEB FloresBitextMining (azj_Latn-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 98.02371541501977
  - type: f1
    value: 97.49200075287031
  - type: main_score
    value: 97.49200075287031
  - type: precision
    value: 97.266139657444
  - type: recall
    value: 98.02371541501977
  task:
    type: BitextMining
- dataset:
    config: dik_Latn-rus_Cyrl
    name: MTEB FloresBitextMining (dik_Latn-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 38.43873517786561
  - type: f1
    value: 35.78152442955223
  - type: main_score
    value: 35.78152442955223
  - type: precision
    value: 34.82424325078237
  - type: recall
    value: 38.43873517786561
  task:
    type: BitextMining
- dataset:
    config: hau_Latn-rus_Cyrl
    name: MTEB FloresBitextMining (hau_Latn-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 81.42292490118577
  - type: f1
    value: 79.24612283124593
  - type: main_score
    value: 79.24612283124593
  - type: precision
    value: 78.34736070751448
  - type: recall
    value: 81.42292490118577
  task:
    type: BitextMining
- dataset:
    config: kea_Latn-rus_Cyrl
    name: MTEB FloresBitextMining (kea_Latn-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 81.62055335968378
  - type: f1
    value: 80.47015182884748
  - type: main_score
    value: 80.47015182884748
  - type: precision
    value: 80.02671028885862
  - type: recall
    value: 81.62055335968378
  task:
    type: BitextMining
- dataset:
    config: lus_Latn-rus_Cyrl
    name: MTEB FloresBitextMining (lus_Latn-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 62.74703557312253
  - type: f1
    value: 60.53900079111122
  - type: main_score
    value: 60.53900079111122
  - type: precision
    value: 59.80024202850289
  - type: recall
    value: 62.74703557312253
  task:
    type: BitextMining
- dataset:
    config: pag_Latn-rus_Cyrl
    name: MTEB FloresBitextMining (pag_Latn-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 74.01185770750988
  - type: f1
    value: 72.57280648279529
  - type: main_score
    value: 72.57280648279529
  - type: precision
    value: 71.99952968456789
  - type: recall
    value: 74.01185770750988
  task:
    type: BitextMining
- dataset:
    config: snd_Arab-rus_Cyrl
    name: MTEB FloresBitextMining (snd_Arab-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 91.30434782608695
  - type: f1
    value: 90.24653499445358
  - type: main_score
    value: 90.24653499445358
  - type: precision
    value: 89.83134068200232
  - type: recall
    value: 91.30434782608695
  task:
    type: BitextMining
- dataset:
    config: tuk_Latn-rus_Cyrl
    name: MTEB FloresBitextMining (tuk_Latn-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 47.62845849802372
  - type: f1
    value: 45.812928836644254
  - type: main_score
    value: 45.812928836644254
  - type: precision
    value: 45.23713833170355
  - type: recall
    value: 47.62845849802372
  task:
    type: BitextMining
- dataset:
    config: bak_Cyrl-rus_Cyrl
    name: MTEB FloresBitextMining (bak_Cyrl-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 95.8498023715415
  - type: f1
    value: 95.18904459615922
  - type: main_score
    value: 95.18904459615922
  - type: precision
    value: 94.92812441182006
  - type: recall
    value: 95.8498023715415
  task:
    type: BitextMining
- dataset:
    config: dyu_Latn-rus_Cyrl
    name: MTEB FloresBitextMining (dyu_Latn-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 29.64426877470356
  - type: f1
    value: 27.287335193938166
  - type: main_score
    value: 27.287335193938166
  - type: precision
    value: 26.583996026587492
  - type: recall
    value: 29.64426877470356
  task:
    type: BitextMining
- dataset:
    config: heb_Hebr-rus_Cyrl
    name: MTEB FloresBitextMining (heb_Hebr-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 98.91304347826086
  - type: f1
    value: 98.55072463768116
  - type: main_score
    value: 98.55072463768116
  - type: precision
    value: 98.36956521739131
  - type: recall
    value: 98.91304347826086
  task:
    type: BitextMining
- dataset:
    config: khk_Cyrl-rus_Cyrl
    name: MTEB FloresBitextMining (khk_Cyrl-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 95.15810276679841
  - type: f1
    value: 94.44009547764487
  - type: main_score
    value: 94.44009547764487
  - type: precision
    value: 94.16579797014579
  - type: recall
    value: 95.15810276679841
  task:
    type: BitextMining
- dataset:
    config: lvs_Latn-rus_Cyrl
    name: MTEB FloresBitextMining (lvs_Latn-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 97.92490118577075
  - type: f1
    value: 97.51467241585817
  - type: main_score
    value: 97.51467241585817
  - type: precision
    value: 97.36166007905138
  - type: recall
    value: 97.92490118577075
  task:
    type: BitextMining
- dataset:
    config: pan_Guru-rus_Cyrl
    name: MTEB FloresBitextMining (pan_Guru-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 97.92490118577075
  - type: f1
    value: 97.42918313570486
  - type: main_score
    value: 97.42918313570486
  - type: precision
    value: 97.22261434217955
  - type: recall
    value: 97.92490118577075
  task:
    type: BitextMining
- dataset:
    config: som_Latn-rus_Cyrl
    name: MTEB FloresBitextMining (som_Latn-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 75.69169960474308
  - type: f1
    value: 73.7211667065916
  - type: main_score
    value: 73.7211667065916
  - type: precision
    value: 72.95842401892384
  - type: recall
    value: 75.69169960474308
  task:
    type: BitextMining
- dataset:
    config: tum_Latn-rus_Cyrl
    name: MTEB FloresBitextMining (tum_Latn-rus_Cyrl)
    revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
    split: devtest
    type: mteb/flores
  metrics:
  - type: accuracy
    value: 85.67193675889328
  - type: f1
    value: 82.9296066252588
  - type: main_score
    value: 82.9296066252588
  - type: precision
    value: 81.77330225447936
  - type: recall
    value: 85.67193675889328
  task:
    type: BitextMining
- dataset:
    config: default
    name: MTEB GeoreviewClassification (default)
    revision: 3765c0d1de6b7d264bc459433c45e5a75513839c
    split: test
    type: ai-forever/georeview-classification
  metrics:
  - type: accuracy
    value: 44.6630859375
  - type: f1
    value: 42.607425073610536
  - type: f1_weighted
    value: 42.60639474586065
  - type: main_score
    value: 44.6630859375
  task:
    type: Classification
- dataset:
    config: default
    name: MTEB GeoreviewClusteringP2P (default)
    revision: 97a313c8fc85b47f13f33e7e9a95c1ad888c7fec
    split: test
    type: ai-forever/georeview-clustering-p2p
  metrics:
  - type: main_score
    value: 58.15951247070825
  - type: v_measure
    value: 58.15951247070825
  - type: v_measure_std
    value: 0.6739615788288809
  task:
    type: Clustering
- dataset:
    config: default
    name: MTEB HeadlineClassification (default)
    revision: 2fe05ee6b5832cda29f2ef7aaad7b7fe6a3609eb
    split: test
    type: ai-forever/headline-classification
  metrics:
  - type: accuracy
    value: 73.935546875
  - type: f1
    value: 73.8654872186846
  - type: f1_weighted
    value: 73.86733122685095
  - type: main_score
    value: 73.935546875
  task:
    type: Classification
- dataset:
    config: default
    name: MTEB InappropriatenessClassification (default)
    revision: 601651fdc45ef243751676e62dd7a19f491c0285
    split: test
    type: ai-forever/inappropriateness-classification
  metrics:
  - type: accuracy
    value: 59.16015624999999
  - type: ap
    value: 55.52276605836938
  - type: ap_weighted
    value: 55.52276605836938
  - type: f1
    value: 58.614248199637956
  - type: f1_weighted
    value: 58.614248199637956
  - type: main_score
    value: 59.16015624999999
  task:
    type: Classification
- dataset:
    config: default
    name: MTEB KinopoiskClassification (default)
    revision: 5911f26666ac11af46cb9c6849d0dc80a378af24
    split: test
    type: ai-forever/kinopoisk-sentiment-classification
  metrics:
  - type: accuracy
    value: 49.959999999999994
  - type: f1
    value: 48.4900332316098
  - type: f1_weighted
    value: 48.4900332316098
  - type: main_score
    value: 49.959999999999994
  task:
    type: Classification
- dataset:
    config: default
    name: MTEB LanguageClassification (default)
    revision: aa56583bf2bc52b0565770607d6fc3faebecf9e2
    split: test
    type: papluca/language-identification
  metrics:
  - type: accuracy
    value: 71.005859375
  - type: f1
    value: 69.63481100303348
  - type: f1_weighted
    value: 69.64640413409529
  - type: main_score
    value: 71.005859375
  task:
    type: Classification
- dataset:
    config: ru
    name: MTEB MLSUMClusteringP2P (ru)
    revision: b5d54f8f3b61ae17845046286940f03c6bc79bc7
    split: test
    type: reciTAL/mlsum
  metrics:
  - type: main_score
    value: 42.11280087032343
  - type: v_measure
    value: 42.11280087032343
  - type: v_measure_std
    value: 6.7619971723605135
  task:
    type: Clustering
- dataset:
    config: ru
    name: MTEB MLSUMClusteringP2P.v2 (ru)
    revision: b5d54f8f3b61ae17845046286940f03c6bc79bc7
    split: test
    type: reciTAL/mlsum
  metrics:
  - type: main_score
    value: 43.00112546945811
  - type: v_measure
    value: 43.00112546945811
  - type: v_measure_std
    value: 1.4740560414835675
  task:
    type: Clustering
- dataset:
    config: ru
    name: MTEB MLSUMClusteringS2S (ru)
    revision: b5d54f8f3b61ae17845046286940f03c6bc79bc7
    split: test
    type: reciTAL/mlsum
  metrics:
  - type: main_score
    value: 39.81446080575161
  - type: v_measure
    value: 39.81446080575161
  - type: v_measure_std
    value: 7.125661320308298
  task:
    type: Clustering
- dataset:
    config: ru
    name: MTEB MLSUMClusteringS2S.v2 (ru)
    revision: b5d54f8f3b61ae17845046286940f03c6bc79bc7
    split: test
    type: reciTAL/mlsum
  metrics:
  - type: main_score
    value: 39.29659668980239
  - type: v_measure
    value: 39.29659668980239
  - type: v_measure_std
    value: 2.6570502923023094
  task:
    type: Clustering
- dataset:
    config: ru
    name: MTEB MultiLongDocRetrieval (ru)
    revision: d67138e705d963e346253a80e59676ddb418810a
    split: dev
    type: Shitao/MLDR
  metrics:
  - type: main_score
    value: 38.671
  - type: map_at_1
    value: 30.0
  - type: map_at_10
    value: 36.123
  - type: map_at_100
    value: 36.754999999999995
  - type: map_at_1000
    value: 36.806
  - type: map_at_20
    value: 36.464
  - type: map_at_3
    value: 35.25
  - type: map_at_5
    value: 35.8
  - type: mrr_at_1
    value: 30.0
  - type: mrr_at_10
    value: 36.122817460317464
  - type: mrr_at_100
    value: 36.75467016625293
  - type: mrr_at_1000
    value: 36.80612724920882
  - type: mrr_at_20
    value: 36.46359681984682
  - type: mrr_at_3
    value: 35.25
  - type: mrr_at_5
    value: 35.800000000000004
  - type: nauc_map_at_1000_diff1
    value: 55.61987610843598
  - type: nauc_map_at_1000_max
    value: 52.506795017152186
  - type: nauc_map_at_1000_std
    value: 2.95487192066911
  - type: nauc_map_at_100_diff1
    value: 55.598419532054734
  - type: nauc_map_at_100_max
    value: 52.48192017040307
  - type: nauc_map_at_100_std
    value: 2.930120252521189
  - type: nauc_map_at_10_diff1
    value: 56.02309155375198
  - type: nauc_map_at_10_max
    value: 52.739573233234424
  - type: nauc_map_at_10_std
    value: 2.4073432421641545
  - type: nauc_map_at_1_diff1
    value: 52.57059856776112
  - type: nauc_map_at_1_max
    value: 50.55668152952304
  - type: nauc_map_at_1_std
    value: 1.6572084853398048
  - type: nauc_map_at_20_diff1
    value: 55.75769029917031
  - type: nauc_map_at_20_max
    value: 52.53663737242853
  - type: nauc_map_at_20_std
    value: 2.8489192879814
  - type: nauc_map_at_3_diff1
    value: 56.90294128342709
  - type: nauc_map_at_3_max
    value: 53.10608389782041
  - type: nauc_map_at_3_std
    value: 1.4909731657889491
  - type: nauc_map_at_5_diff1
    value: 56.1258315436073
  - type: nauc_map_at_5_max
    value: 52.398078357541564
  - type: nauc_map_at_5_std
    value: 1.8256862015101467
  - type: nauc_mrr_at_1000_diff1
    value: 55.61987610843598
  - type: nauc_mrr_at_1000_max
    value: 52.506795017152186
  - type: nauc_mrr_at_1000_std
    value: 2.95487192066911
  - type: nauc_mrr_at_100_diff1
    value: 55.598419532054734
  - type: nauc_mrr_at_100_max
    value: 52.48192017040307
  - type: nauc_mrr_at_100_std
    value: 2.930120252521189
  - type: nauc_mrr_at_10_diff1
    value: 56.02309155375198
  - type: nauc_mrr_at_10_max
    value: 52.739573233234424
  - type: nauc_mrr_at_10_std
    value: 2.4073432421641545
  - type: nauc_mrr_at_1_diff1
    value: 52.57059856776112
  - type: nauc_mrr_at_1_max
    value: 50.55668152952304
  - type: nauc_mrr_at_1_std
    value: 1.6572084853398048
  - type: nauc_mrr_at_20_diff1
    value: 55.75769029917031
  - type: nauc_mrr_at_20_max
    value: 52.53663737242853
  - type: nauc_mrr_at_20_std
    value: 2.8489192879814
  - type: nauc_mrr_at_3_diff1
    value: 56.90294128342709
  - type: nauc_mrr_at_3_max
    value: 53.10608389782041
  - type: nauc_mrr_at_3_std
    value: 1.4909731657889491
  - type: nauc_mrr_at_5_diff1
    value: 56.1258315436073
  - type: nauc_mrr_at_5_max
    value: 52.398078357541564
  - type: nauc_mrr_at_5_std
    value: 1.8256862015101467
  - type: nauc_ndcg_at_1000_diff1
    value: 55.30733548408918
  - type: nauc_ndcg_at_1000_max
    value: 53.51143366189318
  - type: nauc_ndcg_at_1000_std
    value: 7.133789405525702
  - type: nauc_ndcg_at_100_diff1
    value: 54.32209039488095
  - type: nauc_ndcg_at_100_max
    value: 52.67499334461009
  - type: nauc_ndcg_at_100_std
    value: 6.878823275077807
  - type: nauc_ndcg_at_10_diff1
    value: 56.266780806997716
  - type: nauc_ndcg_at_10_max
    value: 53.52837255793743
  - type: nauc_ndcg_at_10_std
    value: 3.756832592964262
  - type: nauc_ndcg_at_1_diff1
    value: 52.57059856776112
  - type: nauc_ndcg_at_1_max
    value: 50.55668152952304
  - type: nauc_ndcg_at_1_std
    value: 1.6572084853398048
  - type: nauc_ndcg_at_20_diff1
    value: 55.39255420432796
  - type: nauc_ndcg_at_20_max
    value: 52.946114684072235
  - type: nauc_ndcg_at_20_std
    value: 5.414933414031693
  - type: nauc_ndcg_at_3_diff1
    value: 57.92826624996289
  - type: nauc_ndcg_at_3_max
    value: 53.89907760306972
  - type: nauc_ndcg_at_3_std
    value: 1.6661401245309218
  - type: nauc_ndcg_at_5_diff1
    value: 56.47508936029308
  - type: nauc_ndcg_at_5_max
    value: 52.66800998045517
  - type: nauc_ndcg_at_5_std
    value: 2.4127296184140423
  - type: nauc_precision_at_1000_diff1
    value: 57.25924020238401
  - type: nauc_precision_at_1000_max
    value: 65.1132590931922
  - type: nauc_precision_at_1000_std
    value: 40.60788709618145
  - type: nauc_precision_at_100_diff1
    value: 46.49620002554606
  - type: nauc_precision_at_100_max
    value: 53.02960148167071
  - type: nauc_precision_at_100_std
    value: 28.206028867032863
  - type: nauc_precision_at_10_diff1
    value: 56.562744749606765
  - type: nauc_precision_at_10_max
    value: 56.00594967783547
  - type: nauc_precision_at_10_std
    value: 8.368379831645163
  - type: nauc_precision_at_1_diff1
    value: 52.57059856776112
  - type: nauc_precision_at_1_max
    value: 50.55668152952304
  - type: nauc_precision_at_1_std
    value: 1.6572084853398048
  - type: nauc_precision_at_20_diff1
    value: 53.25915754614111
  - type: nauc_precision_at_20_max
    value: 54.03255118937036
  - type: nauc_precision_at_20_std
    value: 15.161611674272718
  - type: nauc_precision_at_3_diff1
    value: 60.726785748943854
  - type: nauc_precision_at_3_max
    value: 56.139896875869354
  - type: nauc_precision_at_3_std
    value: 2.2306901035769893
  - type: nauc_precision_at_5_diff1
    value: 57.1201127525187
  - type: nauc_precision_at_5_max
    value: 53.28665761862506
  - type: nauc_precision_at_5_std
    value: 4.358720050112237
  - type: nauc_recall_at_1000_diff1
    value: 57.259240202383964
  - type: nauc_recall_at_1000_max
    value: 65.11325909319218
  - type: nauc_recall_at_1000_std
    value: 40.60788709618142
  - type: nauc_recall_at_100_diff1
    value: 46.49620002554603
  - type: nauc_recall_at_100_max
    value: 53.02960148167071
  - type: nauc_recall_at_100_std
    value: 28.206028867032835
  - type: nauc_recall_at_10_diff1
    value: 56.562744749606765
  - type: nauc_recall_at_10_max
    value: 56.00594967783549
  - type: nauc_recall_at_10_std
    value: 8.368379831645147
  - type: nauc_recall_at_1_diff1
    value: 52.57059856776112
  - type: nauc_recall_at_1_max
    value: 50.55668152952304
  - type: nauc_recall_at_1_std
    value: 1.6572084853398048
  - type: nauc_recall_at_20_diff1
    value: 53.259157546141154
  - type: nauc_recall_at_20_max
    value: 54.03255118937038
  - type: nauc_recall_at_20_std
    value: 15.16161167427274
  - type: nauc_recall_at_3_diff1
    value: 60.72678574894387
  - type: nauc_recall_at_3_max
    value: 56.13989687586933
  - type: nauc_recall_at_3_std
    value: 2.2306901035770066
  - type: nauc_recall_at_5_diff1
    value: 57.12011275251864
  - type: nauc_recall_at_5_max
    value: 53.28665761862502
  - type: nauc_recall_at_5_std
    value: 4.3587200501122245
  - type: ndcg_at_1
    value: 30.0
  - type: ndcg_at_10
    value: 38.671
  - type: ndcg_at_100
    value: 42.173
  - type: ndcg_at_1000
    value: 44.016
  - type: ndcg_at_20
    value: 39.845000000000006
  - type: ndcg_at_3
    value: 36.863
  - type: ndcg_at_5
    value: 37.874
  - type: precision_at_1
    value: 30.0
  - type: precision_at_10
    value: 4.65
  - type: precision_at_100
    value: 0.64
  - type: precision_at_1000
    value: 0.08
  - type: precision_at_20
    value: 2.55
  - type: precision_at_3
    value: 13.833
  - type: precision_at_5
    value: 8.799999999999999
  - type: recall_at_1
    value: 30.0
  - type: recall_at_10
    value: 46.5
  - type: recall_at_100
    value: 64.0
  - type: recall_at_1000
    value: 79.5
  - type: recall_at_20
    value: 51.0
  - type: recall_at_3
    value: 41.5
  - type: recall_at_5
    value: 44.0
  task:
    type: Retrieval
- dataset:
    config: rus
    name: MTEB MultilingualSentimentClassification (rus)
    revision: 2b9b4d10fc589af67794141fe8cbd3739de1eb33
    split: test
    type: mteb/multilingual-sentiment-classification
  metrics:
  - type: accuracy
    value: 79.52710495963092
  - type: ap
    value: 84.5713457178972
  - type: ap_weighted
    value: 84.5713457178972
  - type: f1
    value: 77.88661181524105
  - type: f1_weighted
    value: 79.87563079922718
  - type: main_score
    value: 79.52710495963092
  task:
    type: Classification
- dataset:
    config: arb_Arab-rus_Cyrl
    name: MTEB NTREXBitextMining (arb_Arab-rus_Cyrl)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 86.47971957936905
  - type: f1
    value: 82.79864240805654
  - type: main_score
    value: 82.79864240805654
  - type: precision
    value: 81.21485800128767
  - type: recall
    value: 86.47971957936905
  task:
    type: BitextMining
- dataset:
    config: bel_Cyrl-rus_Cyrl
    name: MTEB NTREXBitextMining (bel_Cyrl-rus_Cyrl)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 94.84226339509264
  - type: f1
    value: 93.56399067465667
  - type: main_score
    value: 93.56399067465667
  - type: precision
    value: 93.01619095309631
  - type: recall
    value: 94.84226339509264
  task:
    type: BitextMining
- dataset:
    config: ben_Beng-rus_Cyrl
    name: MTEB NTREXBitextMining (ben_Beng-rus_Cyrl)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 92.18828242363544
  - type: f1
    value: 90.42393889620612
  - type: main_score
    value: 90.42393889620612
  - type: precision
    value: 89.67904925153297
  - type: recall
    value: 92.18828242363544
  task:
    type: BitextMining
- dataset:
    config: bos_Latn-rus_Cyrl
    name: MTEB NTREXBitextMining (bos_Latn-rus_Cyrl)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 94.69203805708563
  - type: f1
    value: 93.37172425304624
  - type: main_score
    value: 93.37172425304624
  - type: precision
    value: 92.79204521067315
  - type: recall
    value: 94.69203805708563
  task:
    type: BitextMining
- dataset:
    config: bul_Cyrl-rus_Cyrl
    name: MTEB NTREXBitextMining (bul_Cyrl-rus_Cyrl)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 96.99549323985978
  - type: f1
    value: 96.13086296110833
  - type: main_score
    value: 96.13086296110833
  - type: precision
    value: 95.72441996327827
  - type: recall
    value: 96.99549323985978
  task:
    type: BitextMining
- dataset:
    config: ces_Latn-rus_Cyrl
    name: MTEB NTREXBitextMining (ces_Latn-rus_Cyrl)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 95.94391587381071
  - type: f1
    value: 94.90680465142157
  - type: main_score
    value: 94.90680465142157
  - type: precision
    value: 94.44541812719079
  - type: recall
    value: 95.94391587381071
  task:
    type: BitextMining
- dataset:
    config: deu_Latn-rus_Cyrl
    name: MTEB NTREXBitextMining (deu_Latn-rus_Cyrl)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 96.09414121181773
  - type: f1
    value: 94.94408279085295
  - type: main_score
    value: 94.94408279085295
  - type: precision
    value: 94.41245201135037
  - type: recall
    value: 96.09414121181773
  task:
    type: BitextMining
- dataset:
    config: ell_Grek-rus_Cyrl
    name: MTEB NTREXBitextMining (ell_Grek-rus_Cyrl)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 96.19429143715573
  - type: f1
    value: 95.12101485561676
  - type: main_score
    value: 95.12101485561676
  - type: precision
    value: 94.60440660991488
  - type: recall
    value: 96.19429143715573
  task:
    type: BitextMining
- dataset:
    config: eng_Latn-rus_Cyrl
    name: MTEB NTREXBitextMining (eng_Latn-rus_Cyrl)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 96.49474211316975
  - type: f1
    value: 95.46581777428045
  - type: main_score
    value: 95.46581777428045
  - type: precision
    value: 94.98414288098814
  - type: recall
    value: 96.49474211316975
  task:
    type: BitextMining
- dataset:
    config: fas_Arab-rus_Cyrl
    name: MTEB NTREXBitextMining (fas_Arab-rus_Cyrl)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 94.44166249374061
  - type: f1
    value: 92.92383018972905
  - type: main_score
    value: 92.92383018972905
  - type: precision
    value: 92.21957936905358
  - type: recall
    value: 94.44166249374061
  task:
    type: BitextMining
- dataset:
    config: fin_Latn-rus_Cyrl
    name: MTEB NTREXBitextMining (fin_Latn-rus_Cyrl)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 92.18828242363544
  - type: f1
    value: 90.2980661468393
  - type: main_score
    value: 90.2980661468393
  - type: precision
    value: 89.42580537472877
  - type: recall
    value: 92.18828242363544
  task:
    type: BitextMining
- dataset:
    config: fra_Latn-rus_Cyrl
    name: MTEB NTREXBitextMining (fra_Latn-rus_Cyrl)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 95.84376564847271
  - type: f1
    value: 94.81054915706895
  - type: main_score
    value: 94.81054915706895
  - type: precision
    value: 94.31369276136427
  - type: recall
    value: 95.84376564847271
  task:
    type: BitextMining
- dataset:
    config: heb_Hebr-rus_Cyrl
    name: MTEB NTREXBitextMining (heb_Hebr-rus_Cyrl)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 94.89233850776164
  - type: f1
    value: 93.42513770655985
  - type: main_score
    value: 93.42513770655985
  - type: precision
    value: 92.73493573693875
  - type: recall
    value: 94.89233850776164
  task:
    type: BitextMining
- dataset:
    config: hin_Deva-rus_Cyrl
    name: MTEB NTREXBitextMining (hin_Deva-rus_Cyrl)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 93.23985978968453
  - type: f1
    value: 91.52816526376867
  - type: main_score
    value: 91.52816526376867
  - type: precision
    value: 90.76745946425466
  - type: recall
    value: 93.23985978968453
  task:
    type: BitextMining
- dataset:
    config: hrv_Latn-rus_Cyrl
    name: MTEB NTREXBitextMining (hrv_Latn-rus_Cyrl)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 93.99098647971958
  - type: f1
    value: 92.36354531797697
  - type: main_score
    value: 92.36354531797697
  - type: precision
    value: 91.63228970439788
  - type: recall
    value: 93.99098647971958
  task:
    type: BitextMining
- dataset:
    config: hun_Latn-rus_Cyrl
    name: MTEB NTREXBitextMining (hun_Latn-rus_Cyrl)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 93.64046069103655
  - type: f1
    value: 92.05224503421799
  - type: main_score
    value: 92.05224503421799
  - type: precision
    value: 91.33998616973079
  - type: recall
    value: 93.64046069103655
  task:
    type: BitextMining
- dataset:
    config: ind_Latn-rus_Cyrl
    name: MTEB NTREXBitextMining (ind_Latn-rus_Cyrl)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 91.68753129694541
  - type: f1
    value: 89.26222667334335
  - type: main_score
    value: 89.26222667334335
  - type: precision
    value: 88.14638624603572
  - type: recall
    value: 91.68753129694541
  task:
    type: BitextMining
- dataset:
    config: jpn_Jpan-rus_Cyrl
    name: MTEB NTREXBitextMining (jpn_Jpan-rus_Cyrl)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 91.28693039559339
  - type: f1
    value: 89.21161763348957
  - type: main_score
    value: 89.21161763348957
  - type: precision
    value: 88.31188340952988
  - type: recall
    value: 91.28693039559339
  task:
    type: BitextMining
- dataset:
    config: kor_Hang-rus_Cyrl
    name: MTEB NTREXBitextMining (kor_Hang-rus_Cyrl)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 89.53430145217827
  - type: f1
    value: 86.88322165788365
  - type: main_score
    value: 86.88322165788365
  - type: precision
    value: 85.73950211030831
  - type: recall
    value: 89.53430145217827
  task:
    type: BitextMining
- dataset:
    config: lit_Latn-rus_Cyrl
    name: MTEB NTREXBitextMining (lit_Latn-rus_Cyrl)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 90.28542814221332
  - type: f1
    value: 88.10249103814452
  - type: main_score
    value: 88.10249103814452
  - type: precision
    value: 87.17689323973752
  - type: recall
    value: 90.28542814221332
  task:
    type: BitextMining
- dataset:
    config: mkd_Cyrl-rus_Cyrl
    name: MTEB NTREXBitextMining (mkd_Cyrl-rus_Cyrl)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 95.04256384576865
  - type: f1
    value: 93.65643703650713
  - type: main_score
    value: 93.65643703650713
  - type: precision
    value: 93.02036387915207
  - type: recall
    value: 95.04256384576865
  task:
    type: BitextMining
- dataset:
    config: nld_Latn-rus_Cyrl
    name: MTEB NTREXBitextMining (nld_Latn-rus_Cyrl)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 95.39308963445168
  - type: f1
    value: 94.16207644800535
  - type: main_score
    value: 94.16207644800535
  - type: precision
    value: 93.582516632091
  - type: recall
    value: 95.39308963445168
  task:
    type: BitextMining
- dataset:
    config: pol_Latn-rus_Cyrl
    name: MTEB NTREXBitextMining (pol_Latn-rus_Cyrl)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 95.7436154231347
  - type: f1
    value: 94.5067601402103
  - type: main_score
    value: 94.5067601402103
  - type: precision
    value: 93.91587381071608
  - type: recall
    value: 95.7436154231347
  task:
    type: BitextMining
- dataset:
    config: por_Latn-rus_Cyrl
    name: MTEB NTREXBitextMining (por_Latn-rus_Cyrl)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 65.89884827240861
  - type: f1
    value: 64.61805459419219
  - type: main_score
    value: 64.61805459419219
  - type: precision
    value: 64.07119451106485
  - type: recall
    value: 65.89884827240861
  task:
    type: BitextMining
- dataset:
    config: rus_Cyrl-arb_Arab
    name: MTEB NTREXBitextMining (rus_Cyrl-arb_Arab)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 94.2413620430646
  - type: f1
    value: 92.67663399861698
  - type: main_score
    value: 92.67663399861698
  - type: precision
    value: 91.94625271240193
  - type: recall
    value: 94.2413620430646
  task:
    type: BitextMining
- dataset:
    config: rus_Cyrl-bel_Cyrl
    name: MTEB NTREXBitextMining (rus_Cyrl-bel_Cyrl)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 94.89233850776164
  - type: f1
    value: 93.40343849106993
  - type: main_score
    value: 93.40343849106993
  - type: precision
    value: 92.74077783341679
  - type: recall
    value: 94.89233850776164
  task:
    type: BitextMining
- dataset:
    config: rus_Cyrl-ben_Beng
    name: MTEB NTREXBitextMining (rus_Cyrl-ben_Beng)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 94.2914371557336
  - type: f1
    value: 92.62226673343348
  - type: main_score
    value: 92.62226673343348
  - type: precision
    value: 91.84610248706393
  - type: recall
    value: 94.2914371557336
  task:
    type: BitextMining
- dataset:
    config: rus_Cyrl-bos_Latn
    name: MTEB NTREXBitextMining (rus_Cyrl-bos_Latn)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 95.69354031046569
  - type: f1
    value: 94.50418051319403
  - type: main_score
    value: 94.50418051319403
  - type: precision
    value: 93.95843765648473
  - type: recall
    value: 95.69354031046569
  task:
    type: BitextMining
- dataset:
    config: rus_Cyrl-bul_Cyrl
    name: MTEB NTREXBitextMining (rus_Cyrl-bul_Cyrl)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 95.89384076114172
  - type: f1
    value: 94.66199298948423
  - type: main_score
    value: 94.66199298948423
  - type: precision
    value: 94.08028709731263
  - type: recall
    value: 95.89384076114172
  task:
    type: BitextMining
- dataset:
    config: rus_Cyrl-ces_Latn
    name: MTEB NTREXBitextMining (rus_Cyrl-ces_Latn)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 93.94091136705057
  - type: f1
    value: 92.3746731207923
  - type: main_score
    value: 92.3746731207923
  - type: precision
    value: 91.66207644800535
  - type: recall
    value: 93.94091136705057
  task:
    type: BitextMining
- dataset:
    config: rus_Cyrl-deu_Latn
    name: MTEB NTREXBitextMining (rus_Cyrl-deu_Latn)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 95.94391587381071
  - type: f1
    value: 94.76214321482223
  - type: main_score
    value: 94.76214321482223
  - type: precision
    value: 94.20380570856285
  - type: recall
    value: 95.94391587381071
  task:
    type: BitextMining
- dataset:
    config: rus_Cyrl-ell_Grek
    name: MTEB NTREXBitextMining (rus_Cyrl-ell_Grek)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 95.44316474712068
  - type: f1
    value: 94.14788849941579
  - type: main_score
    value: 94.14788849941579
  - type: precision
    value: 93.54197963612084
  - type: recall
    value: 95.44316474712068
  task:
    type: BitextMining
- dataset:
    config: rus_Cyrl-eng_Latn
    name: MTEB NTREXBitextMining (rus_Cyrl-eng_Latn)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 98.14722083124687
  - type: f1
    value: 97.57135703555333
  - type: main_score
    value: 97.57135703555333
  - type: precision
    value: 97.2959439158738
  - type: recall
    value: 98.14722083124687
  task:
    type: BitextMining
- dataset:
    config: rus_Cyrl-fas_Arab
    name: MTEB NTREXBitextMining (rus_Cyrl-fas_Arab)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 94.64196294441662
  - type: f1
    value: 93.24653647137372
  - type: main_score
    value: 93.24653647137372
  - type: precision
    value: 92.60724419963279
  - type: recall
    value: 94.64196294441662
  task:
    type: BitextMining
- dataset:
    config: rus_Cyrl-fin_Latn
    name: MTEB NTREXBitextMining (rus_Cyrl-fin_Latn)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 87.98197295943916
  - type: f1
    value: 85.23368385912201
  - type: main_score
    value: 85.23368385912201
  - type: precision
    value: 84.08159858835873
  - type: recall
    value: 87.98197295943916
  task:
    type: BitextMining
- dataset:
    config: rus_Cyrl-fra_Latn
    name: MTEB NTREXBitextMining (rus_Cyrl-fra_Latn)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 96.24436654982473
  - type: f1
    value: 95.07093974294774
  - type: main_score
    value: 95.07093974294774
  - type: precision
    value: 94.49591053246536
  - type: recall
    value: 96.24436654982473
  task:
    type: BitextMining
- dataset:
    config: rus_Cyrl-heb_Hebr
    name: MTEB NTREXBitextMining (rus_Cyrl-heb_Hebr)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 91.08662994491738
  - type: f1
    value: 88.5161074945752
  - type: main_score
    value: 88.5161074945752
  - type: precision
    value: 87.36187614755467
  - type: recall
    value: 91.08662994491738
  task:
    type: BitextMining
- dataset:
    config: rus_Cyrl-hin_Deva
    name: MTEB NTREXBitextMining (rus_Cyrl-hin_Deva)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 95.04256384576865
  - type: f1
    value: 93.66382907694876
  - type: main_score
    value: 93.66382907694876
  - type: precision
    value: 93.05291270238692
  - type: recall
    value: 95.04256384576865
  task:
    type: BitextMining
- dataset:
    config: rus_Cyrl-hrv_Latn
    name: MTEB NTREXBitextMining (rus_Cyrl-hrv_Latn)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 95.14271407110667
  - type: f1
    value: 93.7481221832749
  - type: main_score
    value: 93.7481221832749
  - type: precision
    value: 93.10930681736892
  - type: recall
    value: 95.14271407110667
  task:
    type: BitextMining
- dataset:
    config: rus_Cyrl-hun_Latn
    name: MTEB NTREXBitextMining (rus_Cyrl-hun_Latn)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 90.18527791687532
  - type: f1
    value: 87.61415933423946
  - type: main_score
    value: 87.61415933423946
  - type: precision
    value: 86.5166400394242
  - type: recall
    value: 90.18527791687532
  task:
    type: BitextMining
- dataset:
    config: rus_Cyrl-ind_Latn
    name: MTEB NTREXBitextMining (rus_Cyrl-ind_Latn)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 93.69053580370556
  - type: f1
    value: 91.83608746453012
  - type: main_score
    value: 91.83608746453012
  - type: precision
    value: 90.97145718577868
  - type: recall
    value: 93.69053580370556
  task:
    type: BitextMining
- dataset:
    config: rus_Cyrl-jpn_Jpan
    name: MTEB NTREXBitextMining (rus_Cyrl-jpn_Jpan)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 89.48422633950926
  - type: f1
    value: 86.91271033534429
  - type: main_score
    value: 86.91271033534429
  - type: precision
    value: 85.82671626487351
  - type: recall
    value: 89.48422633950926
  task:
    type: BitextMining
- dataset:
    config: rus_Cyrl-kor_Hang
    name: MTEB NTREXBitextMining (rus_Cyrl-kor_Hang)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 88.4827240861292
  - type: f1
    value: 85.35080398375342
  - type: main_score
    value: 85.35080398375342
  - type: precision
    value: 83.9588549490903
  - type: recall
    value: 88.4827240861292
  task:
    type: BitextMining
- dataset:
    config: rus_Cyrl-lit_Latn
    name: MTEB NTREXBitextMining (rus_Cyrl-lit_Latn)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 90.33550325488233
  - type: f1
    value: 87.68831819157307
  - type: main_score
    value: 87.68831819157307
  - type: precision
    value: 86.51524906407231
  - type: recall
    value: 90.33550325488233
  task:
    type: BitextMining
- dataset:
    config: rus_Cyrl-mkd_Cyrl
    name: MTEB NTREXBitextMining (rus_Cyrl-mkd_Cyrl)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 95.94391587381071
  - type: f1
    value: 94.90402270071775
  - type: main_score
    value: 94.90402270071775
  - type: precision
    value: 94.43915873810715
  - type: recall
    value: 95.94391587381071
  task:
    type: BitextMining
- dataset:
    config: rus_Cyrl-nld_Latn
    name: MTEB NTREXBitextMining (rus_Cyrl-nld_Latn)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 92.98948422633951
  - type: f1
    value: 91.04323151393756
  - type: main_score
    value: 91.04323151393756
  - type: precision
    value: 90.14688699716241
  - type: recall
    value: 92.98948422633951
  task:
    type: BitextMining
- dataset:
    config: rus_Cyrl-pol_Latn
    name: MTEB NTREXBitextMining (rus_Cyrl-pol_Latn)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 94.34151226840261
  - type: f1
    value: 92.8726422967785
  - type: main_score
    value: 92.8726422967785
  - type: precision
    value: 92.19829744616925
  - type: recall
    value: 94.34151226840261
  task:
    type: BitextMining
- dataset:
    config: rus_Cyrl-por_Latn
    name: MTEB NTREXBitextMining (rus_Cyrl-por_Latn)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 86.17926890335504
  - type: f1
    value: 82.7304882287356
  - type: main_score
    value: 82.7304882287356
  - type: precision
    value: 81.28162481817964
  - type: recall
    value: 86.17926890335504
  task:
    type: BitextMining
- dataset:
    config: rus_Cyrl-slk_Latn
    name: MTEB NTREXBitextMining (rus_Cyrl-slk_Latn)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 92.7391086629945
  - type: f1
    value: 90.75112669003506
  - type: main_score
    value: 90.75112669003506
  - type: precision
    value: 89.8564513436822
  - type: recall
    value: 92.7391086629945
  task:
    type: BitextMining
- dataset:
    config: rus_Cyrl-slv_Latn
    name: MTEB NTREXBitextMining (rus_Cyrl-slv_Latn)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 92.8893340010015
  - type: f1
    value: 91.05992321816058
  - type: main_score
    value: 91.05992321816058
  - type: precision
    value: 90.22589439715128
  - type: recall
    value: 92.8893340010015
  task:
    type: BitextMining
- dataset:
    config: rus_Cyrl-spa_Latn
    name: MTEB NTREXBitextMining (rus_Cyrl-spa_Latn)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 96.49474211316975
  - type: f1
    value: 95.4715406442998
  - type: main_score
    value: 95.4715406442998
  - type: precision
    value: 94.9799699549324
  - type: recall
    value: 96.49474211316975
  task:
    type: BitextMining
- dataset:
    config: rus_Cyrl-srp_Cyrl
    name: MTEB NTREXBitextMining (rus_Cyrl-srp_Cyrl)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 81.07160741111667
  - type: f1
    value: 76.55687285507015
  - type: main_score
    value: 76.55687285507015
  - type: precision
    value: 74.71886401030116
  - type: recall
    value: 81.07160741111667
  task:
    type: BitextMining
- dataset:
    config: rus_Cyrl-srp_Latn
    name: MTEB NTREXBitextMining (rus_Cyrl-srp_Latn)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 95.14271407110667
  - type: f1
    value: 93.73302377809138
  - type: main_score
    value: 93.73302377809138
  - type: precision
    value: 93.06960440660991
  - type: recall
    value: 95.14271407110667
  task:
    type: BitextMining
- dataset:
    config: rus_Cyrl-swa_Latn
    name: MTEB NTREXBitextMining (rus_Cyrl-swa_Latn)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 94.79218828242364
  - type: f1
    value: 93.25988983475212
  - type: main_score
    value: 93.25988983475212
  - type: precision
    value: 92.53463528626273
  - type: recall
    value: 94.79218828242364
  task:
    type: BitextMining
- dataset:
    config: rus_Cyrl-swe_Latn
    name: MTEB NTREXBitextMining (rus_Cyrl-swe_Latn)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 95.04256384576865
  - type: f1
    value: 93.58704723752295
  - type: main_score
    value: 93.58704723752295
  - type: precision
    value: 92.91437155733601
  - type: recall
    value: 95.04256384576865
  task:
    type: BitextMining
- dataset:
    config: rus_Cyrl-tam_Taml
    name: MTEB NTREXBitextMining (rus_Cyrl-tam_Taml)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 93.28993490235354
  - type: f1
    value: 91.63912535469872
  - type: main_score
    value: 91.63912535469872
  - type: precision
    value: 90.87738750983617
  - type: recall
    value: 93.28993490235354
  task:
    type: BitextMining
- dataset:
    config: rus_Cyrl-tur_Latn
    name: MTEB NTREXBitextMining (rus_Cyrl-tur_Latn)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 93.74061091637456
  - type: f1
    value: 91.96628275746953
  - type: main_score
    value: 91.96628275746953
  - type: precision
    value: 91.15923885828742
  - type: recall
    value: 93.74061091637456
  task:
    type: BitextMining
- dataset:
    config: rus_Cyrl-ukr_Cyrl
    name: MTEB NTREXBitextMining (rus_Cyrl-ukr_Cyrl)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 95.99399098647972
  - type: f1
    value: 94.89567684860624
  - type: main_score
    value: 94.89567684860624
  - type: precision
    value: 94.37072275079286
  - type: recall
    value: 95.99399098647972
  task:
    type: BitextMining
- dataset:
    config: rus_Cyrl-vie_Latn
    name: MTEB NTREXBitextMining (rus_Cyrl-vie_Latn)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 91.4371557336004
  - type: f1
    value: 88.98681355366382
  - type: main_score
    value: 88.98681355366382
  - type: precision
    value: 87.89183775663496
  - type: recall
    value: 91.4371557336004
  task:
    type: BitextMining
- dataset:
    config: rus_Cyrl-zho_Hant
    name: MTEB NTREXBitextMining (rus_Cyrl-zho_Hant)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 92.7891837756635
  - type: f1
    value: 90.79047142141783
  - type: main_score
    value: 90.79047142141783
  - type: precision
    value: 89.86980470706058
  - type: recall
    value: 92.7891837756635
  task:
    type: BitextMining
- dataset:
    config: rus_Cyrl-zul_Latn
    name: MTEB NTREXBitextMining (rus_Cyrl-zul_Latn)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 87.43114672008012
  - type: f1
    value: 84.04618833011422
  - type: main_score
    value: 84.04618833011422
  - type: precision
    value: 82.52259341393041
  - type: recall
    value: 87.43114672008012
  task:
    type: BitextMining
- dataset:
    config: slk_Latn-rus_Cyrl
    name: MTEB NTREXBitextMining (slk_Latn-rus_Cyrl)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 95.34301452178268
  - type: f1
    value: 94.20392493502158
  - type: main_score
    value: 94.20392493502158
  - type: precision
    value: 93.67384409948257
  - type: recall
    value: 95.34301452178268
  task:
    type: BitextMining
- dataset:
    config: slv_Latn-rus_Cyrl
    name: MTEB NTREXBitextMining (slv_Latn-rus_Cyrl)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 92.23835753630446
  - type: f1
    value: 90.5061759305625
  - type: main_score
    value: 90.5061759305625
  - type: precision
    value: 89.74231188051918
  - type: recall
    value: 92.23835753630446
  task:
    type: BitextMining
- dataset:
    config: spa_Latn-rus_Cyrl
    name: MTEB NTREXBitextMining (spa_Latn-rus_Cyrl)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 96.54481722583876
  - type: f1
    value: 95.54665331330328
  - type: main_score
    value: 95.54665331330328
  - type: precision
    value: 95.06342847604739
  - type: recall
    value: 96.54481722583876
  task:
    type: BitextMining
- dataset:
    config: srp_Cyrl-rus_Cyrl
    name: MTEB NTREXBitextMining (srp_Cyrl-rus_Cyrl)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 83.62543815723585
  - type: f1
    value: 80.77095672699816
  - type: main_score
    value: 80.77095672699816
  - type: precision
    value: 79.74674313056886
  - type: recall
    value: 83.62543815723585
  task:
    type: BitextMining
- dataset:
    config: srp_Latn-rus_Cyrl
    name: MTEB NTREXBitextMining (srp_Latn-rus_Cyrl)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 94.44166249374061
  - type: f1
    value: 93.00733206591994
  - type: main_score
    value: 93.00733206591994
  - type: precision
    value: 92.37203026762366
  - type: recall
    value: 94.44166249374061
  task:
    type: BitextMining
- dataset:
    config: swa_Latn-rus_Cyrl
    name: MTEB NTREXBitextMining (swa_Latn-rus_Cyrl)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 90.23535302954431
  - type: f1
    value: 87.89596482636041
  - type: main_score
    value: 87.89596482636041
  - type: precision
    value: 86.87060227370694
  - type: recall
    value: 90.23535302954431
  task:
    type: BitextMining
- dataset:
    config: swe_Latn-rus_Cyrl
    name: MTEB NTREXBitextMining (swe_Latn-rus_Cyrl)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 95.44316474712068
  - type: f1
    value: 94.1896177599733
  - type: main_score
    value: 94.1896177599733
  - type: precision
    value: 93.61542313470206
  - type: recall
    value: 95.44316474712068
  task:
    type: BitextMining
- dataset:
    config: tam_Taml-rus_Cyrl
    name: MTEB NTREXBitextMining (tam_Taml-rus_Cyrl)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 89.68452679018529
  - type: f1
    value: 87.37341160650037
  - type: main_score
    value: 87.37341160650037
  - type: precision
    value: 86.38389402285247
  - type: recall
    value: 89.68452679018529
  task:
    type: BitextMining
- dataset:
    config: tur_Latn-rus_Cyrl
    name: MTEB NTREXBitextMining (tur_Latn-rus_Cyrl)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 93.89083625438157
  - type: f1
    value: 92.33892505424804
  - type: main_score
    value: 92.33892505424804
  - type: precision
    value: 91.63125640842216
  - type: recall
    value: 93.89083625438157
  task:
    type: BitextMining
- dataset:
    config: ukr_Cyrl-rus_Cyrl
    name: MTEB NTREXBitextMining (ukr_Cyrl-rus_Cyrl)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 96.14421632448673
  - type: f1
    value: 95.11028447433054
  - type: main_score
    value: 95.11028447433054
  - type: precision
    value: 94.62944416624937
  - type: recall
    value: 96.14421632448673
  task:
    type: BitextMining
- dataset:
    config: vie_Latn-rus_Cyrl
    name: MTEB NTREXBitextMining (vie_Latn-rus_Cyrl)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 93.79068602904357
  - type: f1
    value: 92.14989150392256
  - type: main_score
    value: 92.14989150392256
  - type: precision
    value: 91.39292271740945
  - type: recall
    value: 93.79068602904357
  task:
    type: BitextMining
- dataset:
    config: zho_Hant-rus_Cyrl
    name: MTEB NTREXBitextMining (zho_Hant-rus_Cyrl)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 89.13370055082625
  - type: f1
    value: 86.51514618639217
  - type: main_score
    value: 86.51514618639217
  - type: precision
    value: 85.383920035898
  - type: recall
    value: 89.13370055082625
  task:
    type: BitextMining
- dataset:
    config: zul_Latn-rus_Cyrl
    name: MTEB NTREXBitextMining (zul_Latn-rus_Cyrl)
    revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
    split: test
    type: mteb/NTREX
  metrics:
  - type: accuracy
    value: 81.17175763645467
  - type: f1
    value: 77.72331766047338
  - type: main_score
    value: 77.72331766047338
  - type: precision
    value: 76.24629555848075
  - type: recall
    value: 81.17175763645467
  task:
    type: BitextMining
- dataset:
    config: ru
    name: MTEB OpusparcusPC (ru)
    revision: 9e9b1f8ef51616073f47f306f7f47dd91663f86a
    split: test.full
    type: GEM/opusparcus
  metrics:
  - type: cosine_accuracy
    value: 73.09136420525657
  - type: cosine_accuracy_threshold
    value: 87.70400881767273
  - type: cosine_ap
    value: 86.51938550599533
  - type: cosine_f1
    value: 80.84358523725834
  - type: cosine_f1_threshold
    value: 86.90648078918457
  - type: cosine_precision
    value: 73.24840764331209
  - type: cosine_recall
    value: 90.19607843137256
  - type: dot_accuracy
    value: 73.09136420525657
  - type: dot_accuracy_threshold
    value: 87.7040147781372
  - type: dot_ap
    value: 86.51934769946833
  - type: dot_f1
    value: 80.84358523725834
  - type: dot_f1_threshold
    value: 86.90648078918457
  - type: dot_precision
    value: 73.24840764331209
  - type: dot_recall
    value: 90.19607843137256
  -
type: euclidean_accuracy value: 73.09136420525657 - type: euclidean_accuracy_threshold value: 49.590304493904114 - type: euclidean_ap value: 86.51934769946833 - type: euclidean_f1 value: 80.84358523725834 - type: euclidean_f1_threshold value: 51.173269748687744 - type: euclidean_precision value: 73.24840764331209 - type: euclidean_recall value: 90.19607843137256 - type: main_score value: 86.51976811057995 - type: manhattan_accuracy value: 73.40425531914893 - type: manhattan_accuracy_threshold value: 757.8278541564941 - type: manhattan_ap value: 86.51976811057995 - type: manhattan_f1 value: 80.92898615453328 - type: manhattan_f1_threshold value: 778.3821105957031 - type: manhattan_precision value: 74.32321575061526 - type: manhattan_recall value: 88.8235294117647 - type: max_ap value: 86.51976811057995 - type: max_f1 value: 80.92898615453328 - type: max_precision value: 74.32321575061526 - type: max_recall value: 90.19607843137256 - type: similarity_accuracy value: 73.09136420525657 - type: similarity_accuracy_threshold value: 87.70400881767273 - type: similarity_ap value: 86.51938550599533 - type: similarity_f1 value: 80.84358523725834 - type: similarity_f1_threshold value: 86.90648078918457 - type: similarity_precision value: 73.24840764331209 - type: similarity_recall value: 90.19607843137256 task: type: PairClassification - dataset: config: russian name: MTEB PublicHealthQA (russian) revision: main split: test type: xhluca/publichealth-qa metrics: - type: main_score value: 79.303 - type: map_at_1 value: 61.538000000000004 - type: map_at_10 value: 74.449 - type: map_at_100 value: 74.687 - type: map_at_1000 value: 74.687 - type: map_at_20 value: 74.589 - type: map_at_3 value: 73.333 - type: map_at_5 value: 74.256 - type: mrr_at_1 value: 61.53846153846154 - type: mrr_at_10 value: 74.44871794871794 - type: mrr_at_100 value: 74.68730304304074 - type: mrr_at_1000 value: 74.68730304304074 - type: mrr_at_20 value: 74.58857808857809 - type: mrr_at_3 value: 73.33333333333333 - type: mrr_at_5 value: 74.25641025641025 - type: nauc_map_at_1000_diff1 value: 61.375798048778506 - type: nauc_map_at_1000_max value: 51.37093181241067 - type: nauc_map_at_1000_std value: 41.735794471409015 - type: nauc_map_at_100_diff1 value: 61.375798048778506 - type: nauc_map_at_100_max value: 51.37093181241067 - type: nauc_map_at_100_std value: 41.735794471409015 - type: nauc_map_at_10_diff1 value: 61.12796039757213 - type: nauc_map_at_10_max value: 51.843445267118014 - type: nauc_map_at_10_std value: 42.243121474939365 - type: nauc_map_at_1_diff1 value: 66.39100974909151 - type: nauc_map_at_1_max value: 44.77165601342703 - type: nauc_map_at_1_std value: 32.38542979413408 - type: nauc_map_at_20_diff1 value: 61.16611123434347 - type: nauc_map_at_20_max value: 51.52605092407306 - type: nauc_map_at_20_std value: 41.94787773313971 - type: nauc_map_at_3_diff1 value: 61.40157474408937 - type: nauc_map_at_3_max value: 51.47230077853947 - type: nauc_map_at_3_std value: 42.63540269440141 - type: nauc_map_at_5_diff1 value: 61.07631147583098 - type: nauc_map_at_5_max value: 52.02626939341523 - type: nauc_map_at_5_std value: 42.511607332150334 - type: nauc_mrr_at_1000_diff1 value: 61.375798048778506 - type: nauc_mrr_at_1000_max value: 51.37093181241067 - type: nauc_mrr_at_1000_std value: 41.735794471409015 - type: nauc_mrr_at_100_diff1 value: 61.375798048778506 - type: nauc_mrr_at_100_max value: 51.37093181241067 - type: nauc_mrr_at_100_std value: 41.735794471409015 - type: nauc_mrr_at_10_diff1 value: 61.12796039757213 - type: 
nauc_mrr_at_10_max value: 51.843445267118014 - type: nauc_mrr_at_10_std value: 42.243121474939365 - type: nauc_mrr_at_1_diff1 value: 66.39100974909151 - type: nauc_mrr_at_1_max value: 44.77165601342703 - type: nauc_mrr_at_1_std value: 32.38542979413408 - type: nauc_mrr_at_20_diff1 value: 61.16611123434347 - type: nauc_mrr_at_20_max value: 51.52605092407306 - type: nauc_mrr_at_20_std value: 41.94787773313971 - type: nauc_mrr_at_3_diff1 value: 61.40157474408937 - type: nauc_mrr_at_3_max value: 51.47230077853947 - type: nauc_mrr_at_3_std value: 42.63540269440141 - type: nauc_mrr_at_5_diff1 value: 61.07631147583098 - type: nauc_mrr_at_5_max value: 52.02626939341523 - type: nauc_mrr_at_5_std value: 42.511607332150334 - type: nauc_ndcg_at_1000_diff1 value: 60.54821630436157 - type: nauc_ndcg_at_1000_max value: 52.584328363863634 - type: nauc_ndcg_at_1000_std value: 43.306961101645946 - type: nauc_ndcg_at_100_diff1 value: 60.54821630436157 - type: nauc_ndcg_at_100_max value: 52.584328363863634 - type: nauc_ndcg_at_100_std value: 43.306961101645946 - type: nauc_ndcg_at_10_diff1 value: 58.800340278109886 - type: nauc_ndcg_at_10_max value: 55.31050771670664 - type: nauc_ndcg_at_10_std value: 46.40931672942848 - type: nauc_ndcg_at_1_diff1 value: 66.39100974909151 - type: nauc_ndcg_at_1_max value: 44.77165601342703 - type: nauc_ndcg_at_1_std value: 32.38542979413408 - type: nauc_ndcg_at_20_diff1 value: 58.88690479697946 - type: nauc_ndcg_at_20_max value: 54.19269661177923 - type: nauc_ndcg_at_20_std value: 45.39305589413174 - type: nauc_ndcg_at_3_diff1 value: 59.61866351451574 - type: nauc_ndcg_at_3_max value: 54.23992718744033 - type: nauc_ndcg_at_3_std value: 46.997379274101 - type: nauc_ndcg_at_5_diff1 value: 58.70739588066225 - type: nauc_ndcg_at_5_max value: 55.76766902539152 - type: nauc_ndcg_at_5_std value: 47.10553115762958 - type: nauc_precision_at_1000_diff1 value: 100.0 - type: nauc_precision_at_1000_max value: 100.0 - type: nauc_precision_at_1000_std value: 100.0 - type: nauc_precision_at_100_diff1 value: .nan - type: nauc_precision_at_100_max value: .nan - type: nauc_precision_at_100_std value: .nan - type: nauc_precision_at_10_diff1 value: 35.72622112397501 - type: nauc_precision_at_10_max value: 89.84297108673948 - type: nauc_precision_at_10_std value: 86.60269192422707 - type: nauc_precision_at_1_diff1 value: 66.39100974909151 - type: nauc_precision_at_1_max value: 44.77165601342703 - type: nauc_precision_at_1_std value: 32.38542979413408 - type: nauc_precision_at_20_diff1 value: 29.188449183726433 - type: nauc_precision_at_20_max value: 86.45729478231968 - type: nauc_precision_at_20_std value: 86.45729478231968 - type: nauc_precision_at_3_diff1 value: 50.294126629236224 - type: nauc_precision_at_3_max value: 68.98223127174579 - type: nauc_precision_at_3_std value: 70.31195520376356 - type: nauc_precision_at_5_diff1 value: 39.648884288124385 - type: nauc_precision_at_5_max value: 86.3409770687935 - type: nauc_precision_at_5_std value: 83.74875373878356 - type: nauc_recall_at_1000_diff1 value: .nan - type: nauc_recall_at_1000_max value: .nan - type: nauc_recall_at_1000_std value: .nan - type: nauc_recall_at_100_diff1 value: .nan - type: nauc_recall_at_100_max value: .nan - type: nauc_recall_at_100_std value: .nan - type: nauc_recall_at_10_diff1 value: 35.72622112397516 - type: nauc_recall_at_10_max value: 89.84297108673968 - type: nauc_recall_at_10_std value: 86.60269192422749 - type: nauc_recall_at_1_diff1 value: 66.39100974909151 - type: nauc_recall_at_1_max value: 44.77165601342703 - 
type: nauc_recall_at_1_std value: 32.38542979413408 - type: nauc_recall_at_20_diff1 value: 29.188449183726323 - type: nauc_recall_at_20_max value: 86.45729478231985 - type: nauc_recall_at_20_std value: 86.45729478231985 - type: nauc_recall_at_3_diff1 value: 50.29412662923603 - type: nauc_recall_at_3_max value: 68.98223127174562 - type: nauc_recall_at_3_std value: 70.31195520376346 - type: nauc_recall_at_5_diff1 value: 39.64888428812445 - type: nauc_recall_at_5_max value: 86.34097706879359 - type: nauc_recall_at_5_std value: 83.74875373878366 - type: ndcg_at_1 value: 61.538000000000004 - type: ndcg_at_10 value: 79.303 - type: ndcg_at_100 value: 80.557 - type: ndcg_at_1000 value: 80.557 - type: ndcg_at_20 value: 79.732 - type: ndcg_at_3 value: 77.033 - type: ndcg_at_5 value: 78.818 - type: precision_at_1 value: 61.538000000000004 - type: precision_at_10 value: 9.385 - type: precision_at_100 value: 1.0 - type: precision_at_1000 value: 0.1 - type: precision_at_20 value: 4.769 - type: precision_at_3 value: 29.231 - type: precision_at_5 value: 18.462 - type: recall_at_1 value: 61.538000000000004 - type: recall_at_10 value: 93.84599999999999 - type: recall_at_100 value: 100.0 - type: recall_at_1000 value: 100.0 - type: recall_at_20 value: 95.38499999999999 - type: recall_at_3 value: 87.69200000000001 - type: recall_at_5 value: 92.308 task: type: Retrieval - dataset: config: default name: MTEB RUParaPhraserSTS (default) revision: 43265056790b8f7c59e0139acb4be0a8dad2c8f4 split: test type: merionum/ru_paraphraser metrics: - type: cosine_pearson value: 64.73554596215753 - type: cosine_spearman value: 70.45849652271855 - type: euclidean_pearson value: 68.08069844834267 - type: euclidean_spearman value: 70.45854872959124 - type: main_score value: 70.45849652271855 - type: manhattan_pearson value: 67.88325986519624 - type: manhattan_spearman value: 70.21131896834542 - type: pearson value: 64.73554596215753 - type: spearman value: 70.45849652271855 task: type: STS - dataset: config: default name: MTEB RiaNewsRetrieval (default) revision: 82374b0bbacda6114f39ff9c5b925fa1512ca5d7 split: test type: ai-forever/ria-news-retrieval metrics: - type: main_score value: 70.00999999999999 - type: map_at_1 value: 55.97 - type: map_at_10 value: 65.59700000000001 - type: map_at_100 value: 66.057 - type: map_at_1000 value: 66.074 - type: map_at_20 value: 65.892 - type: map_at_3 value: 63.74999999999999 - type: map_at_5 value: 64.84299999999999 - type: mrr_at_1 value: 55.88999999999999 - type: mrr_at_10 value: 65.55873015872977 - type: mrr_at_100 value: 66.01891495129716 - type: mrr_at_1000 value: 66.03538391493299 - type: mrr_at_20 value: 65.85351193431555 - type: mrr_at_3 value: 63.7133333333329 - type: mrr_at_5 value: 64.80483333333268 - type: nauc_map_at_1000_diff1 value: 65.95332946436318 - type: nauc_map_at_1000_max value: 28.21204156197811 - type: nauc_map_at_1000_std value: -13.139245767083743 - type: nauc_map_at_100_diff1 value: 65.94763105024367 - type: nauc_map_at_100_max value: 28.212832170078205 - type: nauc_map_at_100_std value: -13.131425849370665 - type: nauc_map_at_10_diff1 value: 65.88455089448388 - type: nauc_map_at_10_max value: 28.13555838776792 - type: nauc_map_at_10_std value: -13.326989827081023 - type: nauc_map_at_1_diff1 value: 69.31275711813979 - type: nauc_map_at_1_max value: 26.386708520283758 - type: nauc_map_at_1_std value: -14.434616447245464 - type: nauc_map_at_20_diff1 value: 65.91227032605677 - type: nauc_map_at_20_max value: 28.20538655600886 - type: nauc_map_at_20_std value: 
-13.191148834410274 - type: nauc_map_at_3_diff1 value: 66.0051677952641 - type: nauc_map_at_3_max value: 28.25443420019022 - type: nauc_map_at_3_std value: -13.893284109029558 - type: nauc_map_at_5_diff1 value: 65.89784348297898 - type: nauc_map_at_5_max value: 28.26449765184183 - type: nauc_map_at_5_std value: -13.506692912805008 - type: nauc_mrr_at_1000_diff1 value: 66.06599513750889 - type: nauc_mrr_at_1000_max value: 28.191556650722287 - type: nauc_mrr_at_1000_std value: -13.098487982930276 - type: nauc_mrr_at_100_diff1 value: 66.0602307977725 - type: nauc_mrr_at_100_max value: 28.19235936624514 - type: nauc_mrr_at_100_std value: -13.09069677716269 - type: nauc_mrr_at_10_diff1 value: 65.99546819079403 - type: nauc_mrr_at_10_max value: 28.11556170120022 - type: nauc_mrr_at_10_std value: -13.286711073897553 - type: nauc_mrr_at_1_diff1 value: 69.49541040517995 - type: nauc_mrr_at_1_max value: 26.354622707276153 - type: nauc_mrr_at_1_std value: -14.358839778104695 - type: nauc_mrr_at_20_diff1 value: 66.02427154257936 - type: nauc_mrr_at_20_max value: 28.18509383563462 - type: nauc_mrr_at_20_std value: -13.150543398429 - type: nauc_mrr_at_3_diff1 value: 66.11258119082618 - type: nauc_mrr_at_3_max value: 28.239510722224004 - type: nauc_mrr_at_3_std value: -13.857249251136269 - type: nauc_mrr_at_5_diff1 value: 66.00633786765626 - type: nauc_mrr_at_5_max value: 28.244875152193032 - type: nauc_mrr_at_5_std value: -13.467206028704434 - type: nauc_ndcg_at_1000_diff1 value: 65.02876183314446 - type: nauc_ndcg_at_1000_max value: 29.109368390197194 - type: nauc_ndcg_at_1000_std value: -11.56514359821697 - type: nauc_ndcg_at_100_diff1 value: 64.85837726893713 - type: nauc_ndcg_at_100_max value: 29.19990133137256 - type: nauc_ndcg_at_100_std value: -11.17450348161257 - type: nauc_ndcg_at_10_diff1 value: 64.53842705024796 - type: nauc_ndcg_at_10_max value: 28.748734006088526 - type: nauc_ndcg_at_10_std value: -12.331395505957063 - type: nauc_ndcg_at_1_diff1 value: 69.31275711813979 - type: nauc_ndcg_at_1_max value: 26.386708520283758 - type: nauc_ndcg_at_1_std value: -14.434616447245464 - type: nauc_ndcg_at_20_diff1 value: 64.59017606740504 - type: nauc_ndcg_at_20_max value: 29.047332048898017 - type: nauc_ndcg_at_20_std value: -11.746548770195954 - type: nauc_ndcg_at_3_diff1 value: 64.87900935713822 - type: nauc_ndcg_at_3_max value: 28.953157521204403 - type: nauc_ndcg_at_3_std value: -13.639947228880942 - type: nauc_ndcg_at_5_diff1 value: 64.61466953479034 - type: nauc_ndcg_at_5_max value: 29.01899321868392 - type: nauc_ndcg_at_5_std value: -12.85356404799802 - type: nauc_precision_at_1000_diff1 value: 48.85481417002382 - type: nauc_precision_at_1000_max value: 57.129837326696375 - type: nauc_precision_at_1000_std value: 37.889524999906435 - type: nauc_precision_at_100_diff1 value: 53.374672326788264 - type: nauc_precision_at_100_max value: 43.819333062207974 - type: nauc_precision_at_100_std value: 21.387064885769362 - type: nauc_precision_at_10_diff1 value: 57.66571169774445 - type: nauc_precision_at_10_max value: 31.779694837242033 - type: nauc_precision_at_10_std value: -6.6248399147180255 - type: nauc_precision_at_1_diff1 value: 69.31275711813979 - type: nauc_precision_at_1_max value: 26.386708520283758 - type: nauc_precision_at_1_std value: -14.434616447245464 - type: nauc_precision_at_20_diff1 value: 55.93570036001682 - type: nauc_precision_at_20_max value: 34.98640173388743 - type: nauc_precision_at_20_std value: -0.36518465159326174 - type: nauc_precision_at_3_diff1 value: 60.94100093991508 
- type: nauc_precision_at_3_max value: 31.422239034357673 - type: nauc_precision_at_3_std value: -12.72576556537896 - type: nauc_precision_at_5_diff1 value: 59.450505195434054 - type: nauc_precision_at_5_max value: 32.07638712418377 - type: nauc_precision_at_5_std value: -10.024459103498598 - type: nauc_recall_at_1000_diff1 value: 48.854814170024184 - type: nauc_recall_at_1000_max value: 57.129837326697164 - type: nauc_recall_at_1000_std value: 37.88952499990672 - type: nauc_recall_at_100_diff1 value: 53.37467232678822 - type: nauc_recall_at_100_max value: 43.8193330622079 - type: nauc_recall_at_100_std value: 21.387064885769398 - type: nauc_recall_at_10_diff1 value: 57.66571169774447 - type: nauc_recall_at_10_max value: 31.779694837242133 - type: nauc_recall_at_10_std value: -6.62483991471789 - type: nauc_recall_at_1_diff1 value: 69.31275711813979 - type: nauc_recall_at_1_max value: 26.386708520283758 - type: nauc_recall_at_1_std value: -14.434616447245464 - type: nauc_recall_at_20_diff1 value: 55.93570036001682 - type: nauc_recall_at_20_max value: 34.986401733887554 - type: nauc_recall_at_20_std value: -0.3651846515931506 - type: nauc_recall_at_3_diff1 value: 60.94100093991499 - type: nauc_recall_at_3_max value: 31.422239034357606 - type: nauc_recall_at_3_std value: -12.725765565378966 - type: nauc_recall_at_5_diff1 value: 59.450505195434125 - type: nauc_recall_at_5_max value: 32.07638712418387 - type: nauc_recall_at_5_std value: -10.024459103498472 - type: ndcg_at_1 value: 55.97 - type: ndcg_at_10 value: 70.00999999999999 - type: ndcg_at_100 value: 72.20100000000001 - type: ndcg_at_1000 value: 72.65599999999999 - type: ndcg_at_20 value: 71.068 - type: ndcg_at_3 value: 66.228 - type: ndcg_at_5 value: 68.191 - type: precision_at_1 value: 55.97 - type: precision_at_10 value: 8.373999999999999 - type: precision_at_100 value: 0.9390000000000001 - type: precision_at_1000 value: 0.097 - type: precision_at_20 value: 4.3950000000000005 - type: precision_at_3 value: 24.46 - type: precision_at_5 value: 15.626000000000001 - type: recall_at_1 value: 55.97 - type: recall_at_10 value: 83.74000000000001 - type: recall_at_100 value: 93.87 - type: recall_at_1000 value: 97.49 - type: recall_at_20 value: 87.89 - type: recall_at_3 value: 73.38 - type: recall_at_5 value: 78.13 task: type: Retrieval - dataset: config: default name: MTEB RuBQReranking (default) revision: 2e96b8f098fa4b0950fc58eacadeb31c0d0c7fa2 split: test type: ai-forever/rubq-reranking metrics: - type: main_score value: 71.44929565043827 - type: map value: 71.44929565043827 - type: mrr value: 77.78391820945014 - type: nAUC_map_diff1 value: 38.140840668080244 - type: nAUC_map_max value: 27.54328688105381 - type: nAUC_map_std value: 16.81572082284672 - type: nAUC_mrr_diff1 value: 44.51350415961509 - type: nAUC_mrr_max value: 36.491182016669754 - type: nAUC_mrr_std value: 22.47139593052269 task: type: Reranking - dataset: config: default name: MTEB RuBQRetrieval (default) revision: e19b6ffa60b3bc248e0b41f4cc37c26a55c2a67b split: test type: ai-forever/rubq-retrieval metrics: - type: main_score value: 68.529 - type: map_at_1 value: 42.529 - type: map_at_10 value: 60.864 - type: map_at_100 value: 61.868 - type: map_at_1000 value: 61.907000000000004 - type: map_at_20 value: 61.596 - type: map_at_3 value: 55.701 - type: map_at_5 value: 58.78 - type: mrr_at_1 value: 60.57919621749409 - type: mrr_at_10 value: 70.55614188149649 - type: mrr_at_100 value: 70.88383816664494 - type: mrr_at_1000 value: 70.89719252668833 - type: mrr_at_20 value: 
70.79839750105347 - type: mrr_at_3 value: 68.4594168636722 - type: mrr_at_5 value: 69.67100078802214 - type: nauc_map_at_1000_diff1 value: 40.67438785660885 - type: nauc_map_at_1000_max value: 32.79981738507424 - type: nauc_map_at_1000_std value: -6.873402600044831 - type: nauc_map_at_100_diff1 value: 40.65643664443284 - type: nauc_map_at_100_max value: 32.81594799919249 - type: nauc_map_at_100_std value: -6.8473246794498195 - type: nauc_map_at_10_diff1 value: 40.39048268484908 - type: nauc_map_at_10_max value: 32.403242161479525 - type: nauc_map_at_10_std value: -7.344413799841244 - type: nauc_map_at_1_diff1 value: 44.36306892906905 - type: nauc_map_at_1_max value: 25.61348630699028 - type: nauc_map_at_1_std value: -8.713074613333902 - type: nauc_map_at_20_diff1 value: 40.530326570124615 - type: nauc_map_at_20_max value: 32.74028319323205 - type: nauc_map_at_20_std value: -7.008180779820569 - type: nauc_map_at_3_diff1 value: 40.764924859364044 - type: nauc_map_at_3_max value: 29.809671682025336 - type: nauc_map_at_3_std value: -9.205620202725564 - type: nauc_map_at_5_diff1 value: 40.88599496021476 - type: nauc_map_at_5_max value: 32.1701894666848 - type: nauc_map_at_5_std value: -7.801251849010623 - type: nauc_mrr_at_1000_diff1 value: 48.64181373540728 - type: nauc_mrr_at_1000_max value: 40.136947990653546 - type: nauc_mrr_at_1000_std value: -7.250260497468805 - type: nauc_mrr_at_100_diff1 value: 48.63349902496212 - type: nauc_mrr_at_100_max value: 40.14510559704008 - type: nauc_mrr_at_100_std value: -7.228702374801103 - type: nauc_mrr_at_10_diff1 value: 48.58580560194813 - type: nauc_mrr_at_10_max value: 40.15075599433366 - type: nauc_mrr_at_10_std value: -7.267928771548688 - type: nauc_mrr_at_1_diff1 value: 51.47535097164919 - type: nauc_mrr_at_1_max value: 38.23579750430856 - type: nauc_mrr_at_1_std value: -9.187785187137633 - type: nauc_mrr_at_20_diff1 value: 48.58688378336222 - type: nauc_mrr_at_20_max value: 40.13408744088299 - type: nauc_mrr_at_20_std value: -7.283132775160146 - type: nauc_mrr_at_3_diff1 value: 48.66833005454742 - type: nauc_mrr_at_3_max value: 40.07987333638038 - type: nauc_mrr_at_3_std value: -7.738819947521418 - type: nauc_mrr_at_5_diff1 value: 48.76536305941537 - type: nauc_mrr_at_5_max value: 40.381929739522185 - type: nauc_mrr_at_5_std value: -7.592858318378928 - type: nauc_ndcg_at_1000_diff1 value: 41.67304442004693 - type: nauc_ndcg_at_1000_max value: 35.84126926253235 - type: nauc_ndcg_at_1000_std value: -4.78971011604655 - type: nauc_ndcg_at_100_diff1 value: 41.16918850185783 - type: nauc_ndcg_at_100_max value: 36.082461962326505 - type: nauc_ndcg_at_100_std value: -4.092442251697269 - type: nauc_ndcg_at_10_diff1 value: 40.300065598615205 - type: nauc_ndcg_at_10_max value: 34.87866296788365 - type: nauc_ndcg_at_10_std value: -5.866529277842453 - type: nauc_ndcg_at_1_diff1 value: 51.74612915209495 - type: nauc_ndcg_at_1_max value: 37.71907067970078 - type: nauc_ndcg_at_1_std value: -9.064124266098696 - type: nauc_ndcg_at_20_diff1 value: 40.493949850214584 - type: nauc_ndcg_at_20_max value: 35.69331503650286 - type: nauc_ndcg_at_20_std value: -4.995310342975443 - type: nauc_ndcg_at_3_diff1 value: 41.269443212112364 - type: nauc_ndcg_at_3_max value: 32.572844460953334 - type: nauc_ndcg_at_3_std value: -9.063015396458791 - type: nauc_ndcg_at_5_diff1 value: 41.37039652522888 - type: nauc_ndcg_at_5_max value: 34.67416011393571 - type: nauc_ndcg_at_5_std value: -7.106845569862319 - type: nauc_precision_at_1000_diff1 value: -9.571769961090155 - type: 
nauc_precision_at_1000_max value: 5.574782583417188 - type: nauc_precision_at_1000_std value: 7.28333847923847 - type: nauc_precision_at_100_diff1 value: -7.7405012003383735 - type: nauc_precision_at_100_max value: 9.67745355070353 - type: nauc_precision_at_100_std value: 9.327890294080992 - type: nauc_precision_at_10_diff1 value: -1.006879647532931 - type: nauc_precision_at_10_max value: 15.899825481231064 - type: nauc_precision_at_10_std value: 4.2284084852153105 - type: nauc_precision_at_1_diff1 value: 51.74612915209495 - type: nauc_precision_at_1_max value: 37.71907067970078 - type: nauc_precision_at_1_std value: -9.064124266098696 - type: nauc_precision_at_20_diff1 value: -4.982301544401409 - type: nauc_precision_at_20_max value: 13.241674471380568 - type: nauc_precision_at_20_std value: 7.052280133821539 - type: nauc_precision_at_3_diff1 value: 15.442614376387374 - type: nauc_precision_at_3_max value: 25.12695418083 - type: nauc_precision_at_3_std value: -3.1150066697920638 - type: nauc_precision_at_5_diff1 value: 8.381026072692444 - type: nauc_precision_at_5_max value: 22.839056540604822 - type: nauc_precision_at_5_std value: 1.5126905486524331 - type: nauc_recall_at_1000_diff1 value: -0.8869709920433502 - type: nauc_recall_at_1000_max value: 45.092324433377264 - type: nauc_recall_at_1000_std value: 62.21264093315108 - type: nauc_recall_at_100_diff1 value: 16.036715011075714 - type: nauc_recall_at_100_max value: 39.79963411771158 - type: nauc_recall_at_100_std value: 28.41850069503361 - type: nauc_recall_at_10_diff1 value: 25.189622794479998 - type: nauc_recall_at_10_max value: 30.82355277039427 - type: nauc_recall_at_10_std value: 0.0964544736531047 - type: nauc_recall_at_1_diff1 value: 44.36306892906905 - type: nauc_recall_at_1_max value: 25.61348630699028 - type: nauc_recall_at_1_std value: -8.713074613333902 - type: nauc_recall_at_20_diff1 value: 20.43424504746087 - type: nauc_recall_at_20_max value: 33.96010554649377 - type: nauc_recall_at_20_std value: 6.900984030301936 - type: nauc_recall_at_3_diff1 value: 33.86531858793492 - type: nauc_recall_at_3_max value: 27.725692256711188 - type: nauc_recall_at_3_std value: -8.533124289305709 - type: nauc_recall_at_5_diff1 value: 32.006964557701686 - type: nauc_recall_at_5_max value: 31.493370659289806 - type: nauc_recall_at_5_std value: -4.8639793547793255 - type: ndcg_at_1 value: 60.461 - type: ndcg_at_10 value: 68.529 - type: ndcg_at_100 value: 71.664 - type: ndcg_at_1000 value: 72.396 - type: ndcg_at_20 value: 70.344 - type: ndcg_at_3 value: 61.550000000000004 - type: ndcg_at_5 value: 64.948 - type: precision_at_1 value: 60.461 - type: precision_at_10 value: 13.28 - type: precision_at_100 value: 1.555 - type: precision_at_1000 value: 0.164 - type: precision_at_20 value: 7.216 - type: precision_at_3 value: 33.077 - type: precision_at_5 value: 23.014000000000003 - type: recall_at_1 value: 42.529 - type: recall_at_10 value: 81.169 - type: recall_at_100 value: 93.154 - type: recall_at_1000 value: 98.18299999999999 - type: recall_at_20 value: 87.132 - type: recall_at_3 value: 63.905 - type: recall_at_5 value: 71.967 task: type: Retrieval - dataset: config: default name: MTEB RuReviewsClassification (default) revision: f6d2c31f4dc6b88f468552750bfec05b4b41b05a split: test type: ai-forever/ru-reviews-classification metrics: - type: accuracy value: 61.17675781250001 - type: f1 value: 60.354535346041374 - type: f1_weighted value: 60.35437313166116 - type: main_score value: 61.17675781250001 task: type: Classification - dataset: config: 
default name: MTEB RuSTSBenchmarkSTS (default) revision: 7cf24f325c6da6195df55bef3d86b5e0616f3018 split: test type: ai-forever/ru-stsbenchmark-sts metrics: - type: cosine_pearson value: 78.1301041727274 - type: cosine_spearman value: 78.08238025421747 - type: euclidean_pearson value: 77.35224254583635 - type: euclidean_spearman value: 78.08235336582496 - type: main_score value: 78.08238025421747 - type: manhattan_pearson value: 77.24138550052075 - type: manhattan_spearman value: 77.98199107904142 - type: pearson value: 78.1301041727274 - type: spearman value: 78.08238025421747 task: type: STS - dataset: config: default name: MTEB RuSciBenchGRNTIClassification (default) revision: 673a610d6d3dd91a547a0d57ae1b56f37ebbf6a1 split: test type: ai-forever/ru-scibench-grnti-classification metrics: - type: accuracy value: 54.990234375 - type: f1 value: 53.537019057131374 - type: f1_weighted value: 53.552745354520766 - type: main_score value: 54.990234375 task: type: Classification - dataset: config: default name: MTEB RuSciBenchGRNTIClusteringP2P (default) revision: 673a610d6d3dd91a547a0d57ae1b56f37ebbf6a1 split: test type: ai-forever/ru-scibench-grnti-classification metrics: - type: main_score value: 50.775228895355106 - type: v_measure value: 50.775228895355106 - type: v_measure_std value: 0.9533571150165796 task: type: Clustering - dataset: config: default name: MTEB RuSciBenchOECDClassification (default) revision: 26c88e99dcaba32bb45d0e1bfc21902337f6d471 split: test type: ai-forever/ru-scibench-oecd-classification metrics: - type: accuracy value: 41.71875 - type: f1 value: 39.289100975858304 - type: f1_weighted value: 39.29257829217775 - type: main_score value: 41.71875 task: type: Classification - dataset: config: default name: MTEB RuSciBenchOECDClusteringP2P (default) revision: 26c88e99dcaba32bb45d0e1bfc21902337f6d471 split: test type: ai-forever/ru-scibench-oecd-classification metrics: - type: main_score value: 45.10904808834516 - type: v_measure value: 45.10904808834516 - type: v_measure_std value: 1.0572643410157534 task: type: Clustering - dataset: config: rus_Cyrl name: MTEB SIB200Classification (rus_Cyrl) revision: a74d7350ea12af010cfb1c21e34f1f81fd2e615b split: test type: mteb/sib200 metrics: - type: accuracy value: 66.36363636363637 - type: f1 value: 64.6940336621617 - type: f1_weighted value: 66.43317771876966 - type: main_score value: 66.36363636363637 task: type: Classification - dataset: config: rus_Cyrl name: MTEB SIB200ClusteringS2S (rus_Cyrl) revision: a74d7350ea12af010cfb1c21e34f1f81fd2e615b split: test type: mteb/sib200 metrics: - type: main_score value: 33.99178497314711 - type: v_measure value: 33.99178497314711 - type: v_measure_std value: 4.036337464043786 task: type: Clustering - dataset: config: ru name: MTEB STS22.v2 (ru) revision: d31f33a128469b20e357535c39b82fb3c3f6f2bd split: test type: mteb/sts22-crosslingual-sts metrics: - type: cosine_pearson value: 50.724322379215934 - type: cosine_spearman value: 59.90449732164651 - type: euclidean_pearson value: 50.227545226784024 - type: euclidean_spearman value: 59.898906527601085 - type: main_score value: 59.90449732164651 - type: manhattan_pearson value: 50.21762139819405 - type: manhattan_spearman value: 59.761039813759 - type: pearson value: 50.724322379215934 - type: spearman value: 59.90449732164651 task: type: STS - dataset: config: ru name: MTEB STSBenchmarkMultilingualSTS (ru) revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c split: dev type: mteb/stsb_multi_mt metrics: - type: cosine_pearson value: 
78.43928769569945 - type: cosine_spearman value: 78.23961768018884 - type: euclidean_pearson value: 77.4718694027985 - type: euclidean_spearman value: 78.23887044760475 - type: main_score value: 78.23961768018884 - type: manhattan_pearson value: 77.34517128089547 - type: manhattan_spearman value: 78.1146477340426 - type: pearson value: 78.43928769569945 - type: spearman value: 78.23961768018884 task: type: STS - dataset: config: default name: MTEB SensitiveTopicsClassification (default) revision: 416b34a802308eac30e4192afc0ff99bb8dcc7f2 split: test type: ai-forever/sensitive-topics-classification metrics: - type: accuracy value: 22.8125 - type: f1 value: 17.31969589593409 - type: lrap value: 33.82412380642287 - type: main_score value: 22.8125 task: type: MultilabelClassification - dataset: config: default name: MTEB TERRa (default) revision: 7b58f24536063837d644aab9a023c62199b2a612 split: dev type: ai-forever/terra-pairclassification metrics: - type: cosine_accuracy value: 57.32899022801303 - type: cosine_accuracy_threshold value: 85.32201051712036 - type: cosine_ap value: 55.14264553720072 - type: cosine_f1 value: 66.83544303797468 - type: cosine_f1_threshold value: 85.32201051712036 - type: cosine_precision value: 54.54545454545454 - type: cosine_recall value: 86.27450980392157 - type: dot_accuracy value: 57.32899022801303 - type: dot_accuracy_threshold value: 85.32201051712036 - type: dot_ap value: 55.14264553720072 - type: dot_f1 value: 66.83544303797468 - type: dot_f1_threshold value: 85.32201051712036 - type: dot_precision value: 54.54545454545454 - type: dot_recall value: 86.27450980392157 - type: euclidean_accuracy value: 57.32899022801303 - type: euclidean_accuracy_threshold value: 54.18117046356201 - type: euclidean_ap value: 55.14264553720072 - type: euclidean_f1 value: 66.83544303797468 - type: euclidean_f1_threshold value: 54.18117046356201 - type: euclidean_precision value: 54.54545454545454 - type: euclidean_recall value: 86.27450980392157 - type: main_score value: 55.14264553720072 - type: manhattan_accuracy value: 57.32899022801303 - type: manhattan_accuracy_threshold value: 828.8480758666992 - type: manhattan_ap value: 55.077974053622555 - type: manhattan_f1 value: 66.82352941176471 - type: manhattan_f1_threshold value: 885.6784820556641 - type: manhattan_precision value: 52.20588235294118 - type: manhattan_recall value: 92.81045751633987 - type: max_ap value: 55.14264553720072 - type: max_f1 value: 66.83544303797468 - type: max_precision value: 54.54545454545454 - type: max_recall value: 92.81045751633987 - type: similarity_accuracy value: 57.32899022801303 - type: similarity_accuracy_threshold value: 85.32201051712036 - type: similarity_ap value: 55.14264553720072 - type: similarity_f1 value: 66.83544303797468 - type: similarity_f1_threshold value: 85.32201051712036 - type: similarity_precision value: 54.54545454545454 - type: similarity_recall value: 86.27450980392157 task: type: PairClassification - dataset: config: ru name: MTEB XNLI (ru) revision: 09698e0180d87dc247ca447d3a1248b931ac0cdb split: test type: mteb/xnli metrics: - type: cosine_accuracy value: 67.6923076923077 - type: cosine_accuracy_threshold value: 87.6681923866272 - type: cosine_ap value: 73.18693800863593 - type: cosine_f1 value: 70.40641099026904 - type: cosine_f1_threshold value: 85.09706258773804 - type: cosine_precision value: 57.74647887323944 - type: cosine_recall value: 90.17595307917888 - type: dot_accuracy value: 67.6923076923077 - type: dot_accuracy_threshold value: 87.66818642616272 - type: 
dot_ap value: 73.18693800863593 - type: dot_f1 value: 70.40641099026904 - type: dot_f1_threshold value: 85.09706258773804 - type: dot_precision value: 57.74647887323944 - type: dot_recall value: 90.17595307917888 - type: euclidean_accuracy value: 67.6923076923077 - type: euclidean_accuracy_threshold value: 49.662476778030396 - type: euclidean_ap value: 73.18693800863593 - type: euclidean_f1 value: 70.40641099026904 - type: euclidean_f1_threshold value: 54.59475517272949 - type: euclidean_precision value: 57.74647887323944 - type: euclidean_recall value: 90.17595307917888 - type: main_score value: 73.18693800863593 - type: manhattan_accuracy value: 67.54578754578755 - type: manhattan_accuracy_threshold value: 777.1001815795898 - type: manhattan_ap value: 72.98861474758783 - type: manhattan_f1 value: 70.6842435655995 - type: manhattan_f1_threshold value: 810.3782653808594 - type: manhattan_precision value: 61.80021953896817 - type: manhattan_recall value: 82.55131964809385 - type: max_ap value: 73.18693800863593 - type: max_f1 value: 70.6842435655995 - type: max_precision value: 61.80021953896817 - type: max_recall value: 90.17595307917888 - type: similarity_accuracy value: 67.6923076923077 - type: similarity_accuracy_threshold value: 87.6681923866272 - type: similarity_ap value: 73.18693800863593 - type: similarity_f1 value: 70.40641099026904 - type: similarity_f1_threshold value: 85.09706258773804 - type: similarity_precision value: 57.74647887323944 - type: similarity_recall value: 90.17595307917888 task: type: PairClassification - dataset: config: russian name: MTEB XNLIV2 (russian) revision: 5b7d477a8c62cdd18e2fed7e015497c20b4371ad split: test type: mteb/xnli2.0-multi-pair metrics: - type: cosine_accuracy value: 68.35164835164835 - type: cosine_accuracy_threshold value: 88.48621845245361 - type: cosine_ap value: 73.10205506215699 - type: cosine_f1 value: 71.28712871287128 - type: cosine_f1_threshold value: 87.00399398803711 - type: cosine_precision value: 61.67023554603854 - type: cosine_recall value: 84.4574780058651 - type: dot_accuracy value: 68.35164835164835 - type: dot_accuracy_threshold value: 88.48622441291809 - type: dot_ap value: 73.10191110714706 - type: dot_f1 value: 71.28712871287128 - type: dot_f1_threshold value: 87.00399398803711 - type: dot_precision value: 61.67023554603854 - type: dot_recall value: 84.4574780058651 - type: euclidean_accuracy value: 68.35164835164835 - type: euclidean_accuracy_threshold value: 47.98704385757446 - type: euclidean_ap value: 73.10205506215699 - type: euclidean_f1 value: 71.28712871287128 - type: euclidean_f1_threshold value: 50.982362031936646 - type: euclidean_precision value: 61.67023554603854 - type: euclidean_recall value: 84.4574780058651 - type: main_score value: 73.10205506215699 - type: manhattan_accuracy value: 67.91208791208791 - type: manhattan_accuracy_threshold value: 746.1360931396484 - type: manhattan_ap value: 72.8954736175069 - type: manhattan_f1 value: 71.1297071129707 - type: manhattan_f1_threshold value: 808.0789566040039 - type: manhattan_precision value: 60.04036326942482 - type: manhattan_recall value: 87.2434017595308 - type: max_ap value: 73.10205506215699 - type: max_f1 value: 71.28712871287128 - type: max_precision value: 61.67023554603854 - type: max_recall value: 87.2434017595308 - type: similarity_accuracy value: 68.35164835164835 - type: similarity_accuracy_threshold value: 88.48621845245361 - type: similarity_ap value: 73.10205506215699 - type: similarity_f1 value: 71.28712871287128 - type: 
similarity_f1_threshold value: 87.00399398803711 - type: similarity_precision value: 61.67023554603854 - type: similarity_recall value: 84.4574780058651 task: type: PairClassification - dataset: config: ru name: MTEB XQuADRetrieval (ru) revision: 51adfef1c1287aab1d2d91b5bead9bcfb9c68583 split: validation type: google/xquad metrics: - type: main_score value: 95.705 - type: map_at_1 value: 90.802 - type: map_at_10 value: 94.427 - type: map_at_100 value: 94.451 - type: map_at_1000 value: 94.451 - type: map_at_20 value: 94.446 - type: map_at_3 value: 94.121 - type: map_at_5 value: 94.34 - type: mrr_at_1 value: 90.80168776371308 - type: mrr_at_10 value: 94.42659567343111 - type: mrr_at_100 value: 94.45099347521871 - type: mrr_at_1000 value: 94.45099347521871 - type: mrr_at_20 value: 94.44574530017569 - type: mrr_at_3 value: 94.12095639943743 - type: mrr_at_5 value: 94.34036568213786 - type: nauc_map_at_1000_diff1 value: 87.40573202946949 - type: nauc_map_at_1000_max value: 65.56220344468791 - type: nauc_map_at_1000_std value: 8.865583291735863 - type: nauc_map_at_100_diff1 value: 87.40573202946949 - type: nauc_map_at_100_max value: 65.56220344468791 - type: nauc_map_at_100_std value: 8.865583291735863 - type: nauc_map_at_10_diff1 value: 87.43657080570291 - type: nauc_map_at_10_max value: 65.71295628534446 - type: nauc_map_at_10_std value: 9.055399339099655 - type: nauc_map_at_1_diff1 value: 88.08395824560428 - type: nauc_map_at_1_max value: 62.92813192908893 - type: nauc_map_at_1_std value: 6.738987385482432 - type: nauc_map_at_20_diff1 value: 87.40979818966589 - type: nauc_map_at_20_max value: 65.59474346926105 - type: nauc_map_at_20_std value: 8.944420599300914 - type: nauc_map_at_3_diff1 value: 86.97771892161035 - type: nauc_map_at_3_max value: 66.14330030122467 - type: nauc_map_at_3_std value: 8.62516327793521 - type: nauc_map_at_5_diff1 value: 87.30273362211798 - type: nauc_map_at_5_max value: 66.1522476584607 - type: nauc_map_at_5_std value: 9.780940862679724 - type: nauc_mrr_at_1000_diff1 value: 87.40573202946949 - type: nauc_mrr_at_1000_max value: 65.56220344468791 - type: nauc_mrr_at_1000_std value: 8.865583291735863 - type: nauc_mrr_at_100_diff1 value: 87.40573202946949 - type: nauc_mrr_at_100_max value: 65.56220344468791 - type: nauc_mrr_at_100_std value: 8.865583291735863 - type: nauc_mrr_at_10_diff1 value: 87.43657080570291 - type: nauc_mrr_at_10_max value: 65.71295628534446 - type: nauc_mrr_at_10_std value: 9.055399339099655 - type: nauc_mrr_at_1_diff1 value: 88.08395824560428 - type: nauc_mrr_at_1_max value: 62.92813192908893 - type: nauc_mrr_at_1_std value: 6.738987385482432 - type: nauc_mrr_at_20_diff1 value: 87.40979818966589 - type: nauc_mrr_at_20_max value: 65.59474346926105 - type: nauc_mrr_at_20_std value: 8.944420599300914 - type: nauc_mrr_at_3_diff1 value: 86.97771892161035 - type: nauc_mrr_at_3_max value: 66.14330030122467 - type: nauc_mrr_at_3_std value: 8.62516327793521 - type: nauc_mrr_at_5_diff1 value: 87.30273362211798 - type: nauc_mrr_at_5_max value: 66.1522476584607 - type: nauc_mrr_at_5_std value: 9.780940862679724 - type: nauc_ndcg_at_1000_diff1 value: 87.37823158814116 - type: nauc_ndcg_at_1000_max value: 66.00874244792789 - type: nauc_ndcg_at_1000_std value: 9.479929342875067 - type: nauc_ndcg_at_100_diff1 value: 87.37823158814116 - type: nauc_ndcg_at_100_max value: 66.00874244792789 - type: nauc_ndcg_at_100_std value: 9.479929342875067 - type: nauc_ndcg_at_10_diff1 value: 87.54508467181488 - type: nauc_ndcg_at_10_max value: 66.88756470312894 - type: 
nauc_ndcg_at_10_std value: 10.812624405397022 - type: nauc_ndcg_at_1_diff1 value: 88.08395824560428 - type: nauc_ndcg_at_1_max value: 62.92813192908893 - type: nauc_ndcg_at_1_std value: 6.738987385482432 - type: nauc_ndcg_at_20_diff1 value: 87.42097894104597 - type: nauc_ndcg_at_20_max value: 66.37031898778943 - type: nauc_ndcg_at_20_std value: 10.34862538094813 - type: nauc_ndcg_at_3_diff1 value: 86.50039907157999 - type: nauc_ndcg_at_3_max value: 67.97798288917929 - type: nauc_ndcg_at_3_std value: 10.162410286746852 - type: nauc_ndcg_at_5_diff1 value: 87.13322094568531 - type: nauc_ndcg_at_5_max value: 68.08576118683821 - type: nauc_ndcg_at_5_std value: 12.639637379592855 - type: nauc_precision_at_1000_diff1 value: 100.0 - type: nauc_precision_at_1000_max value: 100.0 - type: nauc_precision_at_1000_std value: 100.0 - type: nauc_precision_at_100_diff1 value: 100.0 - type: nauc_precision_at_100_max value: 100.0 - type: nauc_precision_at_100_std value: 100.0 - type: nauc_precision_at_10_diff1 value: 93.46711505595813 - type: nauc_precision_at_10_max value: 100.0 - type: nauc_precision_at_10_std value: 65.42573557179935 - type: nauc_precision_at_1_diff1 value: 88.08395824560428 - type: nauc_precision_at_1_max value: 62.92813192908893 - type: nauc_precision_at_1_std value: 6.738987385482432 - type: nauc_precision_at_20_diff1 value: 91.28948674127133 - type: nauc_precision_at_20_max value: 100.0 - type: nauc_precision_at_20_std value: 90.74278258632364 - type: nauc_precision_at_3_diff1 value: 82.64606115071832 - type: nauc_precision_at_3_max value: 83.26201582412921 - type: nauc_precision_at_3_std value: 23.334013491433762 - type: nauc_precision_at_5_diff1 value: 85.0867539350284 - type: nauc_precision_at_5_max value: 96.57011448655484 - type: nauc_precision_at_5_std value: 56.46869543426768 - type: nauc_recall_at_1000_diff1 value: .nan - type: nauc_recall_at_1000_max value: .nan - type: nauc_recall_at_1000_std value: .nan - type: nauc_recall_at_100_diff1 value: .nan - type: nauc_recall_at_100_max value: .nan - type: nauc_recall_at_100_std value: .nan - type: nauc_recall_at_10_diff1 value: 93.46711505595623 - type: nauc_recall_at_10_max value: 100.0 - type: nauc_recall_at_10_std value: 65.42573557180279 - type: nauc_recall_at_1_diff1 value: 88.08395824560428 - type: nauc_recall_at_1_max value: 62.92813192908893 - type: nauc_recall_at_1_std value: 6.738987385482432 - type: nauc_recall_at_20_diff1 value: 91.28948674127474 - type: nauc_recall_at_20_max value: 100.0 - type: nauc_recall_at_20_std value: 90.74278258632704 - type: nauc_recall_at_3_diff1 value: 82.64606115071967 - type: nauc_recall_at_3_max value: 83.26201582413023 - type: nauc_recall_at_3_std value: 23.334013491434007 - type: nauc_recall_at_5_diff1 value: 85.08675393502854 - type: nauc_recall_at_5_max value: 96.57011448655487 - type: nauc_recall_at_5_std value: 56.46869543426658 - type: ndcg_at_1 value: 90.802 - type: ndcg_at_10 value: 95.705 - type: ndcg_at_100 value: 95.816 - type: ndcg_at_1000 value: 95.816 - type: ndcg_at_20 value: 95.771 - type: ndcg_at_3 value: 95.11699999999999 - type: ndcg_at_5 value: 95.506 - type: precision_at_1 value: 90.802 - type: precision_at_10 value: 9.949 - type: precision_at_100 value: 1.0 - type: precision_at_1000 value: 0.1 - type: precision_at_20 value: 4.987 - type: precision_at_3 value: 32.658 - type: precision_at_5 value: 19.781000000000002 - type: recall_at_1 value: 90.802 - type: recall_at_10 value: 99.494 - type: recall_at_100 value: 100.0 - type: recall_at_1000 value: 100.0 - type: 
recall_at_20 value: 99.747 - type: recall_at_3 value: 97.975 - type: recall_at_5 value: 98.90299999999999 task: type: Retrieval
tags:
- mteb
- Sentence Transformers
- sentence-similarity
- sentence-transformers
---

## Multilingual-E5-small

[Multilingual E5 Text Embeddings: A Technical Report](https://arxiv.org/pdf/2402.05672).
Liang Wang, Nan Yang, Xiaolong Huang, Linjun Yang, Rangan Majumder, Furu Wei, arXiv 2024

This model has 12 layers and the embedding size is 384.

## Usage

Below is an example to encode queries and passages from the MS-MARCO passage ranking dataset.

```python
import torch.nn.functional as F

from torch import Tensor
from transformers import AutoTokenizer, AutoModel


def average_pool(last_hidden_states: Tensor,
                 attention_mask: Tensor) -> Tensor:
    last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
    return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]


# Each input text should start with "query: " or "passage: ", even for non-English texts.
# For tasks other than retrieval, you can simply use the "query: " prefix.
input_texts = ['query: how much protein should a female eat',
               'query: 南瓜的家常做法',
               "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
               "passage: 1.清炒南瓜丝 原料:嫩南瓜半个 调料:葱、盐、白糖、鸡精 做法: 1、南瓜用刀薄薄的削去表面一层皮,用勺子刮去瓤 2、擦成细丝(没有擦菜板就用刀慢慢切成细丝) 3、锅烧热放油,入葱花煸出香味 4、入南瓜丝快速翻炒一分钟左右,放盐、一点白糖和鸡精调味出锅 2.香葱炒南瓜 原料:南瓜1只 调料:香葱、蒜末、橄榄油、盐 做法: 1、将南瓜去皮,切成片 2、油锅8成热后,将蒜末放入爆香 3、爆香后,将南瓜片放入,翻炒 4、在翻炒的同时,可以不时地往锅里加水,但不要太多 5、放入盐,炒匀 6、南瓜差不多软和绵了之后,就可以关火 7、撒入香葱,即可出锅"]

tokenizer = AutoTokenizer.from_pretrained('intfloat/multilingual-e5-small')
model = AutoModel.from_pretrained('intfloat/multilingual-e5-small')

# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')

outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])

# normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
```

## Supported Languages

This model is initialized from [microsoft/Multilingual-MiniLM-L12-H384](https://huggingface.co/microsoft/Multilingual-MiniLM-L12-H384) and continually trained on a mixture of multilingual datasets. It supports 100 languages from xlm-roberta, but low-resource languages may see performance degradation.
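Not part of the original card: a minimal sketch of cross-lingual similarity, applying the symmetric-task "query: " prefix to both sides as the comments in the usage example prescribe. The English/Chinese sentence pair is invented for illustration.

```python
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained('intfloat/multilingual-e5-small')
model = AutoModel.from_pretrained('intfloat/multilingual-e5-small')

# Symmetric task: both inputs take the "query: " prefix.
texts = ['query: The weather is lovely today.',
         'query: 今天天气很好。']  # an English/Chinese paraphrase pair, made up for illustration

batch = tokenizer(texts, max_length=512, padding=True, truncation=True, return_tensors='pt')

# Average pooling over non-padding tokens, as in the usage example above
last_hidden = model(**batch).last_hidden_state.masked_fill(
    ~batch['attention_mask'][..., None].bool(), 0.0)
embeddings = last_hidden.sum(dim=1) / batch['attention_mask'].sum(dim=1)[..., None]

embeddings = F.normalize(embeddings, p=2, dim=1)
print((embeddings[0] @ embeddings[1]).item())  # cross-lingual cosine similarity
```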
## Training Details

**Initialization**: [microsoft/Multilingual-MiniLM-L12-H384](https://huggingface.co/microsoft/Multilingual-MiniLM-L12-H384)

**First stage**: contrastive pre-training with weak supervision

| Dataset                                                                                                | Weak supervision                      | # of text pairs |
|--------------------------------------------------------------------------------------------------------|---------------------------------------|-----------------|
| Filtered [mC4](https://huggingface.co/datasets/mc4)                                                      | (title, page content)                 | 1B              |
| [CC News](https://huggingface.co/datasets/intfloat/multilingual_cc_news)                                 | (title, news content)                 | 400M            |
| [NLLB](https://huggingface.co/datasets/allenai/nllb)                                                     | translation pairs                     | 2.4B            |
| [Wikipedia](https://huggingface.co/datasets/intfloat/wikipedia)                                          | (hierarchical section title, passage) | 150M            |
| Filtered [Reddit](https://www.reddit.com/)                                                               | (comment, response)                   | 800M            |
| [S2ORC](https://github.com/allenai/s2orc)                                                                | (title, abstract) and citation pairs  | 100M            |
| [Stackexchange](https://stackexchange.com/)                                                              | (question, answer)                    | 50M             |
| [xP3](https://huggingface.co/datasets/bigscience/xP3)                                                    | (input prompt, response)              | 80M             |
| [Miscellaneous unsupervised SBERT data](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2)   | -                                     | 10M             |

**Second stage**: supervised fine-tuning

| Dataset                                                                                  | Language     | # of text pairs |
|------------------------------------------------------------------------------------------|--------------|-----------------|
| [MS MARCO](https://microsoft.github.io/msmarco/)                                          | English      | 500k            |
| [NQ](https://github.com/facebookresearch/DPR)                                             | English      | 70k             |
| [Trivia QA](https://github.com/facebookresearch/DPR)                                      | English      | 60k             |
| [NLI from SimCSE](https://github.com/princeton-nlp/SimCSE)                                | English      | <300k           |
| [ELI5](https://huggingface.co/datasets/eli5)                                              | English      | 500k            |
| [DuReader Retrieval](https://github.com/baidu/DuReader/tree/master/DuReader-Retrieval)    | Chinese      | 86k             |
| [KILT Fever](https://huggingface.co/datasets/kilt_tasks)                                  | English      | 70k             |
| [KILT HotpotQA](https://huggingface.co/datasets/kilt_tasks)                               | English      | 70k             |
| [SQuAD](https://huggingface.co/datasets/squad)                                            | English      | 87k             |
| [Quora](https://huggingface.co/datasets/quora)                                            | English      | 150k            |
| [Mr. TyDi](https://huggingface.co/datasets/castorini/mr-tydi)                             | 11 languages | 50k             |
| [MIRACL](https://huggingface.co/datasets/miracl/miracl)                                   | 16 languages | 40k             |

For all labeled datasets, we only use their training sets for fine-tuning.

For other training details, please refer to our paper at [https://arxiv.org/pdf/2402.05672](https://arxiv.org/pdf/2402.05672).
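For intuition only, here is a minimal sketch of the in-batch-negative InfoNCE objective behind the contrastive stages described above, using the 0.01 temperature mentioned in the FAQ. This is an illustrative reconstruction, not the authors' training code.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(query_emb: torch.Tensor, passage_emb: torch.Tensor,
                  temperature: float = 0.01) -> torch.Tensor:
    """query_emb, passage_emb: (batch, dim), L2-normalized; row i of each forms a positive pair."""
    logits = query_emb @ passage_emb.T / temperature          # (batch, batch) similarity matrix
    labels = torch.arange(query_emb.size(0), device=query_emb.device)  # diagonal = positives
    return F.cross_entropy(logits, labels)                    # off-diagonal rows act as in-batch negatives
```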
## Benchmark Results on [Mr. TyDi](https://arxiv.org/abs/2108.08787)

| Model                 | Avg MRR@10 |      | ar   | bn   | en   | fi   | id   | ja   | ko   | ru   | sw   | te   | th   |
|-----------------------|------------|------|------|------|------|------|------|------|------|------|------|------|------|
| BM25                  | 33.3       |      | 36.7 | 41.3 | 15.1 | 28.8 | 38.2 | 21.7 | 28.1 | 32.9 | 39.6 | 42.4 | 41.7 |
| mDPR                  | 16.7       |      | 26.0 | 25.8 | 16.2 | 11.3 | 14.6 | 18.1 | 21.9 | 18.5 | 7.3  | 10.6 | 13.5 |
| BM25 + mDPR           | 41.7       |      | 49.1 | 53.5 | 28.4 | 36.5 | 45.5 | 35.5 | 36.2 | 42.7 | 40.5 | 42.0 | 49.2 |
|                       |            |      |      |      |      |      |      |      |      |      |      |      |      |
| multilingual-e5-small | 64.4       |      | 71.5 | 66.3 | 54.5 | 57.7 | 63.2 | 55.4 | 54.3 | 60.8 | 65.4 | 89.1 | 70.1 |
| multilingual-e5-base  | 65.9       |      | 72.3 | 65.0 | 58.5 | 60.8 | 64.9 | 56.6 | 55.8 | 62.7 | 69.0 | 86.6 | 72.7 |
| multilingual-e5-large | **70.5**   |      | 77.5 | 73.2 | 60.8 | 66.8 | 68.5 | 62.5 | 61.6 | 65.8 | 72.7 | 90.2 | 76.2 |

## MTEB Benchmark Evaluation

Check out [unilm/e5](https://github.com/microsoft/unilm/tree/master/e5) to reproduce evaluation results on the [BEIR](https://arxiv.org/abs/2104.08663) and [MTEB benchmark](https://arxiv.org/abs/2210.07316).

## Support for Sentence Transformers

Below is an example for usage with sentence_transformers.

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('intfloat/multilingual-e5-small')

input_texts = [
    'query: how much protein should a female eat',
    'query: 南瓜的家常做法',
    "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
    "passage: 1.清炒南瓜丝 原料:嫩南瓜半个 调料:葱、盐、白糖、鸡精 做法: 1、南瓜用刀薄薄的削去表面一层皮,用勺子刮去瓤 2、擦成细丝(没有擦菜板就用刀慢慢切成细丝) 3、锅烧热放油,入葱花煸出香味 4、入南瓜丝快速翻炒一分钟左右,放盐、一点白糖和鸡精调味出锅 2.香葱炒南瓜 原料:南瓜1只 调料:香葱、蒜末、橄榄油、盐 做法: 1、将南瓜去皮,切成片 2、油锅8成热后,将蒜末放入爆香 3、爆香后,将南瓜片放入,翻炒 4、在翻炒的同时,可以不时地往锅里加水,但不要太多 5、放入盐,炒匀 6、南瓜差不多软和绵了之后,就可以关火 7、撒入香葱,即可出锅"
]

embeddings = model.encode(input_texts, normalize_embeddings=True)
```

Package requirements

`pip install sentence_transformers~=2.2.2`

Contributors: [michaelfeil](https://huggingface.co/michaelfeil)

## FAQ

**1. Do I need to add the prefix "query: " and "passage: " to input texts?**

Yes, this is how the model is trained, otherwise you will see a performance degradation.

Here are some rules of thumb:
- Use "query: " and "passage: " correspondingly for asymmetric tasks such as passage retrieval in open QA, ad-hoc information retrieval.
- Use "query: " prefix for symmetric tasks such as semantic similarity, bitext mining, paraphrase retrieval.
- Use "query: " prefix if you want to use embeddings as features, such as linear probing classification, clustering.

**2. Why are my reproduced results slightly different from those reported in the model card?**

Different versions of `transformers` and `pytorch` could cause negligible but non-zero performance differences.

**3. Why do the cosine similarity scores distribute around 0.7 to 1.0?**

This is a known and expected behavior as we use a low temperature 0.01 for InfoNCE contrastive loss.

For text embedding tasks like text retrieval or semantic similarity, what matters is the relative order of the scores instead of the absolute values, so this should not be an issue.
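To illustrate the point in FAQ 3, since only the relative order of scores matters, ranking candidates is unaffected by the compressed score range. A minimal sketch follows; the query and passages are invented for illustration.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('intfloat/multilingual-e5-small')

query_emb = model.encode('query: how to cook pumpkin', normalize_embeddings=True)
passage_embs = model.encode(
    ['passage: Slice the pumpkin, then stir-fry it with scallions and salt.',
     'passage: The 2018 World Cup was held in Russia.'],
    normalize_embeddings=True,
)

scores = util.cos_sim(query_emb, passage_embs)[0]
# Both scores may sit in a narrow high range, but the ranking is what matters.
print(scores.argsort(descending=True).tolist())
```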
## Citation

If you find our paper or models helpful, please consider citing them as follows:

```
@article{wang2024multilingual,
  title={Multilingual E5 Text Embeddings: A Technical Report},
  author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Yang, Linjun and Majumder, Rangan and Wei, Furu},
  journal={arXiv preprint arXiv:2402.05672},
  year={2024}
}
```

## Limitations

Long texts will be truncated to at most 512 tokens.
nvidia/segformer-b1-finetuned-ade-512-512
nvidia
"2022-08-06T10:08:05Z"
820,379
0
transformers
[ "transformers", "pytorch", "tf", "segformer", "vision", "image-segmentation", "dataset:scene_parse_150", "arxiv:2105.15203", "license:other", "endpoints_compatible", "region:us" ]
image-segmentation
"2022-03-02T23:29:05Z"
---
license: other
tags:
- vision
- image-segmentation
datasets:
- scene_parse_150
widget:
- src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg
  example_title: House
- src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000002.jpg
  example_title: Castle
---

# SegFormer (b1-sized) model fine-tuned on ADE20k

SegFormer model fine-tuned on ADE20k at resolution 512x512. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer).

Disclaimer: The team releasing SegFormer did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset.

## Intended uses & limitations

You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you.

### How to use

Here is how to use this model to perform semantic segmentation on an image from the COCO 2017 dataset:

```python
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation
from PIL import Image
import requests

feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/segformer-b1-finetuned-ade-512-512")
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b1-finetuned-ade-512-512")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits  # shape (batch_size, num_labels, height/4, width/4)
```

For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html); a short post-processing sketch is also shown after the citation below.

### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-2105-15203,
  author    = {Enze Xie and Wenhai Wang and Zhiding Yu and Anima Anandkumar and Jose M. Alvarez and Ping Luo},
  title     = {SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers},
  journal   = {CoRR},
  volume    = {abs/2105.15203},
  year      = {2021},
  url       = {https://arxiv.org/abs/2105.15203},
  eprinttype = {arXiv},
  eprint    = {2105.15203},
  timestamp = {Wed, 02 Jun 2021 11:46:42 +0200},
  biburl    = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
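As referenced above, here is a short post-processing sketch (not part of the original card) that upsamples the low-resolution logits back to the input size and takes a per-pixel argmax; `logits` and `image` are assumed to come from the usage snippet above.

```python
import torch

# Upsample the 1/4-resolution logits to the image size, then argmax over the
# class dimension to obtain a (height, width) map of ADE20k class indices.
upsampled_logits = torch.nn.functional.interpolate(
    logits,
    size=image.size[::-1],  # PIL size is (width, height); interpolate expects (height, width)
    mode="bilinear",
    align_corners=False,
)
predicted_segmentation = upsampled_logits.argmax(dim=1)[0]
```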
cross-encoder/ms-marco-TinyBERT-L-2-v2
cross-encoder
"2021-08-05T08:39:45Z"
816,431
16
transformers
[ "transformers", "pytorch", "jax", "bert", "text-classification", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2022-03-02T23:29:05Z"
---
license: apache-2.0
---

# Cross-Encoder for MS Marco

This model was trained on the [MS Marco Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) task.

The model can be used for Information Retrieval: Given a query, encode the query with all possible passages (e.g. retrieved with ElasticSearch). Then sort the passages in decreasing order. See [SBERT.net Retrieve & Re-rank](https://www.sbert.net/examples/applications/retrieve_rerank/README.html) for more details. The training code is available here: [SBERT.net Training MS Marco](https://github.com/UKPLab/sentence-transformers/tree/master/examples/training/ms_marco)

## Usage with Transformers

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/ms-marco-TinyBERT-L-2-v2')
tokenizer = AutoTokenizer.from_pretrained('cross-encoder/ms-marco-TinyBERT-L-2-v2')

features = tokenizer(['How many people live in Berlin?', 'How many people live in Berlin?'],
                     ['Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.', 'New York City is famous for the Metropolitan Museum of Art.'],
                     padding=True, truncation=True, return_tensors="pt")

model.eval()
with torch.no_grad():
    scores = model(**features).logits
    print(scores)
```

## Usage with SentenceTransformers

The usage becomes easier when you have [SentenceTransformers](https://www.sbert.net/) installed. Then, you can use the pre-trained models like this:

```python
from sentence_transformers import CrossEncoder

model = CrossEncoder('cross-encoder/ms-marco-TinyBERT-L-2-v2', max_length=512)
scores = model.predict([('Query', 'Paragraph1'), ('Query', 'Paragraph2'), ('Query', 'Paragraph3')])
```

## Performance

In the following table, we provide various pre-trained Cross-Encoders together with their performance on the [TREC Deep Learning 2019](https://microsoft.github.io/TREC-2019-Deep-Learning/) and the [MS Marco Passage Reranking](https://github.com/microsoft/MSMARCO-Passage-Ranking/) dataset.

| Model-Name | NDCG@10 (TREC DL 19) | MRR@10 (MS Marco Dev) | Docs / Sec |
| ------------- |:-------------| -----| --- |
| **Version 2 models** | | | |
| cross-encoder/ms-marco-TinyBERT-L-2-v2 | 69.84 | 32.56 | 9000 |
| cross-encoder/ms-marco-MiniLM-L-2-v2 | 71.01 | 34.85 | 4100 |
| cross-encoder/ms-marco-MiniLM-L-4-v2 | 73.04 | 37.70 | 2500 |
| cross-encoder/ms-marco-MiniLM-L-6-v2 | 74.30 | 39.01 | 1800 |
| cross-encoder/ms-marco-MiniLM-L-12-v2 | 74.31 | 39.02 | 960 |
| **Version 1 models** | | | |
| cross-encoder/ms-marco-TinyBERT-L-2 | 67.43 | 30.15 | 9000 |
| cross-encoder/ms-marco-TinyBERT-L-4 | 68.09 | 34.50 | 2900 |
| cross-encoder/ms-marco-TinyBERT-L-6 | 69.57 | 36.13 | 680 |
| cross-encoder/ms-marco-electra-base | 71.99 | 36.41 | 340 |
| **Other models** | | | |
| nboost/pt-tinybert-msmarco | 63.63 | 28.80 | 2900 |
| nboost/pt-bert-base-uncased-msmarco | 70.94 | 34.75 | 340 |
| nboost/pt-bert-large-msmarco | 73.36 | 36.48 | 100 |
| Capreolus/electra-base-msmarco | 71.23 | 36.89 | 340 |
| amberoad/bert-multilingual-passage-reranking-msmarco | 68.40 | 35.54 | 330 |
| sebastian-hofstaetter/distilbert-cat-margin_mse-T2-msmarco | 72.82 | 37.88 | 720 |

Note: Runtime was computed on a V100 GPU.
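To make the "sort the passages in decreasing order" step concrete, here is a small sketch (assembled from the snippets above; not part of the original card) that reranks candidate passages for a single query:

```python
from sentence_transformers import CrossEncoder

model = CrossEncoder('cross-encoder/ms-marco-TinyBERT-L-2-v2', max_length=512)

query = 'How many people live in Berlin?'
passages = [
    'New York City is famous for the Metropolitan Museum of Art.',
    'Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.',
]

# One score per (query, passage) pair; higher means more relevant.
scores = model.predict([(query, passage) for passage in passages])
for score, passage in sorted(zip(scores, passages), reverse=True):
    print(f'{score:.2f}\t{passage}')
```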
EmergentMethods/gliner_medium_news-v2.1
EmergentMethods
"2024-06-18T08:33:15Z"
813,588
68
gliner
[ "gliner", "pytorch", "token-classification", "en", "dataset:EmergentMethods/AskNews-NER-v0", "arxiv:2406.10258", "license:apache-2.0", "region:us" ]
token-classification
"2024-04-17T09:05:00Z"
---
license: apache-2.0
datasets:
- EmergentMethods/AskNews-NER-v0
tags:
- gliner
language:
- en
pipeline_tag: token-classification
---

# Model Card for gliner_medium_news-v2.1

This model is a fine-tune of [GLiNER](https://huggingface.co/urchade/gliner_medium-v2.1) aimed at improving accuracy across a broad range of topics, especially with respect to long-context news entity extraction. As shown in the table below, these fine-tunes improved the base GLiNER model's zero-shot accuracy by up to 7.5% across 18 benchmark datasets.

![results table](assets/zero-shot_18_table.png)

The underlying dataset, [AskNews-NER-v0](https://huggingface.co/datasets/EmergentMethods/AskNews-NER-v0), was engineered with the objective of diversifying global perspectives by enforcing country/language/topic/temporal diversity. All data used to fine-tune this model was synthetically generated. WizardLM 13B v1.2 was used for translation/summarization of open-web news articles, while Llama3 70b instruct was used for entity extraction. Both the diversification and fine-tuning methods are presented in our paper on [ArXiv](https://arxiv.org/abs/2406.10258).

# Usage

```python
from gliner import GLiNER

model = GLiNER.from_pretrained("EmergentMethods/gliner_medium_news-v2.1")

text = """
The Chihuahua State Public Security Secretariat (SSPE) arrested 35-year-old Salomón C. T. in Ciudad Juárez, found in possession of a stolen vehicle, a white GMC Yukon, which was reported stolen in the city's streets. The arrest was made by intelligence and police analysis personnel during an investigation in the border city. The arrest is related to a previous detention on February 6, which involved armed men in a private vehicle. The detainee and the vehicle were turned over to the Chihuahua State Attorney General's Office for further investigation into the case.
"""

labels = ["person", "location", "date", "event", "facility", "vehicle", "number", "organization"]

entities = model.predict_entities(text, labels)

for entity in entities:
    print(entity["text"], "=>", entity["label"])
```

Output:
```
Chihuahua State Public Security Secretariat => organization
SSPE => organization
35-year-old => number
Salomón C. T. => person
Ciudad Juárez => location
GMC Yukon => vehicle
February 6 => date
Chihuahua State Attorney General's Office => organization
```

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

The synthetic data underlying this news fine-tune was pulled from the [AskNews API](https://docs.asknews.app). We enforced diversity across country/language/topic/time.

Countries:
![country distribution](assets/countries_distribution.png)

Entity types:
![entities](assets/entity-types_limited.png)

Topics:
![topics](assets/topics_fig_connected.png)

- **Developed by:** [Emergent Methods](https://emergentmethods.ai/)
- **Funded by:** [Emergent Methods](https://emergentmethods.ai/)
- **Shared by:** [Emergent Methods](https://emergentmethods.ai/)
- **Model type:** microsoft/deberta
- **Language(s) (NLP):** English (en) (English texts and translations from Spanish (es), Portuguese (pt), German (de), Russian (ru), French (fr), Arabic (ar), Italian (it), Ukrainian (uk), Norwegian (no), Swedish (sv), Danish (da)).
- **License:** Apache 2.0
- **Finetuned from model:** [GLiNER](https://huggingface.co/urchade/gliner_medium-v2.1)

### Model Sources

<!-- Provide the basic links for the model. -->
- **Repository:** To be added
- **Paper:** [https://arxiv.org/abs/2406.10258](https://arxiv.org/abs/2406.10258)
- **Demo:** To be added

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

As the name suggests, this model is aimed at generalist entity extraction. Although we used news to fine-tune this model, it improved accuracy across 18 benchmark datasets by up to 7.5%. This means that the broad and diversified underlying dataset has helped it to recognize and extract more entity types. This model is shockingly compact, and can be used for high-throughput production use cases. This is another reason we have licensed this as Apache 2.0. Currently, [AskNews](https://asknews.app) is using this fine-tune for entity extraction in their system.

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

Although the goal of the dataset is to reduce bias and improve diversity, it is still biased toward Western languages and countries. This limitation originates from the abilities of Llama2 for translation and summary generation. Further, any bias originating in Llama2's training data will also be present in this dataset, since Llama2 was used to summarize the open-web articles. Further, any biases present in Llama3 will also be present in this dataset, since Llama3 was used to extract entities from the summaries.

![countries distribution](figures/topics_fig_connected.png)

## How to Get Started with the Model

Use the code in the Usage section above to get started with the model; a small confidence-threshold sketch is also shown at the end of this card.

## Training Details

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

The training dataset is [AskNews-NER-v0](https://huggingface.co/datasets/EmergentMethods/AskNews-NER-v0). Other training details can be found in the [companion paper](https://arxiv.org/abs/2406.10258).

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

- **Hardware Type:** 1xA4500
- **Hours used:** 10
- **Carbon Emitted:** 0.6 kg (According to [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute))

## Citation

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**
To be added

**APA:**
To be added

## Model Authors

Elin Törnquist, Emergent Methods elin at emergentmethods.ai
Robert Caulk, Emergent Methods rob at emergentmethods.ai

## Model Contact

Elin Törnquist, Emergent Methods elin at emergentmethods.ai
Robert Caulk, Emergent Methods rob at emergentmethods.ai
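As mentioned in the How to Get Started section, here is a small confidence-threshold sketch (not part of the original card; the `threshold` keyword and per-entity `score` field come from the upstream GLiNER API):

```python
from gliner import GLiNER

model = GLiNER.from_pretrained("EmergentMethods/gliner_medium_news-v2.1")

text = "Salomón C. T. was arrested in Ciudad Juárez on February 6."
labels = ["person", "location", "date"]

# Raising the threshold keeps only high-confidence entities (precision over recall).
entities = model.predict_entities(text, labels, threshold=0.7)
for entity in entities:
    print(entity["text"], "=>", entity["label"], f"({entity['score']:.2f})")
```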
cointegrated/rubert-tiny2
cointegrated
"2023-10-14T21:23:32Z"
813,085
103
sentence-transformers
[ "sentence-transformers", "pytorch", "safetensors", "bert", "pretraining", "russian", "fill-mask", "embeddings", "masked-lm", "tiny", "feature-extraction", "sentence-similarity", "transformers", "ru", "license:mit", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
"2022-03-02T23:29:05Z"
---
language:
- ru
pipeline_tag: sentence-similarity
tags:
- russian
- fill-mask
- pretraining
- embeddings
- masked-lm
- tiny
- feature-extraction
- sentence-similarity
- sentence-transformers
- transformers
license: mit
widget:
- text: Миниатюрная модель для [MASK] разных задач.
---

This is an updated version of [cointegrated/rubert-tiny](https://huggingface.co/cointegrated/rubert-tiny): a small Russian BERT-based encoder with high-quality sentence embeddings. This [post in Russian](https://habr.com/ru/post/669674/) gives more details.

The differences from the previous version include:
- a larger vocabulary: 83828 tokens instead of 29564;
- larger supported sequences: 2048 instead of 512;
- sentence embeddings approximate LaBSE more closely than before;
- meaningful segment embeddings (tuned on the NLI task);
- the model is focused only on Russian.

The model should be used as is to produce sentence embeddings (e.g. for KNN classification of short texts) or fine-tuned for a downstream task.

Sentence embeddings can be produced as follows:

```python
# pip install transformers sentencepiece
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("cointegrated/rubert-tiny2")
model = AutoModel.from_pretrained("cointegrated/rubert-tiny2")
# model.cuda()  # uncomment it if you have a GPU

def embed_bert_cls(text, model, tokenizer):
    t = tokenizer(text, padding=True, truncation=True, return_tensors='pt')
    with torch.no_grad():
        model_output = model(**{k: v.to(model.device) for k, v in t.items()})
    embeddings = model_output.last_hidden_state[:, 0, :]
    embeddings = torch.nn.functional.normalize(embeddings)
    return embeddings[0].cpu().numpy()

print(embed_bert_cls('привет мир', model, tokenizer).shape)
# (312,)
```

Alternatively, you can use the model with `sentence_transformers`:

```Python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('cointegrated/rubert-tiny2')
sentences = ["привет мир", "hello world", "здравствуй вселенная"]
embeddings = model.encode(sentences)
print(embeddings)
```
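As a follow-up sketch (not part of the original card; `util.cos_sim` is the similarity helper shipped with `sentence_transformers`), the embeddings above can be compared directly:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('cointegrated/rubert-tiny2')
sentences = ["привет мир", "hello world", "здравствуй вселенная"]
embeddings = model.encode(sentences, convert_to_tensor=True)

# Cosine similarity of "привет мир" against the other two sentences.
print(util.cos_sim(embeddings[0], embeddings[1:]))
```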
microsoft/Phi-3.5-vision-instruct
microsoft
"2024-09-26T22:42:52Z"
804,068
560
transformers
[ "transformers", "safetensors", "phi3_v", "text-generation", "nlp", "code", "vision", "image-text-to-text", "conversational", "custom_code", "multilingual", "arxiv:2404.14219", "license:mit", "autotrain_compatible", "region:us" ]
image-text-to-text
"2024-08-16T23:48:22Z"
---
license: mit
license_link: https://huggingface.co/microsoft/Phi-3.5-vision-instruct/resolve/main/LICENSE
language:
- multilingual
pipeline_tag: image-text-to-text
tags:
- nlp
- code
- vision
inference:
  parameters:
    temperature: 0.7
widget:
- messages:
  - role: user
    content: <|image_1|>Can you describe what you see in the image?
library_name: transformers
---

## Model Summary

Phi-3.5-vision is a lightweight, state-of-the-art open multimodal model built upon datasets which include - synthetic data and filtered publicly available websites - with a focus on very high-quality, reasoning dense data both on text and vision. The model belongs to the Phi-3 model family, and the multimodal version supports a context length of 128K tokens. The model underwent a rigorous enhancement process, incorporating both supervised fine-tuning and direct preference optimization to ensure precise instruction adherence and robust safety measures.

🏡 [Phi-3 Portal](https://azure.microsoft.com/en-us/products/phi-3) <br>
📰 [Phi-3 Microsoft Blog](https://aka.ms/phi3.5-techblog) <br>
📖 [Phi-3 Technical Report](https://arxiv.org/abs/2404.14219) <br>
👩‍🍳 [Phi-3 Cookbook](https://github.com/microsoft/Phi-3CookBook) <br>
🖥️ [Try It](https://aka.ms/try-phi3.5vision) <br>

**Phi-3.5**: [[mini-instruct]](https://huggingface.co/microsoft/Phi-3.5-mini-instruct); [[MoE-instruct]](https://huggingface.co/microsoft/Phi-3.5-MoE-instruct); [[vision-instruct]](https://huggingface.co/microsoft/Phi-3.5-vision-instruct)

## Intended Uses

### Primary Use Cases

The model is intended for broad commercial and research use in English. The model provides uses for general purpose AI systems and applications with visual and text input capabilities which require:

1) Memory/compute constrained environments
2) Latency bound scenarios
3) General image understanding
4) Optical character recognition
5) Chart and table understanding
6) Multiple image comparison
7) Multi-image or video clip summarization

Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features.

### Use Case Considerations

Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case.

***Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.***

## Release Notes

In this release, the model enables multi-frame image understanding and reasoning, based on valuable customer feedback. The hero multi-frame capabilities include detailed image comparison, multi-image summarization/storytelling and video summarization, which have broad applications in Office scenarios. We also observed performance improvement on most single image benchmarks, e.g., MMMU performance improves from 40.2 to 43.0, MMBench from 80.5 to 81.9, and the document understanding benchmark TextVQA from 70.9 to 72.0. We believe most use cases will benefit from this release, but we encourage users to test the new model in their AI applications.
We appreciate the enthusiastic adoption of the Phi-3 model family and continue to welcome all the feedback from the community.

Below are the comparison results on existing multi-image benchmarks. On average, our model outperforms competitor models of the same size and is competitive with much bigger models on multi-frame capabilities and video summarization.

**BLINK**: a benchmark with 14 visual tasks that humans can solve very quickly but are still hard for current multimodal LLMs.

| Benchmark | Phi-3.5-vision-instruct | LlaVA-Interleave-Qwen-7B | InternVL-2-4B | InternVL-2-8B | Gemini-1.5-Flash | GPT-4o-mini | Claude-3.5-Sonnet | Gemini-1.5-Pro | GPT-4o |
|--|--|--|--|--|--|--|--|--|--|
| Art Style | 87.2 | 62.4 | 55.6 | 52.1 | 64.1 | 70.1 | 59.8 | 70.9 | 73.3 |
| Counting | 54.2 | 56.7 | 54.2 | 66.7 | 51.7 | 55.0 | 59.2 | 65.0 | 65.0 |
| Forensic Detection | 92.4 | 31.1 | 40.9 | 34.1 | 54.5 | 38.6 | 67.4 | 60.6 | 75.8 |
| Functional Correspondence | 29.2 | 34.6 | 24.6 | 24.6 | 33.1 | 26.9 | 33.8 | 31.5 | 43.8 |
| IQ Test | 25.3 | 26.7 | 26.0 | 30.7 | 25.3 | 29.3 | 26.0 | 34.0 | 19.3 |
| Jigsaw | 68.0 | 86.0 | 55.3 | 52.7 | 71.3 | 72.7 | 57.3 | 68.0 | 67.3 |
| Multi-View Reasoning | 54.1 | 44.4 | 48.9 | 42.9 | 48.9 | 48.1 | 55.6 | 49.6 | 46.6 |
| Object Localization | 49.2 | 54.9 | 53.3 | 54.1 | 44.3 | 57.4 | 62.3 | 65.6 | 68.0 |
| Relative Depth | 69.4 | 77.4 | 63.7 | 67.7 | 57.3 | 58.1 | 71.8 | 76.6 | 71.0 |
| Relative Reflectance | 37.3 | 34.3 | 32.8 | 38.8 | 32.8 | 27.6 | 36.6 | 38.8 | 40.3 |
| Semantic Correspondence | 36.7 | 31.7 | 31.7 | 22.3 | 32.4 | 31.7 | 45.3 | 48.9 | 54.0 |
| Spatial Relation | 65.7 | 75.5 | 78.3 | 78.3 | 55.9 | 81.1 | 60.1 | 79.0 | 84.6 |
| Visual Correspondence | 53.5 | 40.7 | 34.9 | 33.1 | 29.7 | 52.9 | 72.1 | 81.4 | 86.0 |
| Visual Similarity | 83.0 | 91.9 | 48.1 | 45.2 | 47.4 | 77.8 | 84.4 | 81.5 | 88.1 |
| **Overall** | **57.0** | **53.1** | **45.9** | **45.4** | **45.8** | **51.9** | **56.5** | **61.0** | **63.2** |

**Video-MME**: comprehensively assesses the capabilities of MLLMs in processing video data, covering a wide range of visual domains, temporal durations, and data modalities.

| Benchmark | Phi-3.5-vision-instruct | LlaVA-Interleave-Qwen-7B | InternVL-2-4B | InternVL-2-8B | Gemini-1.5-Flash | GPT-4o-mini | Claude-3.5-Sonnet | Gemini-1.5-Pro | GPT-4o |
|--|--|--|--|--|--|--|--|--|--|
| short (<2min) | 60.8 | 62.3 | 60.7 | 61.7 | 72.2 | 70.1 | 66.3 | 73.3 | 77.7 |
| medium (4-15min) | 47.7 | 47.1 | 46.4 | 49.6 | 62.7 | 59.6 | 54.7 | 61.2 | 68.0 |
| long (30-60min) | 43.8 | 41.2 | 42.6 | 46.6 | 52.1 | 53.9 | 46.6 | 53.2 | 59.6 |
| **Overall** | **50.8** | **50.2** | **49.9** | **52.6** | **62.3** | **61.2** | **55.9** | **62.6** | **68.4** |

## Usage

### Requirements

The current `transformers` version can be verified with: `pip list | grep transformers`.

Examples of required packages:
```
flash_attn==2.5.8
numpy==1.24.4
Pillow==10.3.0
Requests==2.31.0
torch==2.3.0
torchvision==0.18.0
transformers==4.43.0
accelerate==0.30.0
```

Phi-3.5-vision-Instruct is also available in [Azure AI Studio](https://aka.ms/try-phi3.5vision).
### Input Formats

Given the nature of the training data, the Phi-3.5-vision model is best suited for prompts using the chat format as follows:

Single image:
```
<|user|>\n<|image_1|>\n{prompt}<|end|>\n<|assistant|>\n
```

Multi-turn conversations:
```
<|user|>\n<|image_1|>\n{prompt_1}<|end|>\n<|assistant|>\n{response_1}<|end|>\n<|user|>\n{prompt_2}<|end|>\n<|assistant|>\n
```

For multi-image usage, add multiple image placeholders at the front of the prompt. The `<|image_{i}|>` indices should start from 1. An example prompt is shown below:
```
<|user|>\n<|image_1|>\n<|image_2|>\n<|image_3|>\n<|image_4|>\n{prompt}<|end|>\n<|assistant|>\n
```

### Loading the model locally

After obtaining the Phi-3.5-vision-instruct model checkpoints, users can use this sample code for inference.

```python
from PIL import Image
import requests
from transformers import AutoModelForCausalLM
from transformers import AutoProcessor

model_id = "microsoft/Phi-3.5-vision-instruct"

# Note: set _attn_implementation='eager' if you don't have flash_attn installed
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    trust_remote_code=True,
    torch_dtype="auto",
    _attn_implementation='flash_attention_2'
)

# for best performance, use num_crops=4 for multi-frame, num_crops=16 for single-frame.
processor = AutoProcessor.from_pretrained(model_id,
    trust_remote_code=True,
    num_crops=4
)

images = []
placeholder = ""

# Note: if you hit OOM, consider reducing the number of frames in this example.
for i in range(1, 20):
    url = f"https://image.slidesharecdn.com/azureintroduction-191206101932/75/Introduction-to-Microsoft-Azure-Cloud-{i}-2048.jpg"
    images.append(Image.open(requests.get(url, stream=True).raw))
    placeholder += f"<|image_{i}|>\n"

messages = [
    {"role": "user", "content": placeholder + "Summarize the deck of slides."},
]

prompt = processor.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

inputs = processor(prompt, images, return_tensors="pt").to("cuda:0")

generation_args = {
    "max_new_tokens": 1000,
    "temperature": 0.0,
    "do_sample": False,
}

generate_ids = model.generate(**inputs,
    eos_token_id=processor.tokenizer.eos_token_id,
    **generation_args
)

# remove input tokens
generate_ids = generate_ids[:, inputs['input_ids'].shape[1]:]
response = processor.batch_decode(generate_ids,
    skip_special_tokens=True,
    clean_up_tokenization_spaces=False)[0]

print(response)
```

Notes:
+ to achieve the best performance, we suggest setting `num_crops=4` for multi-frame inputs and `num_crops=16` for single-frame inputs.
+ to turn off flash attention, users can set `_attn_implementation='eager'`.

## Responsible AI Considerations

Like other models, the Phi family of models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:

* Quality of Service: The Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English.
* Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.
* Inappropriate or Offensive Content: These models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case.
* Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.
* Limited Scope for Code: The majority of Phi-3 training data is based on Python and uses common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.

Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include:

* Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.
* High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.
* Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).
* Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.
* Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.
* Identification of individuals: models with vision capabilities may have the potential to uniquely identify individuals in images. Safety post-training steers the model to refuse such requests, but developers should consider and implement, as appropriate, additional mitigations or user consent flows as required in their respective jurisdictions (e.g., building measures to blur faces in image inputs before processing).

## Training

### Models

**Architecture:** Phi-3.5-vision has 4.2B parameters and contains an image encoder, connector, projector, and the Phi-3 Mini language model.<br>
**Inputs:** Text and Image. It’s best suited for prompts using the chat format.<br>
**Context length:** 128K tokens<br>
**GPUs:** 256 A100-80G<br>
**Training time:** 6 days<br>
**Training data:** 500B tokens (vision tokens + text tokens)<br>
**Outputs:** Generated text in response to the input<br>
**Dates:** Trained between July and August 2024<br>
**Status:** This is a static model trained on an offline text dataset with a cutoff date of March 15, 2024.
Future versions of the tuned models may be released as we improve models.<br>
**Release date:** August 2024<br>

### Data Overview

Our training data includes a wide variety of sources, and is a combination of 1) publicly available documents filtered rigorously for quality, selected high-quality educational data and code; 2) selected high-quality image-text interleave data; 3) newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.), newly created image data, e.g., chart/table/diagram/slides, newly created multi-image and video data, e.g., short video clips/pair of two similar images; 4) high quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness.

The data collection process involved sourcing information from publicly available documents, with a meticulous approach to filtering out undesirable documents and images. To safeguard privacy, we carefully filtered various image and text data sources to remove or scrub any potentially personal data from the training data.

More details about data can be found in the [Phi-3 Technical Report](https://arxiv.org/pdf/2404.14219).

### How to finetune?

We recommend users take a look at the [Phi-3 CookBook finetuning recipe for Vision](https://github.com/microsoft/Phi-3CookBook/blob/main/md/04.Fine-tuning/FineTuning_Vision.md)

## Benchmarks

To understand the capabilities, we compare Phi-3.5-vision with a set of models over a variety of zero-shot benchmarks using our internal benchmark platform. Below is a high-level overview of the model quality on representative benchmarks:

| Category | Benchmark | Phi-3.5-vision-instruct | Intern-VL-2-4B | Intern-VL-2-8B | Gemini-1.5-Flash | GPT-4o-mini 2024-7-18 | Claude-3.5-Sonnet | Gemini-1.5-Pro | GPT-4o 2024-5-13 |
|--|--|--|--|--|--|--|--|--|--|
| Popular aggregated benchmark | MMMU (val) | 43.0 | 44.22 | 46.33 | 49.33 | 52.1 | 52.67 | 54.11 | 61.78 |
| | MMBench (dev-en) | 81.9 | 83.4 | 87.0 | 85.7 | 83.8 | 82.3 | 87.9 | 88.4 |
| Visual scientific knowledge reasoning | ScienceQA (img-test) | 91.3 | 94.9 | 95.9 | 84.5 | 84.0 | 73.8 | 86.0 | 88.5 |
| Visual math reasoning | MathVista (testmini) | 43.9 | 53.7 | 51.1 | 55.3 | 38.8 | 54.0 | 57.4 | 54.4 |
| | InterGPS (test) | 36.3 | 45.6 | 53.2 | 39.4 | 39.9 | 45.6 | 58.2 | 46.9 |
| Chart reasoning | AI2D (test) | 78.1 | 77.3 | 81.4 | 78.4 | 75.2 | 68.9 | 75.6 | 82.8 |
| | ChartQA (test) | 81.8 | 78.8 | 80.4 | 57.6 | 54.5 | 73.2 | 68.2 | 64.0 |
| Document Intelligence | TextVQA (val) | 72.0 | 66.2 | 68.8 | 67.4 | 70.9 | 70.5 | 64.5 | 75.6 |
| Object visual presence verification | POPE (test) | 86.1 | 83.3 | 84.2 | 86.1 | 83.6 | 76.6 | 89.3 | 87.0 |

## Safety Evaluation and Red-Teaming

**Approach**

The Phi-3 family of models has adopted a robust safety post-training approach. This approach leverages a variety of both open-source and in-house generated datasets. The overall technique employed to do the safety alignment is a combination of SFT (Supervised Fine-Tuning) and RLHF (Reinforcement Learning from Human Feedback) approaches by utilizing human-labeled and synthetic English-language datasets, including publicly available datasets focusing on helpfulness and harmlessness as well as various questions and answers targeted to multiple safety categories.
**Safety Evaluation**

We leveraged various evaluation techniques including red teaming, adversarial conversation simulations, and safety evaluation benchmark datasets to evaluate Phi-3.5 models' propensity to produce undesirable outputs across multiple risk categories. Several approaches were used to compensate for the limitations of one approach alone. Please refer to the [technical report](https://arxiv.org/pdf/2404.14219) for more details of our safety alignment.

## Software
* [PyTorch](https://github.com/pytorch/pytorch)
* [Transformers](https://github.com/huggingface/transformers)
* [Flash-Attention](https://github.com/HazyResearch/flash-attention)

## Hardware
Note that by default, the Phi-3.5-vision-instruct model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:
* NVIDIA A100
* NVIDIA A6000
* NVIDIA H100

## License
The model is licensed under the [MIT license](./LICENSE).

## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.
openai/whisper-medium
openai
"2024-02-29T10:57:42Z"
800,834
210
transformers
[ "transformers", "pytorch", "tf", "jax", "safetensors", "whisper", "automatic-speech-recognition", "audio", "hf-asr-leaderboard", "en", "zh", "de", "es", "ru", "ko", "fr", "ja", "pt", "tr", "pl", "ca", "nl", "ar", "sv", "it", "id", "hi", "fi", "vi", "he", "uk", "el", "ms", "cs", "ro", "da", "hu", "ta", "no", "th", "ur", "hr", "bg", "lt", "la", "mi", "ml", "cy", "sk", "te", "fa", "lv", "bn", "sr", "az", "sl", "kn", "et", "mk", "br", "eu", "is", "hy", "ne", "mn", "bs", "kk", "sq", "sw", "gl", "mr", "pa", "si", "km", "sn", "yo", "so", "af", "oc", "ka", "be", "tg", "sd", "gu", "am", "yi", "lo", "uz", "fo", "ht", "ps", "tk", "nn", "mt", "sa", "lb", "my", "bo", "tl", "mg", "as", "tt", "haw", "ln", "ha", "ba", "jw", "su", "arxiv:2212.04356", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2022-09-26T06:52:52Z"
---
language:
- en
- zh
- de
- es
- ru
- ko
- fr
- ja
- pt
- tr
- pl
- ca
- nl
- ar
- sv
- it
- id
- hi
- fi
- vi
- he
- uk
- el
- ms
- cs
- ro
- da
- hu
- ta
- no
- th
- ur
- hr
- bg
- lt
- la
- mi
- ml
- cy
- sk
- te
- fa
- lv
- bn
- sr
- az
- sl
- kn
- et
- mk
- br
- eu
- is
- hy
- ne
- mn
- bs
- kk
- sq
- sw
- gl
- mr
- pa
- si
- km
- sn
- yo
- so
- af
- oc
- ka
- be
- tg
- sd
- gu
- am
- yi
- lo
- uz
- fo
- ht
- ps
- tk
- nn
- mt
- sa
- lb
- my
- bo
- tl
- mg
- as
- tt
- haw
- ln
- ha
- ba
- jw
- su
tags:
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
widget:
- example_title: Librispeech sample 1
  src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
  src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
model-index:
- name: whisper-medium
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: LibriSpeech (clean)
      type: librispeech_asr
      config: clean
      split: test
      args:
        language: en
    metrics:
    - name: Test WER
      type: wer
      value: 2.9
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: LibriSpeech (other)
      type: librispeech_asr
      config: other
      split: test
      args:
        language: en
    metrics:
    - name: Test WER
      type: wer
      value: 5.9
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice 11.0
      type: mozilla-foundation/common_voice_11_0
      config: hi
      split: test
      args:
        language: hi
    metrics:
    - name: Test WER
      type: wer
      value: 53.87
pipeline_tag: automatic-speech-recognition
license: apache-2.0
---

# Whisper

Whisper is a pre-trained model for automatic speech recognition (ASR) and speech translation. Trained on 680k hours of labelled data, Whisper models demonstrate a strong ability to generalise to many datasets and domains **without** the need for fine-tuning.

Whisper was proposed in the paper [Robust Speech Recognition via Large-Scale Weak Supervision](https://arxiv.org/abs/2212.04356) by Alec Radford et al. from OpenAI. The original code repository can be found [here](https://github.com/openai/whisper).

**Disclaimer**: Content for this model card has partly been written by the Hugging Face team, and parts of it were copied and pasted from the original model card.

## Model details

Whisper is a Transformer based encoder-decoder model, also referred to as a _sequence-to-sequence_ model. It was trained on 680k hours of labelled speech data annotated using large-scale weak supervision.

The models were trained on either English-only data or multilingual data. The English-only models were trained on the task of speech recognition. The multilingual models were trained on both speech recognition and speech translation. For speech recognition, the model predicts transcriptions in the *same* language as the audio. For speech translation, the model predicts transcriptions in a *different* language from the audio.

Whisper checkpoints come in five configurations of varying model sizes. The smallest four are trained on either English-only or multilingual data. The largest checkpoints are multilingual only. All ten of the pre-trained checkpoints are available on the [Hugging Face Hub](https://huggingface.co/models?search=openai/whisper).
The checkpoints are summarised in the following table with links to the models on the Hub:

| Size | Parameters | English-only | Multilingual |
|----------|------------|------------------------------------------------------|-----------------------------------------------------|
| tiny | 39 M | [✓](https://huggingface.co/openai/whisper-tiny.en) | [✓](https://huggingface.co/openai/whisper-tiny) |
| base | 74 M | [✓](https://huggingface.co/openai/whisper-base.en) | [✓](https://huggingface.co/openai/whisper-base) |
| small | 244 M | [✓](https://huggingface.co/openai/whisper-small.en) | [✓](https://huggingface.co/openai/whisper-small) |
| medium | 769 M | [✓](https://huggingface.co/openai/whisper-medium.en) | [✓](https://huggingface.co/openai/whisper-medium) |
| large | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large) |
| large-v2 | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large-v2) |

# Usage

To transcribe audio samples, the model has to be used alongside a [`WhisperProcessor`](https://huggingface.co/docs/transformers/model_doc/whisper#transformers.WhisperProcessor).

The `WhisperProcessor` is used to:
1. Pre-process the audio inputs (converting them to log-Mel spectrograms for the model)
2. Post-process the model outputs (converting them from tokens to text)

The model is informed of which task to perform (transcription or translation) by passing the appropriate "context tokens". These context tokens are a sequence of tokens that are given to the decoder at the start of the decoding process, and take the following order:
1. The transcription always starts with the `<|startoftranscript|>` token
2. The second token is the language token (e.g. `<|en|>` for English)
3. The third token is the "task token". It can take one of two values: `<|transcribe|>` for speech recognition or `<|translate|>` for speech translation
4. In addition, a `<|notimestamps|>` token is added if the model should not include timestamp prediction

Thus, a typical sequence of context tokens might look as follows:
```
<|startoftranscript|> <|en|> <|transcribe|> <|notimestamps|>
```
This tells the model to decode in English, under the task of speech recognition, and not to predict timestamps.

These tokens can either be forced or un-forced. If they are forced, the model is made to predict each token at each position. This allows one to control the output language and task for the Whisper model. If they are un-forced, the Whisper model will automatically predict the output language and task itself.

The context tokens can be set accordingly:

```python
# `processor` is a WhisperProcessor loaded as shown in the examples below
model.config.forced_decoder_ids = processor.get_decoder_prompt_ids(language="english", task="transcribe")
```

This forces the model to predict in English under the task of speech recognition.

## Transcription

### English to English

In this example, the context tokens are 'unforced', meaning the model automatically predicts the output language (English) and task (transcribe).
```python >>> from transformers import WhisperProcessor, WhisperForConditionalGeneration >>> from datasets import load_dataset >>> # load model and processor >>> processor = WhisperProcessor.from_pretrained("openai/whisper-medium") >>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-medium") >>> model.config.forced_decoder_ids = None >>> # load dummy dataset and read audio files >>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") >>> sample = ds[0]["audio"] >>> input_features = processor(sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt").input_features >>> # generate token ids >>> predicted_ids = model.generate(input_features) >>> # decode token ids to text >>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=False) ['<|startoftranscript|><|en|><|transcribe|><|notimestamps|> Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.<|endoftext|>'] >>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True) [' Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.'] ``` The context tokens can be removed from the start of the transcription by setting `skip_special_tokens=True`. ### French to French The following example demonstrates French to French transcription by setting the decoder ids appropriately. ```python >>> from transformers import WhisperProcessor, WhisperForConditionalGeneration >>> from datasets import Audio, load_dataset >>> # load model and processor >>> processor = WhisperProcessor.from_pretrained("openai/whisper-medium") >>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-medium") >>> forced_decoder_ids = processor.get_decoder_prompt_ids(language="french", task="transcribe") >>> # load streaming dataset and read first audio sample >>> ds = load_dataset("common_voice", "fr", split="test", streaming=True) >>> ds = ds.cast_column("audio", Audio(sampling_rate=16_000)) >>> input_speech = next(iter(ds))["audio"] >>> input_features = processor(input_speech["array"], sampling_rate=input_speech["sampling_rate"], return_tensors="pt").input_features >>> # generate token ids >>> predicted_ids = model.generate(input_features, forced_decoder_ids=forced_decoder_ids) >>> # decode token ids to text >>> transcription = processor.batch_decode(predicted_ids) ['<|startoftranscript|><|fr|><|transcribe|><|notimestamps|> Un vrai travail intéressant va enfin être mené sur ce sujet.<|endoftext|>'] >>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True) [' Un vrai travail intéressant va enfin être mené sur ce sujet.'] ``` ## Translation Setting the task to "translate" forces the Whisper model to perform speech translation. 
### French to English

```python
>>> from transformers import WhisperProcessor, WhisperForConditionalGeneration
>>> from datasets import Audio, load_dataset

>>> # load model and processor
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-medium")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-medium")
>>> forced_decoder_ids = processor.get_decoder_prompt_ids(language="french", task="translate")

>>> # load streaming dataset and read first audio sample
>>> ds = load_dataset("common_voice", "fr", split="test", streaming=True)
>>> ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
>>> input_speech = next(iter(ds))["audio"]
>>> input_features = processor(input_speech["array"], sampling_rate=input_speech["sampling_rate"], return_tensors="pt").input_features

>>> # generate token ids
>>> predicted_ids = model.generate(input_features, forced_decoder_ids=forced_decoder_ids)
>>> # decode token ids to text
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
[' A very interesting work, we will finally be given on this subject.']
```

## Evaluation

This code snippet shows how to evaluate Whisper Medium on [LibriSpeech test-clean](https://huggingface.co/datasets/librispeech_asr):

```python
>>> from datasets import load_dataset
>>> from transformers import WhisperForConditionalGeneration, WhisperProcessor
>>> import torch
>>> from evaluate import load

>>> librispeech_test_clean = load_dataset("librispeech_asr", "clean", split="test")

>>> processor = WhisperProcessor.from_pretrained("openai/whisper-medium")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-medium").to("cuda")

>>> def map_to_pred(batch):
>>>     audio = batch["audio"]
>>>     input_features = processor(audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt").input_features
>>>     batch["reference"] = processor.tokenizer._normalize(batch['text'])
>>>
>>>     with torch.no_grad():
>>>         predicted_ids = model.generate(input_features.to("cuda"))[0]
>>>     transcription = processor.decode(predicted_ids)
>>>     batch["prediction"] = processor.tokenizer._normalize(transcription)
>>>     return batch

>>> result = librispeech_test_clean.map(map_to_pred)

>>> wer = load("wer")
>>> print(100 * wer.compute(references=result["reference"], predictions=result["prediction"]))
2.900409225488902
```

## Long-Form Transcription

The Whisper model is intrinsically designed to work on audio samples of up to 30s in duration. However, by using a chunking algorithm, it can be used to transcribe audio samples of arbitrary length. This is possible with the Transformers [`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline) method. Chunking is enabled by setting `chunk_length_s=30` when instantiating the pipeline. With chunking enabled, the pipeline can be run with batched inference. It can also be extended to predict sequence level timestamps by passing `return_timestamps=True`:

```python
>>> import torch
>>> from transformers import pipeline
>>> from datasets import load_dataset

>>> device = "cuda:0" if torch.cuda.is_available() else "cpu"

>>> pipe = pipeline(
>>>     "automatic-speech-recognition",
>>>     model="openai/whisper-medium",
>>>     chunk_length_s=30,
>>>     device=device,
>>> )

>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> sample = ds[0]["audio"]

>>> prediction = pipe(sample.copy(), batch_size=8)["text"]
" Mr. 
Quilter is the apostle of the middle classes, and we are glad to welcome his gospel." >>> # we can also return timestamps for the predictions >>> prediction = pipe(sample.copy(), batch_size=8, return_timestamps=True)["chunks"] [{'text': ' Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.', 'timestamp': (0.0, 5.44)}] ``` Refer to the blog post [ASR Chunking](https://huggingface.co/blog/asr-chunking) for more details on the chunking algorithm. ## Fine-Tuning The pre-trained Whisper model demonstrates a strong ability to generalise to different datasets and domains. However, its predictive capabilities can be improved further for certain languages and tasks through *fine-tuning*. The blog post [Fine-Tune Whisper with 🤗 Transformers](https://huggingface.co/blog/fine-tune-whisper) provides a step-by-step guide to fine-tuning the Whisper model with as little as 5 hours of labelled data. ### Evaluated Use The primary intended users of these models are AI researchers studying robustness, generalization, capabilities, biases, and constraints of the current model. However, Whisper is also potentially quite useful as an ASR solution for developers, especially for English speech recognition. We recognize that once models are released, it is impossible to restrict access to only “intended” uses or to draw reasonable guidelines around what is or is not research. The models are primarily trained and evaluated on ASR and speech translation to English tasks. They show strong ASR results in ~10 languages. They may exhibit additional capabilities, particularly if fine-tuned on certain tasks like voice activity detection, speaker classification, or speaker diarization but have not been robustly evaluated in these areas. We strongly recommend that users perform robust evaluations of the models in a particular context and domain before deploying them. In particular, we caution against using Whisper models to transcribe recordings of individuals taken without their consent or purporting to use these models for any kind of subjective classification. We recommend against use in high-risk domains like decision-making contexts, where flaws in accuracy can lead to pronounced flaws in outcomes. The models are intended to transcribe and translate speech, use of the model for classification is not only not evaluated but also not appropriate, particularly to infer human attributes. ## Training Data The models are trained on 680,000 hours of audio and the corresponding transcripts collected from the internet. 65% of this data (or 438,000 hours) represents English-language audio and matched English transcripts, roughly 18% (or 126,000 hours) represents non-English audio and English transcripts, while the final 17% (or 117,000 hours) represents non-English audio and the corresponding transcript. This non-English data represents 98 different languages. As discussed in [the accompanying paper](https://cdn.openai.com/papers/whisper.pdf), we see that performance on transcription in a given language is directly correlated with the amount of training data we employ in that language. ## Performance and Limitations Our studies show that, over many existing ASR systems, the models exhibit improved robustness to accents, background noise, technical language, as well as zero shot translation from multiple languages into English; and that accuracy on speech recognition and translation is near the state-of-the-art level. 
However, because the models are trained in a weakly supervised manner using large-scale noisy data, the predictions may include texts that are not actually spoken in the audio input (i.e. hallucination). We hypothesize that this happens because, given their general knowledge of language, the models combine trying to predict the next word in audio with trying to transcribe the audio itself.

Our models perform unevenly across languages, and we observe lower accuracy on low-resource and/or low-discoverability languages or languages where we have less training data. The models also exhibit disparate performance on different accents and dialects of particular languages, which may include higher word error rate across speakers of different genders, races, ages, or other demographic criteria. Our full evaluation results are presented in [the paper accompanying this release](https://cdn.openai.com/papers/whisper.pdf).

In addition, the sequence-to-sequence architecture of the model makes it prone to generating repetitive texts, which can be mitigated to some degree by beam search and temperature scheduling but not perfectly. Further analysis of these limitations is provided in [the paper](https://cdn.openai.com/papers/whisper.pdf). It is likely that this behavior and hallucinations may be worse on lower-resource and/or lower-discoverability languages.

## Broader Implications

We anticipate that Whisper models’ transcription capabilities may be used for improving accessibility tools. While Whisper models cannot be used for real-time transcription out of the box, their speed and size suggest that others may be able to build applications on top of them that allow for near-real-time speech recognition and translation. The real value of beneficial applications built on top of Whisper models suggests that the disparate performance of these models may have real economic implications.

There are also potential dual use concerns that come with releasing Whisper. While we hope the technology will be used primarily for beneficial purposes, making ASR technology more accessible could enable more actors to build capable surveillance technologies or scale up existing surveillance efforts, as the speed and accuracy allow for affordable automatic transcription and translation of large volumes of audio communication. Moreover, these models may have some capabilities to recognize specific individuals out of the box, which in turn presents safety concerns related both to dual use and disparate performance. In practice, we expect that the cost of transcription is not the limiting factor of scaling up surveillance projects.

### BibTeX entry and citation info

```bibtex
@misc{radford2022whisper,
  doi = {10.48550/ARXIV.2212.04356},
  url = {https://arxiv.org/abs/2212.04356},
  author = {Radford, Alec and Kim, Jong Wook and Xu, Tao and Brockman, Greg and McLeavey, Christine and Sutskever, Ilya},
  title = {Robust Speech Recognition via Large-Scale Weak Supervision},
  publisher = {arXiv},
  year = {2022},
  copyright = {arXiv.org perpetual, non-exclusive license}
}
```
naver/splade-cocondenser-ensembledistil
naver
"2022-05-11T08:05:37Z"
800,749
36
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "splade", "query-expansion", "document-expansion", "bag-of-words", "passage-retrieval", "knowledge-distillation", "en", "dataset:ms_marco", "arxiv:2205.04733", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2022-05-09T13:18:41Z"
--- license: cc-by-nc-sa-4.0 language: "en" tags: - splade - query-expansion - document-expansion - bag-of-words - passage-retrieval - knowledge-distillation datasets: - ms_marco --- ## SPLADE CoCondenser EnsembleDistil SPLADE model for passage retrieval. For additional details, please visit: * paper: https://arxiv.org/abs/2205.04733 * code: https://github.com/naver/splade | | MRR@10 (MS MARCO dev) | R@1000 (MS MARCO dev) | | --- | --- | --- | | `splade-cocondenser-ensembledistil` | 38.3 | 98.3 | ## Citation If you use our checkpoint, please cite our work: ``` @misc{https://doi.org/10.48550/arxiv.2205.04733, doi = {10.48550/ARXIV.2205.04733}, url = {https://arxiv.org/abs/2205.04733}, author = {Formal, Thibault and Lassance, Carlos and Piwowarski, Benjamin and Clinchant, Stéphane}, keywords = {Information Retrieval (cs.IR), Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {From Distillation to Hard Negative Sampling: Making Sparse Neural IR Models More Effective}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International} } ```
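The card does not include a usage snippet; below is a hedged sketch (following the log-saturated max-pooling formulation described in the paper and the official repository, not taken from this card) of producing a sparse vocabulary-level representation:

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_id = "naver/splade-cocondenser-ensembledistil"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

tokens = tokenizer("a passage about the population of Berlin", return_tensors="pt")
with torch.no_grad():
    logits = model(**tokens).logits  # (1, seq_len, vocab_size)

# SPLADE pooling: log(1 + ReLU(logits)), masked by the attention mask and
# max-pooled over the sequence -> one weight per vocabulary term.
weights = torch.log1p(torch.relu(logits)) * tokens["attention_mask"].unsqueeze(-1)
sparse_rep = weights.max(dim=1).values.squeeze(0)
print(int((sparse_rep > 0).sum()), "non-zero terms")
```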
facebook/contriever
facebook
"2022-01-19T17:23:28Z"
799,672
58
transformers
[ "transformers", "pytorch", "bert", "arxiv:2112.09118", "endpoints_compatible", "region:us" ]
null
"2022-03-02T23:29:05Z"
This model has been trained without supervision following the approach described in [Towards Unsupervised Dense Information Retrieval with Contrastive Learning](https://arxiv.org/abs/2112.09118). The associated GitHub repository is available at https://github.com/facebookresearch/contriever.

## Usage (HuggingFace Transformers)

Using the model directly available in HuggingFace transformers requires adding a mean pooling operation to obtain a sentence embedding.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained('facebook/contriever')
model = AutoModel.from_pretrained('facebook/contriever')

sentences = [
    "Where was Marie Curie born?",
    "Maria Sklodowska, later known as Marie Curie, was born on November 7, 1867.",
    "Born in Paris on 15 May 1859, Pierre Curie was the son of Eugène Curie, a doctor of French Catholic origin from Alsace."
]

# Apply tokenizer
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings (no gradient tracking needed at inference time)
with torch.no_grad():
    outputs = model(**inputs)

# Mean pooling over token embeddings, ignoring padding positions
def mean_pooling(token_embeddings, mask):
    token_embeddings = token_embeddings.masked_fill(~mask[..., None].bool(), 0.)
    sentence_embeddings = token_embeddings.sum(dim=1) / mask.sum(dim=1)[..., None]
    return sentence_embeddings

embeddings = mean_pooling(outputs[0], inputs['attention_mask'])
```
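Contriever scores query–passage pairs by dot product between these embeddings, so a small follow-on sketch (reusing the `embeddings` and `sentences` computed above) can rank the two passages against the question:

```python
# Rank the passages against the query by dot-product similarity
query_emb, passage_embs = embeddings[0], embeddings[1:]
scores = passage_embs @ query_emb  # one score per passage

for score, passage in sorted(zip(scores.tolist(), sentences[1:]), reverse=True):
    print(f"{score:.4f}  {passage}")
```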
csebuetnlp/banglat5_banglaparaphrase
csebuetnlp
"2022-11-05T17:14:38Z"
799,456
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "bn", "arxiv:2210.05109", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
"2022-10-15T04:19:58Z"
---
language:
- bn
license: cc-by-nc-sa-4.0
---

# banglat5_banglaparaphrase

This repository contains the pretrained checkpoint of the model **BanglaT5** finetuned on the [BanglaParaphrase](https://huggingface.co/datasets/csebuetnlp/BanglaParaphrase) dataset. This is a sequence-to-sequence transformer model pretrained with the "span corruption" objective. Finetuned models using this checkpoint achieve competitive results on the dataset.

For finetuning and inference, refer to the scripts in the official GitHub repository of [BanglaNLG](https://github.com/csebuetnlp/BanglaNLG).

**Note**: This model was pretrained using a specific normalization pipeline available [here](https://github.com/csebuetnlp/normalizer). All finetuning scripts in the official GitHub repository use this normalization by default. If you need to adapt the pretrained model for a different task, make sure the text units are normalized using this pipeline before tokenizing, to get the best results. A basic example is given below; a follow-up sketch for generating multiple paraphrase candidates is given after the citation.

## Using this model in `transformers`

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from normalizer import normalize  # pip install git+https://github.com/csebuetnlp/normalizer

model = AutoModelForSeq2SeqLM.from_pretrained("csebuetnlp/banglat5_banglaparaphrase")
tokenizer = AutoTokenizer.from_pretrained("csebuetnlp/banglat5_banglaparaphrase", use_fast=False)

input_sentence = ""  # put the Bangla sentence to paraphrase here

input_ids = tokenizer(normalize(input_sentence), return_tensors="pt").input_ids
generated_tokens = model.generate(input_ids)
decoded_tokens = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)[0]
print(decoded_tokens)
```

## Benchmarks

* Supervised fine-tuning

| Test Set | Model | sacreBLEU | ROUGE-L | PINC | BERTScore | BERT-iBLEU |
| -------- | ----- | --------- | ------- | ---- | --------- | ---------- |
| [BanglaParaphrase](https://huggingface.co/datasets/csebuetnlp/BanglaParaphrase) | [BanglaT5](https://huggingface.co/csebuetnlp/banglat5)<br>[IndicBART](https://huggingface.co/ai4bharat/IndicBART)<br>[IndicBARTSS](https://huggingface.co/ai4bharat/IndicBARTSS) | 32.8<br>5.60<br>4.90 | 63.58<br>35.61<br>33.66 | 74.40<br>80.26<br>82.10 | 94.80<br>91.50<br>91.10 | 92.18<br>91.16<br>90.95 |
| [IndicParaphrase](https://huggingface.co/datasets/ai4bharat/IndicParaphrase) | BanglaT5<br>IndicBART<br>IndicBARTSS | 11.0<br>12.0<br>10.7 | 19.99<br>21.58<br>20.59 | 74.50<br>76.83<br>77.60 | 94.80<br>93.30<br>93.10 | 87.74<br>90.65<br>90.54 |

The dataset can be found at the link below:

* **[BanglaParaphrase](https://huggingface.co/datasets/csebuetnlp/BanglaParaphrase)**

## Citation

If you use this model, please cite the following paper:

```
@article{akil2022banglaparaphrase,
  title={BanglaParaphrase: A High-Quality Bangla Paraphrase Dataset},
  author={Akil, Ajwad and Sultana, Najrin and Bhattacharjee, Abhik and Shahriyar, Rifat},
  journal={arXiv preprint arXiv:2210.05109},
  year={2022}
}
```
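If you want several candidate paraphrases instead of a single output, beam search with multiple return sequences is a common follow-up. This sketch reuses `model`, `tokenizer`, and `input_ids` from the usage snippet above; the generation parameters are illustrative, not tuned values:

```python
# Generate three candidate paraphrases with beam search
generated = model.generate(
    input_ids,
    num_beams=5,
    num_return_sequences=3,
    max_length=128,
)
for candidate in tokenizer.batch_decode(generated, skip_special_tokens=True):
    print(candidate)
```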