Dataset columns:
- modelId: string (length 5 to 122)
- author: string (length 2 to 42)
- last_modified: unknown
- downloads: int64 (0 to 193M)
- likes: int64 (0 to 6.44k)
- library_name: stringclasses (311 values)
- tags: sequence (length 1 to 4.05k)
- pipeline_tag: stringclasses (50 values)
- createdAt: unknown
- card: string (length 1 to 913k)
lmms-lab/LongVA-7B-DPO
lmms-lab
"2024-06-26T03:32:53Z"
63,610
7
transformers
[ "transformers", "safetensors", "llava_qwen", "text-generation", "conversational", "arxiv:2406.16852", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-06-21T09:00:14Z"
# LongVA <p align="center"> <img src="https://i.postimg.cc/4xFmj8wd/v-niah.png" width="800"> </p> <p align="center"> 🌐 <a href="https://lmms-lab.github.io/posts/longva/" target="_blank">Blog</a> | 📃 <a href="https://arxiv.org/abs/2406.16852" target="_blank">Paper</a> | 🤗 <a href="https://huggingface.co/collections/lmms-lab/longva-667538e09329dbc7ea498057" target="_blank">Hugging Face</a> | 🎥 <a href="https://longva-demo.lmms-lab.com/" target="_blank">Demo</a> </p> Long context capability can **zero-shot transfer** from language to vision. LongVA can process **2000** frames or over **200K** visual tokens. It achieves **state-of-the-art** performance on Video-MME among 7B models. # Usage First follow the instructions in [our repo](https://github.com/EvolvingLMMs-Lab/LongVA) to install the relevant packages. ```python from longva.model.builder import load_pretrained_model from longva.mm_utils import tokenizer_image_token, process_images from longva.constants import IMAGE_TOKEN_INDEX from PIL import Image from decord import VideoReader, cpu import torch import numpy as np # fix seed torch.manual_seed(0) model_path = "lmms-lab/LongVA-7B-DPO" image_path = "local_demo/assets/lmms-eval.png" video_path = "local_demo/assets/dc_demo.mp4" max_frames_num = 16 # you can increase this to several thousand, as long as your GPU memory can handle it :) gen_kwargs = {"do_sample": True, "temperature": 0.5, "top_p": None, "num_beams": 1, "use_cache": True, "max_new_tokens": 1024} tokenizer, model, image_processor, _ = load_pretrained_model(model_path, None, "llava_qwen", device_map="cuda:0") #image input prompt = "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\n<image>\nDescribe the image in detail.<|im_end|>\n<|im_start|>assistant\n" input_ids = tokenizer_image_token(prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors="pt").unsqueeze(0).to(model.device) image = Image.open(image_path).convert("RGB") images_tensor = process_images([image], image_processor, model.config).to(model.device, dtype=torch.float16) with torch.inference_mode(): output_ids = model.generate(input_ids, images=images_tensor, image_sizes=[image.size], modalities=["image"], **gen_kwargs) outputs = tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0].strip() print(outputs) print("-"*50) #video input prompt = "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\n<image>\nGive a detailed caption of the video as if I am blind.<|im_end|>\n<|im_start|>assistant\n" input_ids = tokenizer_image_token(prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors="pt").unsqueeze(0).to(model.device) vr = VideoReader(video_path, ctx=cpu(0)) total_frame_num = len(vr) uniform_sampled_frames = np.linspace(0, total_frame_num - 1, max_frames_num, dtype=int) frame_idx = uniform_sampled_frames.tolist() frames = vr.get_batch(frame_idx).asnumpy() video_tensor = image_processor.preprocess(frames, return_tensors="pt")["pixel_values"].to(model.device, dtype=torch.float16) with torch.inference_mode(): output_ids = model.generate(input_ids, images=[video_tensor], modalities=["video"], **gen_kwargs) outputs = tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0].strip() print(outputs) ``` ## License This project utilizes certain datasets and checkpoints that are subject to their respective original licenses. 
Users must comply with all terms and conditions of these original licenses, including but not limited to the OpenAI Terms of Use for the dataset and the specific licenses for base language models (Qwen2 license). This project does not impose any additional constraints beyond those stipulated in the original licenses. Furthermore, users are reminded to ensure that their use of the dataset and checkpoints is in compliance with all applicable laws and regulations.
diffusers/sdxl-instructpix2pix-768
diffusers
"2023-08-30T09:42:20Z"
63,599
42
diffusers
[ "diffusers", "safetensors", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "instruct-pix2pix", "dataset:timbrooks/instructpix2pix-clip-filtered", "arxiv:2307.01952", "arxiv:2211.09800", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "diffusers:StableDiffusionXLInstructPix2PixPipeline", "region:us" ]
text-to-image
"2023-08-23T05:24:35Z"
--- license: openrail++ base_model: stabilityai/stable-diffusion-xl-base-1.0 tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - instruct-pix2pix inference: false datasets: - timbrooks/instructpix2pix-clip-filtered --- # SDXL InstructPix2Pix (768x768) Instruction fine-tuning of [Stable Diffusion XL (SDXL)](https://hf.co/papers/2307.01952) à la [InstructPix2Pix](https://huggingface.co/papers/2211.09800). Some results below: **Edit instruction**: *"Turn sky into a cloudy one"* ![](https://huggingface.co/datasets/sayakpaul/sample-datasets/resolve/main/sdxl-instructpix2pix-release/0_0_mountain_gs%403.0_igs%401.5_steps%4050.png) **Edit instruction**: *"Make it a picasso painting"* ![](https://huggingface.co/datasets/sayakpaul/sample-datasets/resolve/main/sdxl-instructpix2pix-release/1_1_cyborg_gs%403.0_igs%401.5_steps%4050.png) **Edit instruction**: *"make the person older"* ![](https://huggingface.co/datasets/sayakpaul/sample-datasets/resolve/main/sdxl-instructpix2pix-release/image_three_2.png) ## Usage in 🧨 diffusers Make sure to install the libraries first: ```bash pip install accelerate transformers pip install git+https://github.com/huggingface/diffusers ``` ```python import torch from diffusers import StableDiffusionXLInstructPix2PixPipeline from diffusers.utils import load_image resolution = 768 image = load_image( "https://hf.co/datasets/diffusers/diffusers-images-docs/resolve/main/mountain.png" ).resize((resolution, resolution)) edit_instruction = "Turn sky into a cloudy one" pipe = StableDiffusionXLInstructPix2PixPipeline.from_pretrained( "diffusers/sdxl-instructpix2pix-768", torch_dtype=torch.float16 ).to("cuda") edited_image = pipe( prompt=edit_instruction, image=image, height=resolution, width=resolution, guidance_scale=3.0, image_guidance_scale=1.5, num_inference_steps=30, ).images[0] edited_image.save("edited_image.png") ``` To learn more, refer to the [documentation](https://huggingface.co/docs/diffusers/main/en/api/pipelines/pix2pix). 🚨 Note that this checkpoint is experimental in nature and there's a lot of room for improvement. Please use the "Discussions" tab of this repository to open issues and discuss. 🚨 ## Training We fine-tuned SDXL using the InstructPix2Pix training methodology for 15,000 steps with a fixed learning rate of 5e-6 at an image resolution of 768x768. Our training scripts and other utilities can be found [here](https://github.com/sayakpaul/instructpix2pix-sdxl/tree/b9acc91d6ddf1f2aa2f9012b68216deb40e178f3); they were built on top of our [official training script](https://huggingface.co/docs/diffusers/main/en/training/instructpix2pix). Our training logs are available on Weights and Biases [here](https://wandb.ai/sayakpaul/instruct-pix2pix-sdxl-new/runs/sw53gxmc); refer to that link for details on all the hyperparameters. ### Training data We used this dataset: [timbrooks/instructpix2pix-clip-filtered](https://huggingface.co/datasets/timbrooks/instructpix2pix-clip-filtered). ### Compute One 8xA100 machine. ### Batch size Data parallel with a single-GPU batch size of 8 for a total batch size of 32. ### Mixed precision FP16
saattrupdan/wav2vec2-xls-r-300m-ftspeech
saattrupdan
"2023-09-11T13:27:55Z"
63,548
0
transformers
[ "transformers", "pytorch", "safetensors", "wav2vec2", "automatic-speech-recognition", "da", "dataset:ftspeech", "base_model:facebook/wav2vec2-xls-r-300m", "base_model:finetune:facebook/wav2vec2-xls-r-300m", "license:other", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2022-03-04T14:53:05Z"
--- language: - da license: other datasets: - ftspeech metrics: - wer tasks: - automatic-speech-recognition base_model: facebook/wav2vec2-xls-r-300m model-index: - name: wav2vec2-xls-r-300m-ftspeech results: - task: type: automatic-speech-recognition dataset: name: Danish Common Voice 8.0 type: mozilla-foundation/common_voice_8_0 args: da metrics: - type: wer value: 17.91 - task: type: automatic-speech-recognition dataset: name: Alvenir ASR test dataset type: Alvenir/alvenir_asr_da_eval metrics: - type: wer value: 13.84 --- # XLS-R-300m-FTSpeech ## Model description This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the [FTSpeech dataset](https://ftspeech.github.io/), a dataset of 1,800 hours of transcribed speeches from the Danish parliament. ## Performance The model achieves the following WER scores (lower is better): | **Dataset** | **WER without LM** | **WER with 5-gram LM** | | :---: | ---: | ---: | | [Danish part of Common Voice 8.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0/viewer/da/train) | 20.48 | 17.91 | | [Alvenir test set](https://huggingface.co/datasets/Alvenir/alvenir_asr_da_eval) | 15.46 | 13.84 | ## License Use of this model must adhere to [this license from the Danish Parliament](https://www.ft.dk/da/aktuelt/tv-fra-folketinget/deling-og-rettigheder).
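The card does not include a usage snippet. As a minimal sketch, the checkpoint should load with the standard `transformers` automatic-speech-recognition pipeline; note that `"audio.wav"` is a placeholder path (not a file shipped with the model):

```python
from transformers import pipeline

# Minimal usage sketch: load the fine-tuned Danish model via the standard ASR pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="saattrupdan/wav2vec2-xls-r-300m-ftspeech",
)

# "audio.wav" is a placeholder for a 16 kHz mono recording of Danish speech.
result = asr("audio.wav")
print(result["text"])
```

This runs the acoustic model alone; the "WER with 5-gram LM" numbers in the table above require an external language-model decoder.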
shi-labs/oneformer_ade20k_swin_large
shi-labs
"2023-01-19T14:36:03Z"
63,478
10
transformers
[ "transformers", "pytorch", "oneformer", "vision", "image-segmentation", "universal-image-segmentation", "dataset:scene_parse_150", "arxiv:2211.06220", "license:mit", "endpoints_compatible", "region:us" ]
image-segmentation
"2022-11-15T19:00:56Z"
--- license: mit tags: - vision - image-segmentation - universal-image-segmentation datasets: - scene_parse_150 widget: - src: https://praeclarumjj3.github.io/files/ade20k.jpeg example_title: House - src: https://praeclarumjj3.github.io/files/demo_2.jpg example_title: Airplane - src: https://praeclarumjj3.github.io/files/coco.jpeg example_title: Person --- # OneFormer OneFormer model trained on the ADE20k dataset (large-sized version, Swin backbone). It was introduced in the paper [OneFormer: One Transformer to Rule Universal Image Segmentation](https://arxiv.org/abs/2211.06220) by Jain et al. and first released in [this repository](https://github.com/SHI-Labs/OneFormer). ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/oneformer_teaser.png) ## Model description OneFormer is the first multi-task universal image segmentation framework. It needs to be trained only once, with a single universal architecture, a single model, and a single dataset, to outperform existing specialized models across semantic, instance, and panoptic segmentation tasks. OneFormer uses a task token to condition the model on the task in focus, making the architecture task-guided for training and task-dynamic for inference, all with a single model. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/oneformer_architecture.png) ## Intended uses & limitations You can use this particular checkpoint for semantic, instance and panoptic segmentation. See the [model hub](https://huggingface.co/models?search=oneformer) to look for other fine-tuned versions on a different dataset. ### How to use Here is how to use this model: ```python from transformers import OneFormerProcessor, OneFormerForUniversalSegmentation from PIL import Image import requests url = "https://huggingface.co/datasets/shi-labs/oneformer_demo/resolve/main/ade20k.jpeg" image = Image.open(requests.get(url, stream=True).raw) # Loading a single model for all three tasks processor = OneFormerProcessor.from_pretrained("shi-labs/oneformer_ade20k_swin_large") model = OneFormerForUniversalSegmentation.from_pretrained("shi-labs/oneformer_ade20k_swin_large") # Semantic Segmentation semantic_inputs = processor(images=image, task_inputs=["semantic"], return_tensors="pt") semantic_outputs = model(**semantic_inputs) # pass through image_processor for postprocessing predicted_semantic_map = processor.post_process_semantic_segmentation(semantic_outputs, target_sizes=[image.size[::-1]])[0] # Instance Segmentation instance_inputs = processor(images=image, task_inputs=["instance"], return_tensors="pt") instance_outputs = model(**instance_inputs) # pass through image_processor for postprocessing predicted_instance_map = processor.post_process_instance_segmentation(instance_outputs, target_sizes=[image.size[::-1]])[0]["segmentation"] # Panoptic Segmentation panoptic_inputs = processor(images=image, task_inputs=["panoptic"], return_tensors="pt") panoptic_outputs = model(**panoptic_inputs) # pass through image_processor for postprocessing predicted_panoptic_map = processor.post_process_panoptic_segmentation(panoptic_outputs, target_sizes=[image.size[::-1]])[0]["segmentation"] ``` For more examples, please refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/oneformer). 
### Citation ```bibtex @article{jain2022oneformer, title={{OneFormer: One Transformer to Rule Universal Image Segmentation}}, author={Jitesh Jain and Jiachen Li and MangTik Chiu and Ali Hassani and Nikita Orlov and Humphrey Shi}, journal={arXiv}, year={2022} } ```
flexudy/t5-base-multi-sentence-doctor
flexudy
"2020-12-11T23:33:25Z"
63,419
45
transformers
[ "transformers", "pytorch", "tf", "t5", "text2text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
"2022-03-02T23:29:05Z"
![avatar](sent-banner.png) # Sentence-Doctor Sentence Doctor is a T5 model that attempts to correct errors or mistakes found in sentences. The model works on English, German and French text. ## 1. Problem: Many NLP pipelines depend on tasks like *text extraction libraries, OCR, speech-to-text libraries* and **sentence boundary detection**. As a consequence, errors caused by these tasks in your NLP pipeline can affect the quality of models in applications, especially since models are often trained on **clean** input. ## 2. Solution: Here we provide a model that **attempts** to reconstruct sentences based on their context (surrounding text). The task is pretty straightforward: * `Given an "erroneous" sentence, and its context, reconstruct the "intended" sentence`. ## 3. Use Cases: * Attempt to repair noisy sentences that were extracted with OCR software or text extractors. * Attempt to repair sentence boundaries. * Example (in German): **Input: "und ich bin im**", * Prefix_Context: "Hallo! Mein Name ist John", Postfix_Context: "Januar 1990 geboren." * Output: "John und ich bin im Jahr 1990 geboren" * Possibly sentence-level spelling correction -- although this is not the intended use. * Input: "I went to church **las yesteday**" => Output: "I went to church last Sunday". ## 4. Disclaimer Note how we always emphasize the word *attempt*. The current version of the model was only trained on **150K** sentences from the Tatoeba dataset: https://tatoeba.org/eng (50K per language -- En, Fr, De). Hence, we strongly encourage you to finetune the model on your dataset. We might release a version trained on more data. ## 5. Datasets We generated synthetic data from the Tatoeba dataset: https://tatoeba.org/eng, randomly applying different transformations on words and characters based on some probabilities. The datasets are available in the data folder (where **sentence_doctor_dataset_300K** is a larger dataset with 100K sentences for each language). ## 6. Usage ### 6.1 Preprocessing * Let us assume we have the following text (note that there are no punctuation marks in the text): ```python text = "That is my job I am a medical doctor I save lives" ``` * You decided to extract the sentences and, for some obscure reason, you obtained these sentences: ```python sentences = ["That is my job I a", "m a medical doct", "I save lives"] ``` * You now wish to correct the sentence **"m a medical doct"**. Here is the single preprocessing step for the model: ```python input_text = "repair_sentence: " + sentences[1] + " context: {" + sentences[0] + "}{" + sentences[2] + "} </s>" ``` **Explanation**:</br> * We are telling the model to repair the sentence with the prefix "repair_sentence: " * Then we append the sentence we want to repair, **sentence[1]**, which is "m a medical doct" * Next we give some context to the model. In this case, the context is some text that occurred before the sentence and some text that appeared after the sentence in the original text. * To do that, we append the keyword "context: " * Append **{sentence[0]}** "{That is my job I a}". (Note how it is surrounded by curly braces.) * Append **{sentence[2]}** "{I save lives}". * Finally, we tell the model this is the end of the input with </s>. 
```python print(input_text) # repair_sentence: m a medical doct context: {That is my job I a}{or I save lives} </s> ``` <br/> **The context is optional**, so the input could also be ```repair_sentence: m a medical doct context: {}{} </s>``` ### 6.2 Inference ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("flexudy/t5-base-multi-sentence-doctor") model = AutoModelWithLMHead.from_pretrained("flexudy/t5-base-multi-sentence-doctor") input_text = "repair_sentence: m a medical doct context: {That is my job I a}{or I save lives} </s>" input_ids = tokenizer.encode(input_text, return_tensors="pt") outputs = model.generate(input_ids, max_length=32, num_beams=1) sentence = tokenizer.decode(outputs[0], skip_special_tokens=True, clean_up_tokenization_spaces=True) assert sentence == "I am a medical doctor." ``` ## 7. Fine-tuning We also provide a script `train_any_t5_task.py` that might help you fine-tune any text2text task with T5. We added #TODO comments all over to help you train with ease. For example: ```python # TODO Set your training epochs config.TRAIN_EPOCHS = 3 ``` If you don't want to read the #TODO comments, just pass in your data like this ```python # TODO Where is your data ? Enter the path trainer.start("data/sentence_doctor_dataset_300.csv") ``` and voilà! Please feel free to correct any mistakes in the code and make a pull request. ## 8. Attribution * [Huggingface](https://huggingface.co/) transformer lib for making this possible * Abhishek Kumar Mishra's transformer [tutorial](https://github.com/abhimishra91/transformers-tutorials/blob/master/transformers_summarization_wandb.ipynb) on text summarisation. Our training code is just a modified version of their code. So many thanks. * We finetuned this model from the huggingface hub: WikinewsSum/t5-base-multi-combine-wiki-news. Thanks to the [authors](https://huggingface.co/WikinewsSum) * We also read a lot of work from [Suraj Patil](https://github.com/patil-suraj) * No one has been forgotten, hopefully :)
facebook/FairBERTa
facebook
"2023-03-18T02:00:22Z"
63,325
5
pytorch
[ "pytorch", "roberta", "counterfactual", "perturb", "fairness", "nlp", "demographic", "diverse", "gender", "non-binary", "race", "age", "en", "license:mit", "region:us" ]
null
"2023-01-31T22:41:23Z"
--- language: - en license: - mit library_name: "pytorch" multilinguality: - monolingual pretty_name: FairBERTa tags: - counterfactual - perturb - fairness - nlp - demographic - diverse - gender - non-binary - race - age metrics: - bleu --- # FairBERTa FairBERTa is the first large language model trained on demographically perturbed corpora. Compared to RoBERTa, FairBERTa's fairness is improved, while its performance does not degrade on downstream tasks. - **Repository:** https://github.com/facebookresearch/ResponsibleNLP/ - **Paper:** https://aclanthology.org/2022.emnlp-main.646/ - **Point of Contact:** rebeccaqian@meta.com, ccross@meta.com, douwe@huggingface.co, adinawilliams@meta.com - **License:** MIT ## Model Description FairBERTa is a transformers model pretrained on a large corpus of English data with the masked language modeling (MLM) objective. The model randomly masks 15% of the words in the input sequence, then runs the entire masked sentence through the model, which has to predict the masked words. This is different from traditional recurrent neural networks (RNNs), which usually see the words one after the other, and from autoregressive models like GPT, which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. The model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. ### Model Summary FairBERTa can be finetuned on a variety of downstream tasks. FairBERTa is trained using the FairSeq library, with the same parameters as the RoBERTa-base model. ### Uses The perturber is intended for use by fairness researchers and engineers working on demographic debiasing applications. The perturber is a controllable generation model that, given a word, a target demographic attribute and input text, outputs text where the selected word and associated references are rewritten to the target demographic attribute. Control variables and the input text are separated by a <PERT_SEP> token. ## Training The FairBERTa model was pretrained in a similar manner to [RoBERTa](https://huggingface.co/roberta-base). It was pretrained on 160GB of perturbed datasets. The training data consists of five sources: BookCorpus, English Wikipedia, CC-News, OpenWebText, Stories. We sample chunks of 512-token sequences (the perturber's max context window), select a ## Bias, Risks & Limitations FairBERTa shows improved performance compared to RoBERTa on a variety of fairness metrics. For an in-depth discussion of bias, risks and limitations, see the Results and Limitations sections of [our paper](https://aclanthology.org/2022.emnlp-main.646/). ## Citation ``` @inproceedings{qian-etal-2022-perturbation, title = "Perturbation Augmentation for Fairer {NLP}", author = "Qian, Rebecca and Ross, Candace and Fernandes, Jude and Smith, Eric Michael and Kiela, Douwe and Williams, Adina", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Abu Dhabi, United Arab Emirates", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.emnlp-main.646", pages = "9496--9521", abstract = "Unwanted and often harmful social biases are becoming ever more salient in NLP research, affecting both models and datasets. In this work, we ask whether training on demographically perturbed data leads to fairer language models. 
We collect a large dataset of human annotated text perturbations and train a neural perturbation model, which we show outperforms heuristic alternatives. We find that (i) language models (LMs) pre-trained on demographically perturbed corpora are typically more fair, and (ii) LMs finetuned on perturbed GLUE datasets exhibit less demographic bias on downstream tasks, and (iii) fairness improvements do not come at the expense of performance on downstream tasks. Lastly, we discuss outstanding questions about how best to evaluate the (un)fairness of large language models. We hope that this exploration of neural demographic perturbation will help drive more improvement towards fairer NLP.", } ``` ### Model Card Contact Thanks to [@Rebecca-Qian](https://github.com/Rebecca-Qian) for adding this model.
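The FairBERTa card above lacks a usage snippet. Since the model is pretrained with a RoBERTa-style MLM objective, a minimal fill-mask sketch could look like the following (an assumption that the checkpoint loads with the standard pipeline and auto classes; the example sentence is purely illustrative):

```python
from transformers import pipeline

# Sketch only: assumes the FairBERTa checkpoint works with the standard fill-mask pipeline.
unmasker = pipeline("fill-mask", model="facebook/FairBERTa")

# RoBERTa-style tokenizers use <mask> as the mask token.
for prediction in unmasker("The nurse picked up <mask> bag."):
    print(prediction["token_str"], round(prediction["score"], 4))
```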
timm/maxvit_base_tf_512.in21k_ft_in1k
timm
"2023-05-10T23:59:59Z"
63,293
1
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "dataset:imagenet-21k", "arxiv:2204.01697", "license:apache-2.0", "region:us" ]
image-classification
"2022-12-02T21:50:28Z"
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-1k - imagenet-21k --- # Model card for maxvit_base_tf_512.in21k_ft_in1k An official MaxViT image classification model. Pretrained in TensorFlow on ImageNet-21k (the 21,843-class, Google-specific instance of ImageNet-22k) and fine-tuned on ImageNet-1k by the paper authors. Ported from the official TensorFlow implementation (https://github.com/google-research/maxvit) to PyTorch by Ross Wightman. ### Model Variants in [maxxvit.py](https://github.com/huggingface/pytorch-image-models/blob/main/timm/models/maxxvit.py) MaxxViT covers a number of related model architectures that share a common structure, including: - CoAtNet - Combining MBConv (depthwise-separable) convolutional blocks in early stages with self-attention transformer blocks in later stages. - MaxViT - Uniform blocks across all stages, each containing a MBConv (depthwise-separable) convolution block followed by two self-attention blocks with different partitioning schemes (window followed by grid). - CoAtNeXt - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in CoAtNet. All normalization layers are LayerNorm (no BatchNorm). - MaxxViT - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in MaxViT. All normalization layers are LayerNorm (no BatchNorm). - MaxxViT-V2 - A MaxxViT variation that removes the window block attention, leaving only ConvNeXt blocks and grid attention, with more width to compensate. Aside from the major variants listed above, there are more subtle changes from model to model. Any model name containing the string `rw` is a `timm`-specific config with modelling adjustments made to favour PyTorch eager use. These were created while training initial reproductions of the models, so there are variations. All models containing the string `tf` exactly match TensorFlow-based models by the original paper authors, with weights ported to PyTorch. This covers a number of MaxViT models. The official CoAtNet models were never released. 
## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 119.9 - GMACs: 138.0 - Activations (M): 704.0 - Image size: 512 x 512 - **Papers:** - MaxViT: Multi-Axis Vision Transformer: https://arxiv.org/abs/2204.01697 - **Dataset:** ImageNet-1k - **Pretrain Dataset:** ImageNet-21k ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm import torch img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('maxvit_base_tf_512.in21k_ft_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'maxvit_base_tf_512.in21k_ft_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 64, 256, 256]) # torch.Size([1, 96, 128, 128]) # torch.Size([1, 192, 64, 64]) # torch.Size([1, 384, 32, 32]) # torch.Size([1, 768, 16, 16]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'maxvit_base_tf_512.in21k_ft_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 768, 16, 16) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison ### By Top-1 |model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)| |------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:| |[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22| |[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76| |[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 
50.87| 119.88|138.02| 703.99| |[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15| |[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84| |[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90| |[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95| |[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74| |[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43| |[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64| |[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77| |[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99| |[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22| |[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15| |[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78| |[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90| |[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84| |[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77| |[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59| |[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65| |[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42| |[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35| |[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13| |[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01| |[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38| |[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78| |[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30| |[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17| 
|[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92| |[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60| |[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11| |[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78| |[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47| |[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05| |[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05| |[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92| |[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28| |[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04| |[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73| |[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34| |[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80| |[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41| |[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86| ### By Throughput (samples / sec) |model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)| |------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:| |[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80| |[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41| |[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34| |[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73| |[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04| |[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86| |[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05| |[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92| |[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05| 
|[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28| |[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11| |[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47| |[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13| |[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78| |[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60| |[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92| |[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30| |[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17| |[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22| |[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78| |[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78| |[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38| |[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77| |[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64| |[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01| |[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42| |[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35| |[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65| |[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43| |[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74| |[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59| |[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95| |[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90| |[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90| |[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) 
|86.10|97.76| 88.63| 69.13| 67.26| 383.77| |[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84| |[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84| |[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99| |[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99| |[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76| |[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15| |[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15| |[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22| ## Citation ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ``` ```bibtex @article{tu2022maxvit, title={MaxViT: Multi-Axis Vision Transformer}, author={Tu, Zhengzhong and Talebi, Hossein and Zhang, Han and Yang, Feng and Milanfar, Peyman and Bovik, Alan and Li, Yinxiao}, journal={ECCV}, year={2022}, } ``` ```bibtex @article{dai2021coatnet, title={CoAtNet: Marrying Convolution and Attention for All Data Sizes}, author={Dai, Zihang and Liu, Hanxiao and Le, Quoc V and Tan, Mingxing}, journal={arXiv preprint arXiv:2106.04803}, year={2021} } ```
Jean-Baptiste/camembert-ner
Jean-Baptiste
"2023-06-01T01:32:51Z"
63,261
103
transformers
[ "transformers", "pytorch", "onnx", "safetensors", "camembert", "token-classification", "fr", "dataset:Jean-Baptiste/wikiner_fr", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2022-03-02T23:29:04Z"
--- language: fr datasets: - Jean-Baptiste/wikiner_fr widget: - text: "Je m'appelle jean-baptiste et je vis à montréal" - text: "george washington est allé à washington" license: mit --- # camembert-ner: model fine-tuned from camemBERT for NER task. ## Introduction camembert-ner is a NER model that was fine-tuned from camemBERT on the wikiner-fr dataset (~170,634 sentences). The model was validated on email/chat data and outperformed other models on this type of data specifically. In particular, the model seems to work better on entities that don't start with an upper-case letter. ## Training data Training data was classified as follows: Abbreviation|Description -|- O |Outside of a named entity MISC |Miscellaneous entity PER |Person’s name ORG |Organization LOC |Location ## How to use camembert-ner with HuggingFace ##### Load camembert-ner and its sub-word tokenizer: ```python from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("Jean-Baptiste/camembert-ner") model = AutoModelForTokenClassification.from_pretrained("Jean-Baptiste/camembert-ner") ##### Process text sample (from wikipedia) from transformers import pipeline nlp = pipeline('ner', model=model, tokenizer=tokenizer, aggregation_strategy="simple") nlp("Apple est créée le 1er avril 1976 dans le garage de la maison d'enfance de Steve Jobs à Los Altos en Californie par Steve Jobs, Steve Wozniak et Ronald Wayne14, puis constituée sous forme de société le 3 janvier 1977 à l'origine sous le nom d'Apple Computer, mais pour ses 30 ans et pour refléter la diversification de ses produits, le mot « computer » est retiré le 9 janvier 2015.") [{'entity_group': 'ORG', 'score': 0.9472818374633789, 'word': 'Apple', 'start': 0, 'end': 5}, {'entity_group': 'PER', 'score': 0.9838564991950989, 'word': 'Steve Jobs', 'start': 74, 'end': 85}, {'entity_group': 'LOC', 'score': 0.9831605950991312, 'word': 'Los Altos', 'start': 87, 'end': 97}, {'entity_group': 'LOC', 'score': 0.9834540486335754, 'word': 'Californie', 'start': 100, 'end': 111}, {'entity_group': 'PER', 'score': 0.9841555754343668, 'word': 'Steve Jobs', 'start': 115, 'end': 126}, {'entity_group': 'PER', 'score': 0.9843501806259155, 'word': 'Steve Wozniak', 'start': 127, 'end': 141}, {'entity_group': 'PER', 'score': 0.9841533899307251, 'word': 'Ronald Wayne', 'start': 144, 'end': 157}, {'entity_group': 'ORG', 'score': 0.9468960364659628, 'word': 'Apple Computer', 'start': 243, 'end': 257}] ``` ## Model performances (metric: seqeval) Overall precision|recall|f1 -|-|- 0.8859|0.8971|0.8914 By entity entity|precision|recall|f1 -|-|-|- PER|0.9372|0.9598|0.9483 ORG|0.8099|0.8265|0.8181 LOC|0.8905|0.9005|0.8955 MISC|0.8175|0.8117|0.8146 For those who might be interested, here is a short article on how I used the results of this model to train an LSTM model for signature detection in emails: https://medium.com/@jean-baptiste.polle/lstm-model-for-email-signature-detection-8e990384fefa
google/bigbird-roberta-base
google
"2021-06-02T14:30:54Z"
63,238
46
transformers
[ "transformers", "pytorch", "jax", "big_bird", "pretraining", "en", "dataset:bookcorpus", "dataset:wikipedia", "dataset:cc_news", "arxiv:2007.14062", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2022-03-02T23:29:05Z"
--- language: en license: apache-2.0 datasets: - bookcorpus - wikipedia - cc_news --- # BigBird base model BigBird is a sparse-attention-based transformer that extends Transformer-based models, such as BERT, to much longer sequences. Moreover, BigBird comes with a theoretical understanding of the capabilities of a complete transformer that the sparse model can handle. It is pretrained on English text using a masked language modeling (MLM) objective. It was introduced in this [paper](https://arxiv.org/abs/2007.14062) and first released in this [repository](https://github.com/google-research/bigbird). Disclaimer: The team releasing BigBird did not write a model card for this model, so this model card has been written by the Hugging Face team. ## Model description BigBird relies on **block sparse attention** instead of normal attention (i.e. BERT's attention) and can handle sequences up to a length of 4096 at a much lower compute cost compared to BERT. It has achieved SOTA on various tasks involving very long sequences, such as long-document summarization and question answering with long contexts. ## How to use Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import AutoTokenizer, BigBirdModel # by default it's in `block_sparse` mode with num_random_blocks=3, block_size=64 model = BigBirdModel.from_pretrained("google/bigbird-roberta-base") # you can change `attention_type` to full attention like this: model = BigBirdModel.from_pretrained("google/bigbird-roberta-base", attention_type="original_full") # you can change `block_size` & `num_random_blocks` like this: model = BigBirdModel.from_pretrained("google/bigbird-roberta-base", block_size=16, num_random_blocks=2) tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-base") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ## Training Data This model is pre-trained on four publicly available datasets: **Books**, **CC-News**, **Stories** and **Wikipedia**. It uses the same sentencepiece vocabulary as RoBERTa (which is in turn borrowed from GPT2). ## Training Procedure Documents longer than 4096 tokens were split into multiple documents, and documents much shorter than 4096 were joined. Following the original BERT training, 15% of tokens were masked and the model is trained to predict the masked tokens. The model is warm-started from RoBERTa's checkpoint. ## BibTeX entry and citation info ```tex @misc{zaheer2021big, title={Big Bird: Transformers for Longer Sequences}, author={Manzil Zaheer and Guru Guruganesh and Avinava Dubey and Joshua Ainslie and Chris Alberti and Santiago Ontanon and Philip Pham and Anirudh Ravula and Qifan Wang and Li Yang and Amr Ahmed}, year={2021}, eprint={2007.14062}, archivePrefix={arXiv}, primaryClass={cs.LG} } ```
ayameRushia/bert-base-indonesian-1.5G-sentiment-analysis-smsa
ayameRushia
"2021-12-22T08:52:47Z"
63,171
12
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "id", "dataset:indonlu", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2022-03-02T23:29:05Z"
--- license: mit tags: - generated_from_trainer datasets: - indonlu metrics: - accuracy model-index: - name: bert-base-indonesian-1.5G-finetuned-sentiment-analysis-smsa results: - task: name: Text Classification type: text-classification dataset: name: indonlu type: indonlu args: smsa metrics: - name: Accuracy type: accuracy value: 0.9373015873015873 language: id widget: - text: "Saya mengapresiasi usaha anda" --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-indonesian-1.5G-finetuned-sentiment-analysis-smsa This model is a fine-tuned version of [cahya/bert-base-indonesian-1.5G](https://huggingface.co/cahya/bert-base-indonesian-1.5G) on the indonlu dataset. It achieves the following results on the evaluation set: - Loss: 0.3390 - Accuracy: 0.9373 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2864 | 1.0 | 688 | 0.2154 | 0.9286 | | 0.1648 | 2.0 | 1376 | 0.2238 | 0.9357 | | 0.0759 | 3.0 | 2064 | 0.3351 | 0.9365 | | 0.044 | 4.0 | 2752 | 0.3390 | 0.9373 | | 0.0308 | 5.0 | 3440 | 0.4346 | 0.9365 | | 0.0113 | 6.0 | 4128 | 0.4708 | 0.9365 | | 0.006 | 7.0 | 4816 | 0.5533 | 0.9325 | | 0.0047 | 8.0 | 5504 | 0.5888 | 0.9310 | | 0.0001 | 9.0 | 6192 | 0.5961 | 0.9333 | | 0.0 | 10.0 | 6880 | 0.5992 | 0.9357 | ### Framework versions - Transformers 4.14.1 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
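The auto-generated card above stops at the framework versions without a usage snippet. As a minimal sketch, the checkpoint should load with the standard `transformers` text-classification pipeline (the example sentence is the widget text from the card's metadata):

```python
from transformers import pipeline

# Sketch only: load the fine-tuned Indonesian sentiment model via the standard pipeline.
classifier = pipeline(
    "text-classification",
    model="ayameRushia/bert-base-indonesian-1.5G-sentiment-analysis-smsa",
)

# "Saya mengapresiasi usaha anda" means "I appreciate your effort".
print(classifier("Saya mengapresiasi usaha anda"))
```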
tiiuae/falcon-40b-instruct
tiiuae
"2023-09-29T14:32:27Z"
63,030
1,172
transformers
[ "transformers", "pytorch", "falcon", "text-generation", "custom_code", "en", "dataset:tiiuae/falcon-refinedweb", "arxiv:2205.14135", "arxiv:1911.02150", "arxiv:2005.14165", "arxiv:2104.09864", "arxiv:2306.01116", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-05-25T10:14:36Z"
--- datasets: - tiiuae/falcon-refinedweb language: - en inference: false license: apache-2.0 --- # ✨ Falcon-40B-Instruct **Falcon-40B-Instruct is a 40B parameters causal decoder-only model built by [TII](https://www.tii.ae) based on [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b) and finetuned on a mixture of [Baize](https://github.com/project-baize/baize-chatbot) data. It is made available under the Apache 2.0 license.** *Paper coming soon 😊.* 🤗 To get started with Falcon (inference, finetuning, quantization, etc.), we recommend reading [this great blogpost from HF](https://huggingface.co/blog/falcon)! ## Why use Falcon-40B-Instruct? * **You are looking for a ready-to-use chat/instruct model based on [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b).** * **Falcon-40B is the best open-source model available.** It outperforms [LLaMA](https://github.com/facebookresearch/llama), [StableLM](https://github.com/Stability-AI/StableLM), [RedPajama](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-7B-v0.1), [MPT](https://huggingface.co/mosaicml/mpt-7b), etc. See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). * **It features an architecture optimized for inference**, with FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135)) and multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)). 💬 **This is an instruct model, which may not be ideal for further finetuning.** If you are interested in building your own instruct/chat model, we recommend starting from [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b). 💸 **Looking for a smaller, less expensive model?** [Falcon-7B-Instruct](https://huggingface.co/tiiuae/falcon-7b-instruct) is Falcon-40B-Instruct's little brother! ```python from transformers import AutoTokenizer, AutoModelForCausalLM import transformers import torch model = "tiiuae/falcon-40b-instruct" tokenizer = AutoTokenizer.from_pretrained(model) pipeline = transformers.pipeline( "text-generation", model=model, tokenizer=tokenizer, torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto", ) sequences = pipeline( "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:", max_length=200, do_sample=True, top_k=10, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id, ) for seq in sequences: print(f"Result: {seq['generated_text']}") ``` For fast inference with Falcon, check out [Text Generation Inference](https://github.com/huggingface/text-generation-inference)! Read more in this [blogpost](https://huggingface.co/blog/falcon). You will need **at least 85-100GB of memory** to swiftly run inference with Falcon-40B. # Model Card for Falcon-40B-Instruct ## Model Details ### Model Description - **Developed by:** [https://www.tii.ae](https://www.tii.ae); - **Model type:** Causal decoder-only; - **Language(s) (NLP):** English and French; - **License:** Apache 2.0; - **Finetuned from model:** [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b). ### Model Source - **Paper:** *coming soon*. ## Uses ### Direct Use Falcon-40B-Instruct has been finetuned on a chat dataset. ### Out-of-Scope Use Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful. 
## Bias, Risks, and Limitations Falcon-40B-Instruct is mostly trained on English data, and will not generalize appropriately to other languages. Furthermore, as it is trained on large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online. ### Recommendations We recommend that users of Falcon-40B-Instruct develop guardrails and take appropriate precautions for any production use. ## How to Get Started with the Model ```python from transformers import AutoTokenizer, AutoModelForCausalLM import transformers import torch model = "tiiuae/falcon-40b-instruct" tokenizer = AutoTokenizer.from_pretrained(model) pipeline = transformers.pipeline( "text-generation", model=model, tokenizer=tokenizer, torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto", ) sequences = pipeline( "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:", max_length=200, do_sample=True, top_k=10, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id, ) for seq in sequences: print(f"Result: {seq['generated_text']}") ``` ## Training Details ### Training Data Falcon-40B-Instruct was finetuned on 150M tokens from [Baize](https://github.com/project-baize/baize-chatbot) mixed with 5% of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) data. The data was tokenized with the Falcon-[7B](https://huggingface.co/tiiuae/falcon-7b)/[40B](https://huggingface.co/tiiuae/falcon-40b) tokenizer. ## Evaluation *Paper coming soon.* See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) for early results. ## Technical Specifications For more information about pretraining, see [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b). ### Model Architecture and Objective Falcon-40B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token). The architecture is broadly adapted from the GPT-3 paper ([Brown et al., 2020](https://arxiv.org/abs/2005.14165)), with the following differences: * **Positional embeddings:** rotary ([Su et al., 2021](https://arxiv.org/abs/2104.09864)); * **Attention:** multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)) and FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135)); * **Decoder-block:** parallel attention/MLP with a single layer norm. For multiquery, we are using an internal variant which uses independent key and values per tensor parallel degree. | **Hyperparameter** | **Value** | **Comment** | |--------------------|-----------|----------------------------------------| | Layers | 60 | | | `d_model` | 8192 | | | `head_dim` | 64 | Reduced to optimise for FlashAttention | | Vocabulary | 65024 | | | Sequence length | 2048 | | ### Compute Infrastructure #### Hardware Falcon-40B-Instruct was trained on AWS SageMaker, on 64 A100 40GB GPUs in P4d instances. #### Software Falcon-40B-Instruct was trained on a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO and high-performance Triton kernels (FlashAttention, etc.). ## Citation *Paper coming soon* 😊. 
In the meantime, you can use the following information to cite:

```
@article{falcon40b,
  title={{Falcon-40B}: an open large language model with state-of-the-art performance},
  author={Almazrouei, Ebtesam and Alobeidli, Hamza and Alshamsi, Abdulaziz and Cappelli, Alessandro and Cojocaru, Ruxandra and Debbah, Merouane and Goffinet, Etienne and Heslow, Daniel and Launay, Julien and Malartic, Quentin and Noune, Badreddine and Pannier, Baptiste and Penedo, Guilherme},
  year={2023}
}
```

To learn more about the pretraining dataset, see the 📓 [RefinedWeb paper](https://arxiv.org/abs/2306.01116).

```
@article{refinedweb,
  title={The {R}efined{W}eb dataset for {F}alcon {LLM}: outperforming curated corpora with web data, and web data only},
  author={Guilherme Penedo and Quentin Malartic and Daniel Hesslow and Ruxandra Cojocaru and Alessandro Cappelli and Hamza Alobeidli and Baptiste Pannier and Ebtesam Almazrouei and Julien Launay},
  journal={arXiv preprint arXiv:2306.01116},
  eprint={2306.01116},
  eprinttype={arXiv},
  url={https://arxiv.org/abs/2306.01116},
  year={2023}
}
```

To cite the [Baize](https://github.com/project-baize/baize-chatbot) instruction dataset used for this model:

```
@article{xu2023baize,
  title={Baize: An Open-Source Chat Model with Parameter-Efficient Tuning on Self-Chat Data},
  author={Xu, Canwen and Guo, Daya and Duan, Nan and McAuley, Julian},
  journal={arXiv preprint arXiv:2304.01196},
  year={2023}
}
```

## License

Falcon-40B-Instruct is made available under the Apache 2.0 license.

## Contact

falconllm@tii.ae
unslothai/colabpro
unslothai
"2024-07-06T22:41:13Z"
63,002
0
transformers
[ "transformers", "safetensors", "llama", "feature-extraction", "text-generation-inference", "endpoints_compatible", "region:us" ]
feature-extraction
"2024-03-31T16:39:57Z"
---
{}
---

We log statistics to see if any environments are breaking.
EleutherAI/pythia-2.8b
EleutherAI
"2023-06-09T00:35:37Z"
62,901
28
transformers
[ "transformers", "pytorch", "safetensors", "gpt_neox", "text-generation", "causal-lm", "pythia", "en", "dataset:EleutherAI/pile", "arxiv:2304.01373", "arxiv:2101.00027", "arxiv:2201.07311", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2023-02-13T14:37:12Z"
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
license: apache-2.0
datasets:
- EleutherAI/pile
---

The *Pythia Scaling Suite* is a collection of models developed to facilitate interpretability research [(see paper)](https://arxiv.org/pdf/2304.01373.pdf). It contains two sets of eight models of sizes 70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two models: one trained on the Pile, and one trained on the Pile after the dataset has been globally deduplicated. All 8 model sizes are trained on the exact same data, in the exact same order. We also provide 154 intermediate checkpoints per model, hosted on Hugging Face as branches.

The Pythia model suite was deliberately designed to promote scientific research on large language models, especially interpretability research. Despite not centering downstream performance as a design goal, we find the models <a href="#evaluations">match or exceed</a> the performance of similar and same-sized models, such as those in the OPT and GPT-Neo suites.

<details>
<summary style="font-weight:600">Details on previous early release and naming convention.</summary>

Previously, we released an early version of the Pythia suite to the public. However, we decided to retrain the model suite to address a few hyperparameter discrepancies. This model card <a href="#changelog">lists the changes</a>; see appendix B in the Pythia paper for further discussion. We found no difference in benchmark performance between the two Pythia versions. The old models are [still available](https://huggingface.co/models?other=pythia_v0), but we suggest the retrained suite if you are just starting to use Pythia.<br>
**This is the current release.**

Please note that all models in the *Pythia* suite were renamed in January 2023. For clarity, a <a href="#naming-convention-and-parameter-count">table comparing the old and new names</a> is provided in this model card, together with exact parameter counts.
</details>
<br>

# Pythia-2.8B

## Model Details

- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia) for training procedure, config files, and details on how to use. [See paper](https://arxiv.org/pdf/2304.01373.pdf) for more evals and implementation details.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`. Please read the existing *Pythia* documentation before asking about it in the EleutherAI Discord. For general correspondence: [contact@eleuther.ai](mailto:contact@eleuther.ai).
<figure>

| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 2M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 2M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 2M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |

<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and non-deduped models of a given size have the same hyperparameters. “Equivalent” models have <b>exactly</b> the same architecture, and the same number of non-embedding parameters.</figcaption>
</figure>

## Uses and Limitations

### Intended Use

The primary intended use of Pythia is research on the behavior, functionality, and limitations of large language models. This suite is intended to provide a controlled setting for performing scientific experiments. We also provide 154 checkpoints per model: initial `step0`, 10 log-spaced checkpoints `step{1,2,4...512}`, and 143 evenly-spaced checkpoints from `step1000` to `step143000`. These checkpoints are hosted on Hugging Face as branches. Note that branch `143000` corresponds exactly to the model checkpoint on the `main` branch of each model.

You may also further fine-tune and adapt Pythia-2.8B for deployment, as long as your use is in accordance with the Apache 2.0 license. Pythia models work with the Hugging Face [Transformers Library](https://huggingface.co/docs/transformers/index). If you decide to use pre-trained Pythia-2.8B as a basis for your fine-tuned model, please conduct your own risk and bias assessment.

### Out-of-scope use

The Pythia Suite is **not** intended for deployment. It is not in itself a product and cannot be used for human-facing interactions. For example, the model may generate harmful or offensive text. Please evaluate the risks associated with your particular use case.

Pythia models are English-language only, and are not suitable for translation or generating text in other languages.

Pythia-2.8B has not been fine-tuned for downstream contexts in which language models are commonly deployed, such as writing genre prose, or commercial chatbots. This means Pythia-2.8B will **not** respond to a given prompt the way a product like ChatGPT does. This is because, unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement Learning from Human Feedback (RLHF) to better “follow” human instructions.

### Limitations and biases

The core functionality of a large language model is to take a string of text and predict the next token. The token deemed statistically most likely by the model need not produce the most “accurate” text. Never rely on Pythia-2.8B to produce factually accurate output.

This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a discussion of documented biases with regards to gender, religion, and race. Pythia-2.8B may produce socially unacceptable or undesirable text, *even if* the prompt itself does not include anything explicitly offensive.

If you plan on using text generated through, for example, the Hosted Inference API, we recommend having a human curate the outputs of this language model before presenting it to other people. Please inform your audience that the text was generated by Pythia-2.8B.

### Quickstart

Pythia models can be loaded and used via the following code, demonstrated here for the third `pythia-70m-deduped` checkpoint:

```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer

model = GPTNeoXForCausalLM.from_pretrained(
    "EleutherAI/pythia-70m-deduped",
    revision="step3000",
    cache_dir="./pythia-70m-deduped/step3000",
)

tokenizer = AutoTokenizer.from_pretrained(
    "EleutherAI/pythia-70m-deduped",
    revision="step3000",
    cache_dir="./pythia-70m-deduped/step3000",
)

inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```

Revision/branch `step143000` corresponds exactly to the model checkpoint on the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on GitHub](https://github.com/EleutherAI/pythia).

## Training

### Training data

[The Pile](https://pile.eleuther.ai/) is an 825GiB general-purpose dataset in English. It was created by EleutherAI specifically for training large language models. It contains texts from 22 diverse sources, roughly broken down into five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl), prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and miscellaneous (e.g. GitHub, Enron Emails). See [the Pile paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources, methodology, and a discussion of ethical implications. Consult [the datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation about the Pile and its component datasets. The Pile can be downloaded from the [official website](https://pile.eleuther.ai/), or from a [community mirror](https://the-eye.eu/public/AI/pile/).<br>
The Pile was **not** deduplicated before being used to train Pythia-2.8B.

### Training procedure

All models were trained on the exact same data, in the exact same order. Each model saw 299,892,736,000 tokens during training, and 143 checkpoints for each model are saved every 2,097,152,000 tokens, spaced evenly throughout training, from `step1000` to `step143000` (which is the same as `main`). In addition, we also provide frequent early checkpoints: `step0` and `step{1,2,4...512}`. This corresponds to training for just under 1 epoch on the Pile for non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.

All *Pythia* models trained for 143000 steps at a batch size of 2M (2,097,152 tokens).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training procedure, including [how to reproduce it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b).

## Evaluations

All 16 *Pythia* models were evaluated using the [LM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness).
You can access the results by model and step at `results/json/*` in the [GitHub repository](https://github.com/EleutherAI/pythia/tree/main/results/json/).<br>
Expand the sections below to see plots of evaluation results for all Pythia and Pythia-deduped models compared with OPT and BLOOM.

<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai_v1.png" style="width:auto"/>
</details>

<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa_v1.png" style="width:auto"/>
</details>

<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande_v1.png" style="width:auto"/>
</details>

<details>
<summary>AI2 Reasoning Challenge—Easy Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_easy_v1.png" style="width:auto"/>
</details>

<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq_v1.png" style="width:auto"/>
</details>

## Changelog

This section compares differences between previously released [Pythia v0](https://huggingface.co/models?other=pythia_v0) and the current models. See Appendix B of the Pythia paper for further discussion of these changes and the motivation behind them. We found that retraining Pythia had no impact on benchmark performance.

- All model sizes are now trained with uniform batch size of 2M tokens. Previously, the models of size 160M, 410M, and 1.4B parameters were trained with batch sizes of 4M tokens.
- We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,128,256,512} in addition to every 1000 training steps.
- Flash Attention was used in the new retrained suite.
- We remedied a minor inconsistency that existed in the original suite: all models of size 2.8B parameters or smaller had a learning rate (LR) schedule which decayed to a minimum LR of 10% of the starting LR, but the 6.9B and 12B models all used an LR schedule which decayed to a minimum LR of 0. In the redone training runs, we rectified this inconsistency: all models are now trained with LR decaying to a minimum of 0.1× their maximum LR.

### Naming convention and parameter count

*Pythia* models were renamed in January 2023. It is possible that the old naming convention still persists in some documentation by accident. The current naming convention (70M, 160M, etc.) is based on total parameter count.

<figure style="width:32em">

| current Pythia suffix | old suffix | total params   | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M                   | 19M        | 70,426,624     | 18,915,328           |
| 160M                  | 125M       | 162,322,944    | 85,056,000           |
| 410M                  | 350M       | 405,334,016    | 302,311,424          |
| 1B                    | 800M       | 1,011,781,632  | 805,736,448          |
| 1.4B                  | 1.3B       | 1,414,647,808  | 1,208,602,624        |
| 2.8B                  | 2.7B       | 2,775,208,960  | 2,517,652,480        |
| 6.9B                  | 6.7B       | 6,857,302,016  | 6,444,163,072        |
| 12B                   | 13B        | 11,846,072,320 | 11,327,027,200       |
</figure>
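For scripted access, the checkpoint branch names described above can be enumerated programmatically. A minimal sketch, assuming the standard `step{N}` branch naming used throughout this card:

```python
# Enumerate all 154 Pythia checkpoint branch names:
# step0, ten log-spaced early checkpoints, then one every 2,097,152,000 tokens.
log_spaced = [2 ** i for i in range(10)]        # 1, 2, 4, ..., 512
evenly_spaced = list(range(1000, 144000, 1000)) # 1000, 2000, ..., 143000
branches = ["step0"] + [f"step{n}" for n in log_spaced + evenly_spaced]
assert len(branches) == 154                     # matches the count stated above
print(branches[:5], "...", branches[-1])
```

Any of these names can be passed as the `revision` argument in the quickstart snippet above.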
trl-internal-testing/tiny-random-idefics2
trl-internal-testing
"2024-08-16T10:27:51Z"
62,245
0
transformers
[ "transformers", "safetensors", "idefics2", "pretraining", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-18T16:20:20Z"
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

```python
from transformers import Idefics2Config, AutoProcessor, MistralConfig, Idefics2ForConditionalGeneration
from transformers.models.idefics2.configuration_idefics2 import Idefics2VisionConfig

config = Idefics2Config(
    text_config=MistralConfig(
        max_position_embeddings=4096 * 8,
        vocab_size=32003,
        hidden_size=4 * 8,
        num_attention_heads=8,
        intermediate_size=16,
        num_hidden_layers=2,
    ),
    vision_config=Idefics2VisionConfig(
        hidden_size=8 * 4,
        num_attention_heads=4,
        num_hidden_layers=2,
        intermediate_size=16,
    ),
)
model = Idefics2ForConditionalGeneration(config=config)
processor = AutoProcessor.from_pretrained("HuggingFaceM4/idefics2-8b")
model.push_to_hub("trl-internal-testing/tiny-random-idefics2")
processor.push_to_hub("trl-internal-testing/tiny-random-idefics2")
```

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure.
-->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
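Since the card template above is unfilled, the creation script at the top of this card is the main documentation. A minimal loading sketch for the checkpoint that script pushes:

```python
from transformers import Idefics2ForConditionalGeneration

# Load the tiny randomly initialized checkpoint; useful only for fast CI-style tests.
model = Idefics2ForConditionalGeneration.from_pretrained(
    "trl-internal-testing/tiny-random-idefics2"
)
print(f"{sum(p.numel() for p in model.parameters()):,} parameters")
```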
lmsys/vicuna-13b-v1.5
lmsys
"2024-03-17T21:09:21Z"
62,215
206
transformers
[ "transformers", "pytorch", "llama", "text-generation", "arxiv:2307.09288", "arxiv:2306.05685", "license:llama2", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-07-29T04:44:46Z"
---
inference: false
license: llama2
---

# Vicuna Model Card

## Model Details

Vicuna is a chat assistant trained by fine-tuning Llama 2 on user-shared conversations collected from ShareGPT.

- **Developed by:** [LMSYS](https://lmsys.org/)
- **Model type:** An auto-regressive language model based on the transformer architecture
- **License:** Llama 2 Community License Agreement
- **Finetuned from model:** [Llama 2](https://arxiv.org/abs/2307.09288)

### Model Sources

- **Repository:** https://github.com/lm-sys/FastChat
- **Blog:** https://lmsys.org/blog/2023-03-30-vicuna/
- **Paper:** https://arxiv.org/abs/2306.05685
- **Demo:** https://chat.lmsys.org/

## Uses

The primary use of Vicuna is research on large language models and chatbots. The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.

## How to Get Started with the Model

- Command line interface: https://github.com/lm-sys/FastChat#vicuna-weights
- APIs (OpenAI API, Huggingface API): https://github.com/lm-sys/FastChat/tree/main#api

## Training Details

Vicuna v1.5 is fine-tuned from Llama 2 with supervised instruction fine-tuning. The training data is around 125K conversations collected from ShareGPT.com. See more details in the "Training Details of Vicuna Models" section in the appendix of this [paper](https://arxiv.org/pdf/2306.05685.pdf).

## Evaluation

![Evaluation Results](https://github.com/lm-sys/lm-sys.github.io/blob/main/public/images/webdata/vicuna_v1.5_eval.png?raw=true)

Vicuna is evaluated with standard benchmarks, human preference, and LLM-as-a-judge. See more details in this [paper](https://arxiv.org/pdf/2306.05685.pdf) and [leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard).

## Difference between different versions of Vicuna

See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md)
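For a quick local test with plain `transformers` (the FastChat CLI above remains the recommended path), a minimal sketch; the prompt uses the commonly cited Vicuna v1.5 conversation template, which you should verify against FastChat's conversation templates:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lmsys/vicuna-13b-v1.5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

# Assumed Vicuna v1.5-style prompt; see FastChat for the canonical template.
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: What should I see in Paris? ASSISTANT:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
# Decode only the newly generated tokens after the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```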
google/vit-large-patch16-224
google
"2022-06-23T07:50:15Z"
62,195
23
transformers
[ "transformers", "pytorch", "tf", "jax", "vit", "image-classification", "vision", "dataset:imagenet-1k", "dataset:imagenet-21k", "arxiv:2010.11929", "arxiv:2006.03677", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2022-03-02T23:29:05Z"
---
license: apache-2.0
tags:
- image-classification
- vision
datasets:
- imagenet-1k
- imagenet-21k
---

# Vision Transformer (large-sized model)

Vision Transformer (ViT) model pre-trained on ImageNet-21k (14 million images, 21,843 classes) at resolution 224x224, and fine-tuned on ImageNet 2012 (1 million images, 1,000 classes) at resolution 224x224. It was introduced in the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Dosovitskiy et al. and first released in [this repository](https://github.com/google-research/vision_transformer). However, the weights were converted from the [timm repository](https://github.com/rwightman/pytorch-image-models) by Ross Wightman, who already converted the weights from JAX to PyTorch. Credits go to him.

Disclaimer: The team releasing ViT did not write a model card for this model so this model card has been written by the Hugging Face team.

## Model description

The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. Next, the model was fine-tuned on ImageNet (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, at the same resolution, 224x224.

Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder.

By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image.

## Intended uses & limitations

You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=google/vit) to look for fine-tuned versions on a task that interests you.

### How to use

Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:

```python
from transformers import ViTFeatureExtractor, ViTForImageClassification
from PIL import Image
import requests

url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-large-patch16-224')
model = ViTForImageClassification.from_pretrained('google/vit-large-patch16-224')

inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```

Currently, both the feature extractor and model support PyTorch. TensorFlow and JAX/FLAX are coming soon, and the API of ViTFeatureExtractor might change.
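The feature extractor above resizes images to 224x224 and normalizes with mean and standard deviation 0.5 per channel (see the Training procedure section below). A minimal torchvision sketch of an equivalent transform, reusing the `image` from the snippet above:

```python
from torchvision import transforms

# Equivalent of the default preprocessing for this checkpoint.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),  # HWC uint8 -> CHW float in [0, 1]
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),  # maps to [-1, 1]
])
pixel_values = preprocess(image).unsqueeze(0)  # same shape as inputs["pixel_values"]
```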
## Training data

The ViT model was pretrained on [ImageNet-21k](http://www.image-net.org/), a dataset consisting of 14 million images and 21k classes, and fine-tuned on [ImageNet](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes.

## Training procedure

### Preprocessing

The exact details of preprocessing of images during training/validation can be found [here](https://github.com/google-research/vision_transformer/blob/master/vit_jax/input_pipeline.py). Images are resized/rescaled to the same resolution (224x224) and normalized across the RGB channels with mean (0.5, 0.5, 0.5) and standard deviation (0.5, 0.5, 0.5).

### Pretraining

The model was trained on TPUv3 hardware (8 cores). All model variants are trained with a batch size of 4096 and learning rate warmup of 10k steps. For ImageNet, the authors found it beneficial to additionally apply gradient clipping at global norm 1. Pre-training resolution is 224.

## Evaluation results

For evaluation results on several image classification benchmarks, we refer to tables 2 and 5 of the original paper. Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance.

### BibTeX entry and citation info

```bibtex
@misc{wu2020visual,
      title={Visual Transformers: Token-based Image Representation and Processing for Computer Vision},
      author={Bichen Wu and Chenfeng Xu and Xiaoliang Dai and Alvin Wan and Peizhao Zhang and Zhicheng Yan and Masayoshi Tomizuka and Joseph Gonzalez and Kurt Keutzer and Peter Vajda},
      year={2020},
      eprint={2006.03677},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```

```bibtex
@inproceedings{deng2009imagenet,
  title={Imagenet: A large-scale hierarchical image database},
  author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li},
  booktitle={2009 IEEE conference on computer vision and pattern recognition},
  pages={248--255},
  year={2009},
  organization={Ieee}
}
```
neuralmagic/Meta-Llama-3.1-70B-Instruct-quantized.w4a16
neuralmagic
"2024-08-13T21:30:31Z"
62,165
13
transformers
[ "transformers", "safetensors", "llama", "text-generation", "int4", "vllm", "conversational", "en", "de", "fr", "it", "pt", "hi", "es", "th", "arxiv:2210.17323", "base_model:meta-llama/Meta-Llama-3.1-70B-Instruct", "base_model:quantized:meta-llama/Meta-Llama-3.1-70B-Instruct", "license:llama3.1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "gptq", "region:us" ]
text-generation
"2024-07-31T17:28:54Z"
---
tags:
- int4
- vllm
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
pipeline_tag: text-generation
license: llama3.1
base_model: meta-llama/Meta-Llama-3.1-70B-Instruct
---

# Meta-Llama-3.1-70B-Instruct-quantized.w4a16

## Model Overview

- **Model Architecture:** Meta-Llama-3
  - **Input:** Text
  - **Output:** Text
- **Model Optimizations:**
  - **Weight quantization:** INT4
- **Intended Use Cases:** Intended for commercial and research use in English. Similarly to [Meta-Llama-3.1-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-70B-Instruct), this model is intended for assistant-like chat.
- **Out-of-scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English.
- **Release Date:** 7/26/2024
- **Version:** 1.0
- **License(s):** Llama3.1
- **Model Developers:** Neural Magic

Quantized version of [Meta-Llama-3.1-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-70B-Instruct). It achieves an average score of 78.54 on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) benchmark (version 1), whereas the unquantized model achieves 78.67.

### Model Optimizations

This model was obtained by quantizing the weights of [Meta-Llama-3.1-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-70B-Instruct) to INT4 data type. This optimization reduces the number of bits per parameter from 16 to 4, reducing the disk size and GPU memory requirements by approximately 75%.

Only the weights of the linear operators within transformers blocks are quantized. Symmetric per-channel quantization is applied, in which a linear scaling per output dimension maps the INT4 and floating point representations of the quantized weights. The [GPTQ](https://arxiv.org/abs/2210.17323) algorithm is applied for quantization, as implemented in the [AutoGPTQ](https://github.com/AutoGPTQ/AutoGPTQ) library. GPTQ used a 1% damping factor and 512 sequences of 8,192 random tokens.

## Deployment

This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.

```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "neuralmagic/Meta-Llama-3.1-70B-Instruct-quantized.w4a16"
number_gpus = 1
max_model_len = 8192

sampling_params = SamplingParams(temperature=0.6, top_p=0.9, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

prompts = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)

llm = LLM(model=model_id, tensor_parallel_size=number_gpus, max_model_len=max_model_len)

outputs = llm.generate(prompts, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
```

vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.

## Creation

This model was created by applying the [AutoGPTQ](https://github.com/AutoGPTQ/AutoGPTQ) library as presented in the code snippet below. Although AutoGPTQ was used for this particular model, Neural Magic is transitioning to using [llm-compressor](https://github.com/vllm-project/llm-compressor), which supports several quantization schemes and models not supported by AutoGPTQ.
```python
from transformers import AutoTokenizer
from datasets import load_dataset
from llmcompressor.transformers import SparseAutoModelForCausalLM, oneshot
from llmcompressor.modifiers.quantization import GPTQModifier

model_id = "meta-llama/Meta-Llama-3.1-70B-Instruct"

num_samples = 512
max_seq_len = 8192

tokenizer = AutoTokenizer.from_pretrained(model_id)

# Wrap each calibration example in a simple instruction template.
preprocess_fn = lambda example: {"text": "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n{text}".format_map(example)}

dataset_name = "neuralmagic/LLM_compression_calibration"
dataset = load_dataset(dataset_name, split="train")
ds = dataset.shuffle().select(range(num_samples))
ds = ds.map(preprocess_fn)

recipe = GPTQModifier(
  targets="Linear",
  scheme="W4A16",
  ignore=["lm_head"],
  dampening_frac=0.01,
)

model = SparseAutoModelForCausalLM.from_pretrained(
  model_id,
  device_map="auto",
  trust_remote_code=True,
)

oneshot(
  model=model,
  dataset=ds,
  recipe=recipe,
  max_seq_length=max_seq_len,
  num_calibration_samples=num_samples,
)

model.save_pretrained("Meta-Llama-3.1-70B-Instruct-quantized.w4a16")
```

## Evaluation

The model was evaluated on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) leaderboard tasks (version 1) with the [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/383bbd54bc621086e05aa1b030d8d4d5635b25e6) (commit 383bbd54bc621086e05aa1b030d8d4d5635b25e6) and the [vLLM](https://docs.vllm.ai/en/stable/) engine, using the following command:

```
lm_eval \
  --model vllm \
  --model_args pretrained="neuralmagic/Meta-Llama-3.1-70B-Instruct-quantized.w4a16",dtype=auto,gpu_memory_utilization=0.4,add_bos_token=True,max_model_len=4096,tensor_parallel_size=1 \
  --tasks openllm \
  --batch_size auto
```

### Accuracy

#### Open LLM Leaderboard evaluation scores

<table>
  <tr>
    <td><strong>Benchmark</strong></td>
    <td><strong>Meta-Llama-3.1-70B-Instruct</strong></td>
    <td><strong>hugging-quants/Meta-Llama-3.1-70B-Instruct-AWQ-INT4</strong></td>
    <td><strong>Meta-Llama-3.1-70B-Instruct-quantized.w4a16 (this model)</strong></td>
    <td><strong>Recovery (this model)</strong></td>
  </tr>
  <tr>
    <td>MMLU (5-shot)</td>
    <td>82.21</td>
    <td>81.42</td>
    <td>81.84</td>
    <td>99.55%</td>
  </tr>
  <tr>
    <td>ARC Challenge (25-shot)</td>
    <td>70.65</td>
    <td>70.13</td>
    <td>70.05</td>
    <td>99.15%</td>
  </tr>
  <tr>
    <td>GSM-8K (5-shot, strict-match)</td>
    <td>87.95</td>
    <td>90.59</td>
    <td>89.84</td>
    <td>102.15%</td>
  </tr>
  <tr>
    <td>Hellaswag (10-shot)</td>
    <td>86.33</td>
    <td>86.23</td>
    <td>86.24</td>
    <td>99.90%</td>
  </tr>
  <tr>
    <td>Winogrande (5-shot)</td>
    <td>85.00</td>
    <td>84.53</td>
    <td>84.53</td>
    <td>99.45%</td>
  </tr>
  <tr>
    <td>TruthfulQA (0-shot)</td>
    <td>59.90</td>
    <td>59.62</td>
    <td>58.74</td>
    <td>98.06%</td>
  </tr>
  <tr>
    <td><strong>Average</strong></td>
    <td><strong>78.67</strong></td>
    <td><strong>78.75</strong></td>
    <td><strong>78.54</strong></td>
    <td><strong>99.83%</strong></td>
  </tr>
</table>
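To illustrate the symmetric per-channel scheme described under Model Optimizations above, here is a minimal NumPy sketch (illustrative only; the released weights were produced with GPTQ, which additionally minimizes layer-wise reconstruction error):

```python
import numpy as np

W = np.random.randn(4, 8).astype(np.float32)         # [out_features, in_features]
qmax = 7                                             # symmetric INT4 range is [-8, 7]
scale = np.abs(W).max(axis=1, keepdims=True) / qmax  # one scale per output channel
W_int4 = np.clip(np.round(W / scale), -8, 7)         # quantized integer weights
W_deq = W_int4 * scale                               # floating-point reconstruction
print("max abs error:", np.abs(W - W_deq).max())
```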
microsoft/Phi-3-medium-4k-instruct
microsoft
"2024-08-20T19:57:51Z"
61,951
207
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "nlp", "code", "conversational", "custom_code", "multilingual", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-05-07T15:27:19Z"
---
license: mit
license_link: https://huggingface.co/microsoft/Phi-3-medium-4k-instruct/resolve/main/LICENSE
language:
- multilingual
pipeline_tag: text-generation
tags:
- nlp
- code
inference:
  parameters:
    temperature: 0.7
widget:
- messages:
  - role: user
    content: Can you provide ways to eat combinations of bananas and dragonfruits?
---

🎉 **Phi-3.5**: [[mini-instruct]](https://huggingface.co/microsoft/Phi-3.5-mini-instruct); [[MoE-instruct]](https://huggingface.co/microsoft/Phi-3.5-MoE-instruct); [[vision-instruct]](https://huggingface.co/microsoft/Phi-3.5-vision-instruct)

## Model Summary

The Phi-3-Medium-4K-Instruct is a 14B parameters, lightweight, state-of-the-art open model trained with the Phi-3 datasets, which include both synthetic data and filtered, publicly available website data, with a focus on high-quality and reasoning-dense properties. The model belongs to the Phi-3 family, Medium version, and comes in two variants, [4K](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct), which denote the context length (in tokens) each can support.

The model has undergone a post-training process that incorporates both supervised fine-tuning and direct preference optimization for instruction following and safety measures. When assessed against benchmarks testing common sense, language understanding, math, code, long context and logical reasoning, Phi-3-Medium-4K-Instruct showcased robust, state-of-the-art performance among models of the same size and the next size up.

Resources and Technical Documentation:

+ [Phi-3 Microsoft Blog](https://aka.ms/Phi-3Build2024)
+ [Phi-3 Technical Report](https://aka.ms/phi3-tech-report)
+ [Phi-3 on Azure AI Studio](https://aka.ms/phi3-azure-ai)
+ [Phi-3 Cookbook](https://github.com/microsoft/Phi-3CookBook)

| | Short Context | Long Context |
| ------- | ------------- | ------------ |
| Mini | 4K [[HF]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-onnx) ; [[GGUF]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-gguf) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct-onnx)|
| Small | 8K [[HF]](https://huggingface.co/microsoft/Phi-3-small-8k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-small-8k-instruct-onnx-cuda) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-small-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-small-128k-instruct-onnx-cuda)|
| Medium | 4K [[HF]](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct-onnx-cuda) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct-onnx-cuda)|
| Vision | | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-vision-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-vision-128k-instruct-onnx-cuda)|

## Intended Uses

**Primary use cases**

The model is intended for broad commercial and research use in English.
The model provides uses for general purpose AI systems and applications which require:

1) Memory/compute constrained environments
2) Latency bound scenarios
3) Strong reasoning (especially code, math and logic)

Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features.

**Use case considerations**

Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case.

Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.

## How to Use

Phi-3-Medium-4K-Instruct has been integrated in the development version (4.40.2) of `transformers`. Until the official version is released through `pip`, ensure that you are doing one of the following:

* When loading the model, ensure that `trust_remote_code=True` is passed as an argument of the `from_pretrained()` function.
* Update your local `transformers` to the development version: `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers`. The previous command is an alternative to cloning and installing from the source.

The current `transformers` version can be verified with: `pip list | grep transformers`.

Phi-3-Medium-4K-Instruct is also available in [Azure AI Studio](https://aka.ms/phi3-azure-ai).

### Tokenizer

Phi-3-Medium-4K-Instruct supports a vocabulary size of up to `32064` tokens. The [tokenizer files](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct/blob/main/added_tokens.json) already provide placeholder tokens that can be used for downstream fine-tuning, but they can also be extended up to the model's vocabulary size.

### Chat Format

Given the nature of the training data, the Phi-3-Medium-4K-Instruct model is best suited for prompts using the chat format as follows. You can provide the prompt as a question with a generic template as follows:

```markdown
<|user|>\nQuestion <|end|>\n<|assistant|>
```

For example:

```markdown
<|user|>
How to explain Internet for a medieval knight?<|end|>
<|assistant|>
```

where the model generates the text after `<|assistant|>`. In case of a few-shot prompt, the prompt can be formatted as the following:

```markdown
<|user|>
I am going to Paris, what should I see?<|end|>
<|assistant|>
Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris:\n\n1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city.\n2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa.\n3. Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows.\n\nThese are just a few of the many attractions that Paris has to offer.
With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world.<|end|>
<|user|>
What is so great about #1?<|end|>
<|assistant|>
```

### Sample inference code

This code snippet shows how to quickly get started with running the model on a GPU:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

torch.random.manual_seed(0)

model_id = "microsoft/Phi-3-medium-4k-instruct"
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [
    {"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"},
    {"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."},
    {"role": "user", "content": "What about solving a 2x + 3 = 7 equation?"},
]

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
)

generation_args = {
    "max_new_tokens": 500,
    "return_full_text": False,
    "temperature": 0.0,
    "do_sample": False,
}

output = pipe(messages, **generation_args)
print(output[0]['generated_text'])
```

*Some applications/frameworks might not include a BOS token (`<s>`) at the start of the conversation. Please ensure that it is included since it provides more reliable results.*

## Responsible AI Considerations

Like other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:

+ Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English.
+ Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.
+ Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case.
+ Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.
+ Limited Scope for Code: The majority of Phi-3 training data is based in Python and uses common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.

Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.).
Important areas for consideration include:

+ Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.
+ High-Risk Scenarios: Developers should assess the suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.
+ Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).
+ Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.
+ Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.

## Training

### Model

* Architecture: Phi-3-Medium-4K-Instruct has 14B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with supervised fine-tuning (SFT) and direct preference optimization (DPO) to ensure alignment with human preferences and safety guidelines.
* Inputs: Text. It is best suited for prompts using chat format.
* Context length: 4K tokens
* GPUs: 512 H100-80G
* Training time: 42 days
* Training data: 4.8T tokens
* Outputs: Generated text in response to the input
* Dates: Our models were trained between February and April 2024
* Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models.
* Release dates: The model weight is released on May 21, 2024.

### Datasets

Our training data includes a wide variety of sources, totaling 4.8 trillion tokens (including 10% multilingual), and is a combination of

1) publicly available documents filtered rigorously for quality, selected high-quality educational data, and code;
2) newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.);
3) high quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruction-following, truthfulness, honesty and helpfulness.

We are focusing on the quality of data that could potentially improve the reasoning ability of the model, and we filter the publicly available documents to contain the correct level of knowledge. As an example, the result of a game in the Premier League on a particular day might be good training data for frontier models, but we need to remove such information to leave more model capacity for reasoning for smaller models. More details about data can be found in the [Phi-3 Technical Report](https://aka.ms/phi3-tech-report).
## Benchmarks

We report the results for Phi-3-Medium-4K-Instruct on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Mixtral-8x22b, Gemini-Pro, Command R+ 104B, Llama-3-70B-Instruct, GPT-3.5-Turbo-1106, and GPT-4-Turbo-1106 (Chat).

All the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation.

As is now standard, we use few-shot prompts to evaluate the models, at temperature 0. The prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3. More specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model. The number of k-shot examples is listed per benchmark.

|Benchmark|Phi-3-Medium-4K-Instruct<br>14b|Command R+<br>104B|Mixtral<br>8x22B|Llama-3-70B-Instruct|GPT3.5-Turbo<br>version 1106|Gemini<br>Pro|GPT-4-Turbo<br>version 1106 (Chat)|
|---------|-----------------------|--------|-------------|-------------------|-------------------|----------|------------------------|
|AGI Eval<br>5-shot|50.2|50.1|54.0|56.9|48.4|49.0|59.6|
|MMLU<br>5-shot|78.0|73.8|76.2|80.2|71.4|66.7|84.0|
|BigBench Hard<br>3-shot|81.4|74.1|81.8|80.4|68.3|75.6|87.7|
|ANLI<br>7-shot|55.8|63.4|65.2|68.3|58.1|64.2|71.7|
|HellaSwag<br>5-shot|82.4|78.0|79.0|82.6|78.8|76.2|88.3|
|ARC Challenge<br>10-shot|91.6|86.9|91.3|93.0|87.4|88.3|95.6|
|ARC Easy<br>10-shot|97.7|95.7|96.9|98.2|96.3|96.1|98.8|
|BoolQ<br>2-shot|86.5|86.1|82.7|89.1|79.1|86.4|91.3|
|CommonsenseQA<br>10-shot|82.8|82.0|82.0|84.4|79.6|81.8|86.7|
|MedQA<br>2-shot|69.9|59.2|67.9|78.5|63.4|58.2|83.7|
|OpenBookQA<br>10-shot|87.4|86.8|88.6|91.8|86.0|86.4|93.4|
|PIQA<br>5-shot|87.9|86.4|85.0|85.3|86.6|86.2|90.1|
|Social IQA<br>5-shot|80.2|75.3|78.2|81.1|68.3|75.4|81.7|
|TruthfulQA (MC2)<br>10-shot|75.1|57.8|67.4|81.9|67.7|72.6|85.2|
|WinoGrande<br>5-shot|81.5|77.0|75.3|83.3|68.8|72.2|86.7|
|TriviaQA<br>5-shot|73.9|82.8|84.5|78.5|85.8|80.2|73.3|
|GSM8K Chain of Thought<br>8-shot|91.0|78.3|83.8|93.5|78.1|80.4|94.2|
|HumanEval<br>0-shot|62.2|61.6|39.6|78.7|62.2|64.4|79.9|
|MBPP<br>3-shot|75.2|68.9|70.7|81.3|77.8|73.2|86.7|
|Average|78.5|75.0|76.3|82.5|74.3|75.4|85.2|

We take a closer look at different categories across 80 public benchmark datasets in the table below:

|Benchmark|Phi-3-Medium-4K-Instruct<br>14b|Command R+<br>104B|Mixtral<br>8x22B|Llama-3-70B-Instruct|GPT3.5-Turbo<br>version 1106|Gemini<br>Pro|GPT-4-Turbo<br>version 1106 (Chat)|
|--------|------------------------|--------|-------------|-------------------|-------------------|----------|------------------------|
|Popular aggregated benchmark|75.4|69.9|73.4|76.3|67.0|67.5|80.5|
|Reasoning|84.1|79.3|81.5|86.7|78.3|80.4|89.3|
|Language understanding|73.9|75.6|78.1|76.9|68.7|76.2|80.7|
|Code generation|66.1|68.6|60.0|69.3|70.4|66.7|76.1|
|Math|52.8|45.3|52.5|59.7|52.8|50.9|67.1|
|Factual knowledge|48.3|60.3|60.6|52.4|63.4|54.6|45.9|
|Multilingual|62.9|67.8|69.8|62.0|67.0|73.4|78.2|
|Robustness|66.5|57.9|65.5|78.7|69.3|69.7|84.6|

## Software

* [PyTorch](https://github.com/pytorch/pytorch)
* [DeepSpeed](https://github.com/microsoft/DeepSpeed)
* [Transformers](https://github.com/huggingface/transformers)
* [Flash-Attention](https://github.com/HazyResearch/flash-attention)

## Hardware
Note that by default, the Phi-3-Medium model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:

* NVIDIA A100
* NVIDIA A6000
* NVIDIA H100

If you want to run the model on:

+ Optimized inference on GPU, CPU, and Mobile: use the **ONNX** models [4K](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct-onnx-cuda)

## Cross Platform Support

The ONNX Runtime ecosystem now supports Phi-3 Medium models across platforms and hardware. Optimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets.

DirectML GPU acceleration is supported for Windows desktop GPUs (AMD, Intel, and NVIDIA). Along with DML, ONNX Runtime provides cross-platform support for Phi-3 Medium across a range of devices: CPU, GPU, and mobile. Here are some of the optimized configurations we have added:

1. ONNX models for int4 DML: quantized to int4 via AWQ
2. ONNX model for fp16 CUDA
3. ONNX model for int4 CUDA: quantized to int4 via RTN
4. ONNX model for int4 CPU and Mobile: quantized to int4 via RTN

## License

The model is licensed under the [MIT license](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct/resolve/main/LICENSE).

## Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties’ policies.
tokyotech-llm/Swallow-7b-instruct-hf
tokyotech-llm
"2024-08-17T09:11:41Z"
61,889
40
transformers
[ "transformers", "safetensors", "llama", "text-generation", "en", "ja", "arxiv:2404.17790", "arxiv:2404.17733", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2023-12-07T02:18:36Z"
---
language:
- en
- ja
library_name: transformers
pipeline_tag: text-generation
license: llama2
model_type: llama
---

# Swallow

Our Swallow model has undergone continual pre-training from the [Llama 2 family](https://huggingface.co/meta-llama), primarily with the addition of Japanese language data. The tuned versions use supervised fine-tuning (SFT). Links to other models can be found in the index.

# Model Release Updates

We are excited to share the release schedule for our latest models:

- **April 26, 2024**: Released version 0.1 of our enhanced instruction-tuned models: [Swallow-7b-instruct-v0.1](https://huggingface.co/tokyotech-llm/Swallow-7b-instruct-v0.1), [Swallow-13b-instruct-v0.1](https://huggingface.co/tokyotech-llm/Swallow-13b-instruct-v0.1), and [Swallow-70b-instruct-v0.1](https://huggingface.co/tokyotech-llm/Swallow-70b-instruct-v0.1) as preview versions.
- **March 2, 2024**: Released the [Swallow-7b-plus-hf](https://huggingface.co/tokyotech-llm/Swallow-7b-plus-hf), a model trained with approximately twice as many Japanese tokens as [Swallow-7b-hf](https://huggingface.co/tokyotech-llm/Swallow-7b-hf).
- **February 4, 2024**: Released the [Swallow-13b-NVE-hf](https://huggingface.co/tokyotech-llm/Swallow-13b-NVE-hf).
- **January 26, 2024**: Released the [Swallow-7b-NVE-hf](https://huggingface.co/tokyotech-llm/Swallow-7b-NVE-hf), [Swallow-7b-NVE-instruct-hf](https://huggingface.co/tokyotech-llm/Swallow-7b-NVE-instruct-hf), [Swallow-70b-NVE-hf](https://huggingface.co/tokyotech-llm/Swallow-70b-NVE-hf), and [Swallow-70b-NVE-instruct-hf](https://huggingface.co/tokyotech-llm/Swallow-70b-NVE-instruct-hf).
- **December 19, 2023**: Released the [Swallow-7b-hf](https://huggingface.co/tokyotech-llm/Swallow-7b-hf), [Swallow-7b-instruct-hf](https://huggingface.co/tokyotech-llm/Swallow-7b-instruct-hf), [Swallow-13b-hf](https://huggingface.co/tokyotech-llm/Swallow-13b-hf), [Swallow-13b-instruct-hf](https://huggingface.co/tokyotech-llm/Swallow-13b-instruct-hf), [Swallow-70b-hf](https://huggingface.co/tokyotech-llm/Swallow-70b-hf), and [Swallow-70b-instruct-hf](https://huggingface.co/tokyotech-llm/Swallow-70b-instruct-hf).
## Swallow Model Index
|Model|Swallow-hf|Swallow-instruct-hf|Swallow-instruct-v0.1|
|---|---|---|---|
|7B| [Link](https://huggingface.co/tokyotech-llm/Swallow-7b-hf) | [Link](https://huggingface.co/tokyotech-llm/Swallow-7b-instruct-hf)|[Link](https://huggingface.co/tokyotech-llm/Swallow-7b-instruct-v1.0)|
|7B-Plus| [Link](https://huggingface.co/tokyotech-llm/Swallow-7b-plus-hf) | N/A | N/A |
|13B| [Link](https://huggingface.co/tokyotech-llm/Swallow-13b-hf) | [Link](https://huggingface.co/tokyotech-llm/Swallow-13b-instruct-hf)| [Link](https://huggingface.co/tokyotech-llm/Swallow-13b-instruct-v1.0)|
|70B| [Link](https://huggingface.co/tokyotech-llm/Swallow-70b-hf) | [Link](https://huggingface.co/tokyotech-llm/Swallow-70b-instruct-hf)| [Link](https://huggingface.co/tokyotech-llm/Swallow-70b-instruct-v1.0)|

## Swallow Model Index NVE (No Vocabulary Expansion)
|Model|Swallow-NVE-hf|Swallow-NVE-instruct-hf|
|---|---|---|
|7B| [Link](https://huggingface.co/tokyotech-llm/Swallow-7b-NVE-hf) | [Link](https://huggingface.co/tokyotech-llm/Swallow-7b-NVE-instruct-hf)|
|13B| [Link](https://huggingface.co/tokyotech-llm/Swallow-13b-NVE-hf) | N/A |
|70B| [Link](https://huggingface.co/tokyotech-llm/Swallow-70b-NVE-hf) | [Link](https://huggingface.co/tokyotech-llm/Swallow-70b-NVE-instruct-hf)|

![logo](./logo.png)

This repository provides large language models developed by [TokyoTech-LLM](https://tokyotech-llm.github.io/). Read our [blog post](https://zenn.dev/tokyotech_lm/articles/d6cb3a8fdfc907) or our [paper](https://arxiv.org/abs/2404.17790).

## Model Details

* **Model type**: Please refer to the LLaMA-2 technical report for details on the model architecture.
* **Language(s)**: Japanese, English
* **Library**: [Megatron-LM](https://github.com/rioyokotalab/Megatron-Llama2)
* **Tokenizer**: This model employs a tokenizer with a vocabulary broadened on Japanese data. This allows for a more efficient representation of text using fewer tokens, leading to a notably faster inference process (see the token-count sketch at the end of this card).
* **Contact**: swallow[at]nlp.c.titech.ac.jp

## Base Model Performance

### Japanese tasks

|Model|Size|JCommonsenseQA|JEMHopQA|NIILC|JSQuAD|XL-Sum|MGSM|WMT20-en-ja|WMT20-ja-en|
|---|---|---|---|---|---|---|---|---|---|
| | |4-shot|4-shot|4-shot|4-shot|1-shot|4-shot|4-shot|4-shot|
| Llama 2 | 7B | 0.3852 | 0.4240 | 0.3410 | 0.7917 | 0.1905 | 0.0760 | 0.1783 | 0.1738 |
| Swallow | 7B | 0.4808 | 0.5078 | 0.5968 | 0.8573 | 0.1830 | 0.1240 | 0.2510 | 0.1511 |
| Swallow-Plus | 7B | 0.5478 | 0.5493 | 0.6030 | 0.8544 | 0.1806 | 0.1360 | 0.2568 | 0.1441 |
| Swallow-NVE | 7B | 0.5433 | 0.5425 | 0.5729 | 0.8684 | 0.2117 | 0.1200 | 0.2405 | 0.1512 |
| Llama 2 | 13B | 0.6997 | 0.4415 | 0.4170 | 0.8533 | 0.2139 | 0.1320 | 0.2146 | 0.1982 |
| Swallow | 13B | 0.7837 | 0.5063 | 0.6398 | 0.9005 | 0.2168 | 0.2040 | 0.2720 | 0.1771 |
| Swallow-NVE | 13B | 0.7712 | 0.5438 | 0.6351 | 0.9030 | 0.2294 | 0.2120 | 0.2735 | 0.1817 |
| Llama 2 | 70B | 0.8686 | 0.4656 | 0.5256 | 0.9080 | 0.2361 | 0.3560 | 0.2643 | **0.2398** |
| Swallow | 70B | 0.9348 | **0.6290** | 0.6960 | 0.9176 | 0.2266 | **0.4840** | **0.3043** | 0.2298 |
| Swallow-NVE | 70B | **0.9410** | 0.5759 | **0.7024** | **0.9254** | **0.2758** | 0.4720 | 0.3042 | 0.2322 |

### English tasks

|Model|Size|OpenBookQA|TriviaQA|HellaSwag|SQuAD2.0|XWINO|GSM8K|
|---|---|---|---|---|---|---|---|
| | |8-shot|8-shot|8-shot|8-shot|8-shot|8-shot|
| Llama 2 | 7B | 0.3580 | 0.6265 | 0.5860 | 0.3207 | 0.9049 | 0.1410 |
| Swallow | 7B | 0.3180 | 0.4836 | 0.5308 | 0.3125 | 0.8817 | 0.1130 |
| Swallow-Plus | 7B | 0.3280 | 0.4558 | 0.5259 | 0.3134 | 0.8929 | 0.1061 |
| Swallow-NVE | 7B | 0.3180 | 0.5079 | 0.5329 | 0.2919 | 0.8817 | 0.0986 |
| Llama 2 | 13B | 0.3760 | 0.7255 | 0.6148 | 0.3681 | 0.9140 | 0.2403 |
| Swallow | 13B | 0.3500 | 0.5852 | 0.5660 | 0.3406 | 0.9075 | 0.2039 |
| Swallow-NVE | 13B | 0.3460 | 0.6025 | 0.5700 | 0.3478 | 0.9006 | 0.1751 |
| Llama 2 | 70B | **0.4280** | **0.8239** | **0.6742** | **0.3770** | **0.9290** | **0.5284** |
| Swallow | 70B | 0.4220 | 0.7756 | 0.6458 | 0.3745 | 0.9204 | 0.4867 |
| Swallow-NVE | 70B | 0.4240 | 0.7817 | 0.6439 | 0.3451 | 0.9256 | 0.4943 |

## Evaluation Benchmarks

### Japanese evaluation benchmarks

We used llm-jp-eval (v1.0.0) and the JP Language Model Evaluation Harness (commit #9b42d41). The details are as follows:

- Multiple-choice question answering (JCommonsenseQA [Kurihara+, 2022])
- Open-ended question answering (JEMHopQA [Ishii+, 2023])
- Open-ended question answering (NIILC [Sekine, 2003])
- Machine reading comprehension (JSQuAD [Kurihara+, 2022])
- Automatic summarization (XL-Sum [Hasan+, 2021])
- Machine translation (WMT2020 ja-en [Barrault+, 2020])
- Machine translation (WMT2020 en-ja [Barrault+, 2020])
- Mathematical reasoning (MGSM [Shi+, 2023])

### English evaluation benchmarks

We used the Language Model Evaluation Harness (v0.3.0).
The details are as follows: - Multiple-choice question answering (OpenBookQA [Mihaylov+, 2018]) - Open-ended question answering (TriviaQA [Joshi+, 2017]) - Machine reading comprehension (SQuAD 2.0 [Rajpurkar+, 2018]) - Commonsense reasoning (XWINO [Tikhonov & Ryabinin, 2021]) - Natural language inference (HellaSwag [Zellers+, 2019]) - Mathematical reasoning (GSM8k [Cobbe+, 2021]) ## Usage First install additional dependencies in [requirements.txt](./requirements.txt): ```sh pip install -r requirements.txt ``` ### Use the instruct model ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM model_name = "tokyotech-llm/Swallow-7b-instruct-hf" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, low_cpu_mem_usage=True, device_map="auto") PROMPT_DICT = { "prompt_input": ( "以下に、あるタスクを説明する指示があり、それに付随する入力が更なる文脈を提供しています。" "リクエストを適切に完了するための回答を記述してください。\n\n" "### 指示:\n{instruction}\n\n### 入力:\n{input}\n\n### 応答:" ), "prompt_no_input": ( "以下に、あるタスクを説明する指示があります。" "リクエストを適切に完了するための回答を記述してください。\n\n" "### 指示:\n{instruction}\n\n### 応答:" ), } def create_prompt(instruction, input=None): """ Generates a prompt based on the given instruction and an optional input. If input is provided, it uses the 'prompt_input' template from PROMPT_DICT. If no input is provided, it uses the 'prompt_no_input' template. Args: instruction (str): The instruction describing the task. input (str, optional): Additional input providing context for the task. Default is None. Returns: str: The generated prompt. """ if input: # Use the 'prompt_input' template when additional input is provided return PROMPT_DICT["prompt_input"].format(instruction=instruction, input=input) else: # Use the 'prompt_no_input' template when no additional input is provided return PROMPT_DICT["prompt_no_input"].format(instruction=instruction) # Example usage instruction_example = "以下のトピックに関する詳細な情報を提供してください。" input_example = "東京工業大学の主なキャンパスについて教えてください" prompt = create_prompt(instruction_example, input_example) input_ids = tokenizer.encode( prompt, add_special_tokens=False, return_tensors="pt" ) tokens = model.generate( input_ids.to(device=model.device), max_new_tokens=128, temperature=0.99, top_p=0.95, do_sample=True, ) out = tokenizer.decode(tokens[0], skip_special_tokens=True) print(out) ``` ### Use the base model ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM model_name = "tokyotech-llm/Swallow-7b-hf" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto") prompt = "東京工業大学の主なキャンパスは、" input_ids = tokenizer.encode( prompt, add_special_tokens=False, return_tensors="pt" ) tokens = model.generate( input_ids.to(device=model.device), max_new_tokens=128, temperature=0.99, top_p=0.95, do_sample=True, ) out = tokenizer.decode(tokens[0], skip_special_tokens=True) print(out) ``` ## Training Datasets ### Continual Pre-Training The following datasets were used for continual pre-training. - [Japanese Wikipedia](https://dumps.wikimedia.org/other/cirrussearch) - [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) - [Swallow Corpus](https://arxiv.org/abs/2404.17733) - [The Pile](https://huggingface.co/datasets/EleutherAI/pile) ### Instruction Tuning The following datasets were used for the instruction tuning. 
- [Anthropic HH-RLHF](https://huggingface.co/datasets/kunishou/hh-rlhf-49k-ja) - [Databricks Dolly 15-k](https://huggingface.co/datasets/kunishou/databricks-dolly-15k-ja) - [OpenAssistant Conversations Dataset](https://huggingface.co/datasets/kunishou/oasst1-89k-ja) ## Risks and Limitations The models released here are still in the early stages of our research and development and have not been tuned to ensure outputs align with human intent and safety considerations. ## Acknowledgements We thank Meta Research for releasing Llama 2 under an open license for others to build on. Our project is supported by the [ABCI Large-scale Language Model Building Support Program](https://abci.ai/en/link/llm_support_program.html) of the National Institute of Advanced Industrial Science and Technology. ## License Llama 2 is licensed under the LLAMA 2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved. ## Authors Here are the team members: - From [Okazaki Laboratory](https://www.nlp.c.titech.ac.jp/index.en.html), the following members: - [Naoaki Okazaki](https://www.chokkan.org/index.ja.html) - [Sakae Mizuki](https://s-mizuki-nlp.github.io/) - [Hiroki Iida](https://meshidenn.github.io/) - [Mengsay Loem](https://loem-ms.github.io/) - [Shota Hirai](https://huggingface.co/Kotemo428) - [Kakeru Hattori](https://aya-se.vercel.app/) - [Masanari Ohi](https://twitter.com/stjohn2007) - From [YOKOTA Laboratory](https://www.rio.gsic.titech.ac.jp/en/index.html), the following members: - [Rio Yokota](https://twitter.com/rioyokota) - [Kazuki Fujii](https://twitter.com/okoge_kaz) - [Taishi Nakamura](https://twitter.com/Setuna7777_2) ## How to cite If you find our work helpful, please feel free to cite us. ``` @inproceedings{Fujii:COLM2024, title={Continual Pre-Training for Cross-Lingual LLM Adaptation: Enhancing Japanese Language Capabilities}, author={Kazuki Fujii and Taishi Nakamura and Mengsay Loem and Hiroki Iida and Masanari Ohi and Kakeru Hattori and Hirai Shota and Sakae Mizuki and Rio Yokota and Naoaki Okazaki}, booktitle="Proceedings of the First Conference on Language Modeling", series={COLM}, pages="(to appear)", year="2024", month=oct, address={University of Pennsylvania, USA}, } @inproceedings{Okazaki:COLM2024, title={Building a Large Japanese Web Corpus for Large Language Models}, author={Naoaki Okazaki and Kakeru Hattori and Hirai Shota and Hiroki Iida and Masanari Ohi and Kazuki Fujii and Taishi Nakamura and Mengsay Loem and Rio Yokota and Sakae Mizuki}, booktitle="Proceedings of the First Conference on Language Modeling", series={COLM}, pages="(to appear)", year="2024", month=oct, address={University of Pennsylvania, USA}, } ```
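As a small illustration of the vocabulary expansion described under Model Details above, the sketch below counts the tokens needed for the same Japanese sentence under the Swallow tokenizer and the original Llama 2 tokenizer. The exact counts are illustrative, and access to the gated `meta-llama/Llama-2-7b-hf` tokenizer is assumed.

```python
from transformers import AutoTokenizer

text = "東京工業大学の主なキャンパスについて教えてください。"

# Swallow's expanded tokenizer vs. the original Llama 2 tokenizer.
# (Llama 2 is gated on the Hub; this assumes you have been granted access.)
swallow_tok = AutoTokenizer.from_pretrained("tokyotech-llm/Swallow-7b-instruct-hf")
llama_tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

print("Swallow tokens:", len(swallow_tok.encode(text, add_special_tokens=False)))
print("Llama 2 tokens:", len(llama_tok.encode(text, add_special_tokens=False)))
# The Swallow count should be noticeably smaller, which is what makes
# Japanese inference faster per character of text.
```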
timm/resnet50.ram_in1k
timm
"2024-02-10T23:39:28Z"
61,888
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "arxiv:1512.03385", "license:apache-2.0", "region:us" ]
image-classification
"2023-04-05T18:14:08Z"
---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
---
# Model card for resnet50.ram_in1k

A ResNet-B image classification model.

This model features:
* ReLU activations
* single layer 7x7 convolution with pooling
* 1x1 convolution shortcut downsample

Trained on ImageNet-1k in `timm` using the recipe template described below.

Recipe details:
* AugMix (with RandAugment) recipe
* SGD (w/ Nesterov) optimizer and JSD (Jensen–Shannon divergence) loss
* Cosine LR schedule with warmup

## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 25.6
  - GMACs: 4.1
  - Activations (M): 11.1
  - Image size: train = 224 x 224, test = 288 x 288
- **Papers:**
  - Deep Residual Learning for Image Recognition: https://arxiv.org/abs/1512.03385
- **Original:** https://github.com/huggingface/pytorch-image-models

## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('resnet50.ram_in1k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```

### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'resnet50.ram_in1k',
    pretrained=True,
    features_only=True,
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

for o in output:
    # print shape of each feature map in output
    # e.g.:
    #  torch.Size([1, 64, 112, 112])
    #  torch.Size([1, 256, 56, 56])
    #  torch.Size([1, 512, 28, 28])
    #  torch.Size([1, 1024, 14, 14])
    #  torch.Size([1, 2048, 7, 7])
    print(o.shape)
```

### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'resnet50.ram_in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2048, 7, 7) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```

## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
|model |img_size|top1 |top5 |param_count|gmacs|macts|img/sec| |------------------------------------------|--------|-----|-----|-----------|-----|-----|-------| |[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|320 |86.72|98.17|93.6 |35.2 |69.7 |451 | |[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|288 |86.51|98.08|93.6 |28.5 |56.4 |560 | |[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|288 |86.49|98.03|93.6 |28.5 |56.4 |557 | |[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|224 |85.96|97.82|93.6 |17.2 |34.2 |923 | |[resnext101_32x32d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x32d.fb_wsl_ig1b_ft_in1k)|224 |85.11|97.44|468.5 |87.3 |91.1 |254 | |[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|416 |85.0 |97.12|191.9 |108.4|213.8|134 | |[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|352 |84.96|97.22|102.1 |50.2 |101.2|291 | |[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|320 |84.73|97.18|102.1 |41.5 |83.7 |353 | |[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|384 |84.71|96.99|164.0 |77.6 |154.7|183 | |[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|288 |84.57|97.08|93.6 |28.5 |56.4 |557 | |[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|320 |84.45|97.08|93.2 |31.5 |67.8 |446 | |[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|352 |84.43|96.97|129.9 |51.1 |105.5|280 | |[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|288 |84.36|96.92|93.6 |27.6 |53.0 |595 | |[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|320 |84.35|97.04|66.8 |24.1 |47.7 |610 | |[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|288 |84.3 |96.94|164.0 |43.7 |87.1 |333 | |[resnext101_32x8d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_swsl_ig1b_ft_in1k)|224 |84.28|97.17|88.8 |16.5 |31.2 |1100 | |[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|320 |84.24|96.86|191.9 |64.2 |126.6|228 | |[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|288 |84.19|96.87|93.6 |27.2 |51.6 |613 | |[resnext101_32x16d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_wsl_ig1b_ft_in1k)|224 |84.18|97.19|194.0 |36.3 |51.2 |581 | |[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|288 |84.11|97.11|44.6 |15.1 |29.0 |1144 | |[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|320 |83.97|96.82|64.7 |31.2 |67.3 |518 | |[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|256 |83.87|96.75|93.2 |20.2 |43.4 |692 | |[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|224 |83.86|96.65|93.6 |17.2 |34.2 |923 | |[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|320 |83.72|96.61|86.6 |24.3 |48.1 |617 | |[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|256 |83.69|96.78|66.8 |15.4 |30.6 |943 | |[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|224 |83.68|96.61|93.6 |16.7 |32.0 |986 | 
|[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|320 |83.67|96.74|60.2 |24.1 |47.7 |706 | |[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|256 |83.59|96.61|129.9 |27.1 |55.8 |526 | |[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|224 |83.58|96.4 |93.6 |16.5 |31.2 |1013 | |[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|224 |83.54|96.83|44.6 |9.1 |17.6 |1864 | |[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|288 |83.46|96.54|60.2 |19.1 |37.3 |904 | |[resnext101_32x16d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_swsl_ig1b_ft_in1k)|224 |83.35|96.85|194.0 |36.3 |51.2 |582 | |[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|256 |83.23|96.53|64.7 |20.0 |43.1 |809 | |[resnext101_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_swsl_ig1b_ft_in1k)|224 |83.22|96.75|44.2 |8.0 |21.2 |1814 | |[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|288 |83.16|96.38|83.5 |25.7 |51.6 |590 | |[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|256 |83.14|96.38|60.2 |15.4 |30.5 |1096 | |[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|320 |83.02|96.45|44.6 |16.5 |34.8 |992 | |[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|288 |82.98|96.54|44.6 |13.4 |28.2 |1077 | |[resnext101_64x4d.tv_in1k](https://huggingface.co/timm/resnext101_64x4d.tv_in1k)|224 |82.98|96.25|83.5 |15.5 |31.2 |989 | |[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|256 |82.86|96.28|86.6 |15.6 |30.8 |951 | |[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|224 |82.83|96.22|88.8 |16.5 |31.2 |1099 | |[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|224 |82.8 |96.13|60.2 |11.6 |22.6 |1486 | |[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|288 |82.8 |96.32|44.6 |13.0 |26.8 |1291 | |[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|288 |82.74|95.71|60.2 |19.1 |37.3 |905 | |[resnext101_32x8d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_wsl_ig1b_ft_in1k)|224 |82.69|96.63|88.8 |16.5 |31.2 |1100 | |[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|288 |82.62|95.75|60.2 |19.1 |37.3 |904 | |[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|288 |82.61|96.49|25.6 |8.9 |20.6 |1729 | |[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|288 |82.53|96.13|36.8 |9.9 |21.5 |1773 | |[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|224 |82.5 |96.02|126.9 |22.8 |21.2 |1078 | |[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|224 |82.46|95.92|83.5 |15.5 |31.2 |987 | |[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|288 |82.36|96.18|35.7 |8.1 |20.9 |1964 | |[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|320 |82.35|96.14|25.6 |8.8 |24.1 |1386 | |[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|288 |82.31|95.63|44.6 |13.0 |26.8 |1291 | |[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|288 |82.29|96.01|63.6 |13.6 |28.5 |1078 | |[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|224 |82.29|96.0 |60.2 |11.6 |22.6 |1484 | 
|[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|288 |82.27|96.06|68.9 |18.9 |23.8 |1176 | |[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|256 |82.26|96.07|44.6 |10.6 |22.2 |1542 | |[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|288 |82.24|95.73|44.6 |13.0 |26.8 |1290 | |[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|288 |82.2 |96.14|27.6 |7.0 |23.8 |1547 | |[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|224 |82.18|96.05|44.6 |8.1 |17.1 |1771 | |[resnext50_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_swsl_ig1b_ft_in1k)|224 |82.17|96.22|25.0 |4.3 |14.4 |2943 | |[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|288 |82.12|95.65|25.6 |7.1 |19.6 |1704 | |[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|288 |82.03|95.94|25.0 |7.0 |23.8 |1745 | |[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|288 |82.0 |96.15|24.9 |5.8 |12.7 |1787 | |[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|256 |81.99|95.85|36.8 |7.8 |17.0 |2230 | |[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|176 |81.98|95.72|88.8 |10.3 |19.4 |1768 | |[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|224 |81.97|95.24|60.2 |11.6 |22.6 |1486 | |[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|224 |81.93|95.75|44.6 |7.8 |16.2 |2122 | |[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|224 |81.9 |95.77|44.6 |7.8 |16.2 |2118 | |[resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k)|224 |81.84|96.1 |194.0 |36.3 |51.2 |583 | |[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|256 |81.78|95.94|35.7 |6.4 |16.6 |2471 | |[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|224 |81.77|95.22|60.2 |11.6 |22.6 |1485 | |[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|224 |81.74|96.06|25.6 |5.4 |12.4 |2813 | |[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|288 |81.65|95.54|25.6 |7.1 |19.6 |1703 | |[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|288 |81.64|95.88|25.6 |7.2 |19.7 |1694 | |[resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k)|224 |81.62|96.04|88.8 |16.5 |31.2 |1101 | |[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|224 |81.61|95.76|68.9 |11.4 |14.4 |1930 | |[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|288 |81.61|95.83|25.6 |8.5 |19.2 |1868 | |[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|224 |81.5 |95.16|44.6 |7.8 |16.2 |2125 | |[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|288 |81.48|95.16|25.0 |7.0 |23.8 |1745 | |[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|288 |81.47|95.71|25.9 |6.9 |18.6 |2071 | |[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|224 |81.45|95.53|68.9 |11.4 |14.4 |1929 | |[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|288 |81.44|95.22|25.6 |7.2 |19.7 |1908 | |[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|256 |81.44|95.67|25.6 |5.6 |15.4 |2168 | 
|[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|288 |81.4 |95.82|30.2 |6.8 |13.9 |2132 |
|[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|288 |81.37|95.74|25.6 |7.2 |19.7 |1910 |
|[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|224 |81.32|95.19|44.6 |7.8 |16.2 |2125 |
|[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|288 |81.3 |95.65|28.1 |6.8 |18.4 |1803 |
|[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|288 |81.3 |95.11|25.0 |7.0 |23.8 |1746 |
|[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|224 |81.27|95.62|27.6 |4.3 |14.4 |2591 |
|[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|224 |81.26|95.16|25.6 |4.3 |11.8 |2823 |
|[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|288 |81.23|95.54|15.7 |4.8 |19.6 |2117 |
|[senet154.gluon_in1k](https://huggingface.co/timm/senet154.gluon_in1k)|224 |81.23|95.35|115.1 |20.8 |38.7 |545 |
|[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|288 |81.22|95.11|25.6 |6.8 |18.4 |2089 |
|[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|288 |81.22|95.63|25.6 |6.8 |18.4 |676 |
|[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|288 |81.18|95.09|25.6 |7.2 |19.7 |1908 |
|[resnet50.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet50.fb_swsl_ig1b_ft_in1k)|224 |81.18|95.98|25.6 |4.1 |11.1 |3455 |
|[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|224 |81.17|95.34|25.0 |4.3 |14.4 |2933 |
|[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|224 |81.1 |95.33|25.0 |4.3 |14.4 |2934 |
|[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|288 |81.1 |95.23|28.1 |6.8 |18.4 |1801 |
|[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|288 |81.1 |95.12|28.1 |6.8 |18.4 |1799 |
|[resnet152s.gluon_in1k](https://huggingface.co/timm/resnet152s.gluon_in1k)|224 |81.02|95.41|60.3 |12.9 |25.0 |1347 |
|[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|288 |80.97|95.44|25.6 |6.8 |18.4 |2085 |
|[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|256 |80.94|95.45|25.9 |5.4 |14.7 |2571 |
|[resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.93|95.73|44.2 |8.0 |21.2 |1814 |
|[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|288 |80.91|95.55|25.6 |6.8 |18.4 |2084 |
|[seresnext101_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_32x4d.gluon_in1k)|224 |80.9 |95.31|49.0 |8.0 |21.3 |1585 |
|[seresnext101_64x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_64x4d.gluon_in1k)|224 |80.9 |95.3 |88.2 |15.5 |31.2 |918 |
|[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|288 |80.86|95.52|25.6 |6.8 |18.4 |2085 |
|[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|224 |80.85|95.43|25.6 |4.1 |11.1 |3450 |
|[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|224 |80.84|95.02|25.6 |4.3 |11.8 |2821 |
|[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|224 |80.79|95.62|24.9 |3.5 |7.7 |2961 |
|[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|288 |80.79|95.36|19.8 |6.0 |14.8 |2506 |
|[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|288 |80.79|95.58|19.9 |4.2 |10.6 |2349 |
|[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|288 |80.78|94.99|25.6 |6.8 |18.4 |2088 |
|[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|288 |80.71|95.43|25.6 |6.8 |18.4 |2087 |
|[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|288 |80.7 |95.39|25.0 |7.0 |23.8 |1749 |
|[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|192 |80.69|95.24|63.6 |6.0 |12.7 |2270 |
|[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|224 |80.68|94.71|25.6 |4.4 |11.9 |3162 |
|[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|288 |80.68|95.36|19.7 |6.0 |14.8 |2637 |
|[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|224 |80.67|95.3 |25.6 |4.1 |11.1 |3452 |
|[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|288 |80.67|95.42|25.0 |7.4 |25.1 |1626 |
|[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|224 |80.63|95.21|25.6 |5.2 |11.6 |3034 |
|[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|224 |80.61|95.32|25.6 |4.4 |11.9 |2813 |
|[resnext101_64x4d.gluon_in1k](https://huggingface.co/timm/resnext101_64x4d.gluon_in1k)|224 |80.61|94.99|83.5 |15.5 |31.2 |989 |
|[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|288 |80.6 |95.31|19.9 |6.0 |14.8 |2578 |
|[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|256 |80.57|95.17|15.7 |3.8 |15.5 |2710 |
|[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|224 |80.56|95.0 |60.2 |11.6 |22.6 |1483 |
|[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|224 |80.53|95.16|25.6 |4.4 |11.9 |3164 |
|[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|224 |80.53|94.46|25.0 |4.3 |14.4 |2930 |
|[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|176 |80.48|94.98|126.9 |14.3 |13.2 |1719 |
|[resnet152d.gluon_in1k](https://huggingface.co/timm/resnet152d.gluon_in1k)|224 |80.47|95.2 |60.2 |11.8 |23.4 |1428 |
|[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|288 |80.45|95.32|25.6 |6.8 |18.4 |2086 |
|[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|224 |80.45|95.24|30.2 |4.1 |8.4 |3530 |
|[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|224 |80.45|94.63|25.0 |4.3 |14.4 |2936 |
|[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|176 |80.43|95.09|68.9 |7.3 |9.0 |3015 |
|[resnet101d.gluon_in1k](https://huggingface.co/timm/resnet101d.gluon_in1k)|224 |80.42|95.01|44.6 |8.1 |17.0 |2007 |
|[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|224 |80.38|94.6 |25.6 |4.1 |11.1 |3461 |
|[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|256 |80.36|95.1 |19.8 |4.8 |11.7 |3267 |
|[resnext101_32x4d.gluon_in1k](https://huggingface.co/timm/resnext101_32x4d.gluon_in1k)|224 |80.34|94.93|44.2 |8.0 |21.2 |1814 |
|[resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.32|95.4 |25.0 |4.3 |14.4 |2941 |
|[resnet101s.gluon_in1k](https://huggingface.co/timm/resnet101s.gluon_in1k)|224 |80.28|95.16|44.7 |9.2 |18.6 |1851 |
|[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|224 |80.26|95.08|28.1 |4.1 |11.1 |2972 |
|[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|288 |80.24|95.24|25.6 |8.5 |19.9 |1523 |
|[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|224 |80.22|94.63|25.6 |4.4 |11.9 |3162 | |[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|176 |80.2 |94.64|60.2 |7.2 |14.0 |2346 | |[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|224 |80.08|94.74|28.1 |4.1 |11.1 |2969 | |[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|256 |80.08|94.97|19.7 |4.8 |11.7 |3284 | |[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|256 |80.06|94.99|19.9 |4.8 |11.7 |3216 | |[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|224 |80.06|94.95|25.6 |4.1 |11.1 |1109 | |[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|224 |80.02|94.71|28.1 |4.1 |11.1 |2962 | |[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|288 |79.97|95.05|25.6 |6.8 |18.4 |2086 | |[resnet152c.gluon_in1k](https://huggingface.co/timm/resnet152c.gluon_in1k)|224 |79.92|94.84|60.2 |11.8 |23.4 |1455 | |[seresnext50_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext50_32x4d.gluon_in1k)|224 |79.91|94.82|27.6 |4.3 |14.4 |2591 | |[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|224 |79.91|94.67|25.6 |4.1 |11.1 |3456 | |[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|176 |79.9 |94.6 |44.6 |4.9 |10.1 |3341 | |[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|224 |79.89|94.97|35.7 |4.5 |12.1 |2774 | |[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|224 |79.88|94.87|25.6 |4.1 |11.1 |3455 | |[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|320 |79.86|95.07|16.0 |5.2 |16.4 |2168 | |[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|224 |79.85|94.56|25.6 |4.1 |11.1 |3460 | |[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|288 |79.83|94.97|25.6 |6.8 |18.4 |2087 | |[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|224 |79.82|94.62|44.6 |7.8 |16.2 |2114 | |[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|224 |79.76|94.6 |25.0 |4.3 |14.4 |2943 | |[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|224 |79.74|94.95|25.6 |4.1 |11.1 |3455 | |[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|224 |79.74|94.87|19.9 |2.5 |6.4 |3929 | |[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|288 |79.71|94.83|19.7 |6.0 |14.8 |2710 | |[resnet152.gluon_in1k](https://huggingface.co/timm/resnet152.gluon_in1k)|224 |79.68|94.74|60.2 |11.6 |22.6 |1486 | |[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|224 |79.67|94.87|25.0 |4.5 |15.2 |2729 | |[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|288 |79.63|94.91|25.6 |6.8 |18.4 |2086 | |[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|224 |79.56|94.72|25.6 |4.3 |11.8 |2805 | |[resnet101c.gluon_in1k](https://huggingface.co/timm/resnet101c.gluon_in1k)|224 |79.53|94.58|44.6 |8.1 |17.0 |2062 | |[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|224 |79.52|94.61|25.6 |4.1 |11.1 |3459 | |[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|176 |79.42|94.64|25.6 |2.6 |6.9 |5397 | |[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|288 |79.4 |94.66|18.0 |5.9 |14.6 |2752 | |[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|224 |79.38|94.57|25.6 |4.1 |11.1 |3459 | 
|[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|176 |79.37|94.3 |25.0 |2.7 |9.0 |4577 | |[resnext50_32x4d.gluon_in1k](https://huggingface.co/timm/resnext50_32x4d.gluon_in1k)|224 |79.36|94.43|25.0 |4.3 |14.4 |2942 | |[resnext101_32x8d.tv_in1k](https://huggingface.co/timm/resnext101_32x8d.tv_in1k)|224 |79.31|94.52|88.8 |16.5 |31.2 |1100 | |[resnet101.gluon_in1k](https://huggingface.co/timm/resnet101.gluon_in1k)|224 |79.31|94.53|44.6 |7.8 |16.2 |2125 | |[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|224 |79.31|94.63|25.6 |5.2 |12.0 |2524 | |[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|176 |79.27|94.49|25.6 |2.6 |6.9 |5404 | |[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|224 |79.25|94.31|25.0 |4.3 |14.4 |2931 | |[resnet50.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet50.fb_ssl_yfcc100m_ft_in1k)|224 |79.22|94.84|25.6 |4.1 |11.1 |3451 | |[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|256 |79.21|94.56|19.7 |4.8 |11.7 |3392 | |[resnet50d.gluon_in1k](https://huggingface.co/timm/resnet50d.gluon_in1k)|224 |79.07|94.48|25.6 |4.4 |11.9 |3162 | |[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|224 |79.03|94.38|25.6 |4.1 |11.1 |3453 | |[resnet50.am_in1k](https://huggingface.co/timm/resnet50.am_in1k)|224 |79.01|94.39|25.6 |4.1 |11.1 |3461 | |[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|256 |79.01|94.37|18.0 |4.6 |11.6 |3440 | |[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|256 |78.9 |94.54|16.0 |3.4 |10.5 |3421 | |[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|160 |78.89|94.11|60.2 |5.9 |11.5 |2745 | |[wide_resnet101_2.tv_in1k](https://huggingface.co/timm/wide_resnet101_2.tv_in1k)|224 |78.84|94.28|126.9 |22.8 |21.2 |1079 | |[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|288 |78.83|94.24|16.8 |4.5 |16.8 |2251 | |[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|224 |78.81|94.32|25.6 |4.1 |11.1 |3454 | |[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|288 |78.74|94.33|16.8 |4.5 |16.7 |2264 | |[resnet50s.gluon_in1k](https://huggingface.co/timm/resnet50s.gluon_in1k)|224 |78.72|94.23|25.7 |5.5 |13.5 |2796 | |[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|224 |78.71|94.24|25.6 |4.4 |11.9 |3154 | |[wide_resnet50_2.tv_in1k](https://huggingface.co/timm/wide_resnet50_2.tv_in1k)|224 |78.47|94.09|68.9 |11.4 |14.4 |1934 | |[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|224 |78.46|94.27|25.6 |4.1 |11.1 |3454 | |[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|288 |78.43|94.35|21.8 |6.5 |7.5 |3291 | |[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|288 |78.42|94.04|10.5 |3.1 |13.3 |3226 | |[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|320 |78.33|94.13|16.0 |5.2 |16.4 |2391 | |[resnet152.tv_in1k](https://huggingface.co/timm/resnet152.tv_in1k)|224 |78.32|94.04|60.2 |11.6 |22.6 |1487 | |[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|288 |78.28|94.1 |10.4 |3.1 |13.3 |3062 | |[bat_resnext26ts.ch_in1k](https://huggingface.co/timm/bat_resnext26ts.ch_in1k)|256 |78.25|94.1 |10.7 |2.5 |12.5 |3393 | |[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|224 |78.06|93.78|25.6 |4.1 |11.1 |3450 | 
|[resnet50c.gluon_in1k](https://huggingface.co/timm/resnet50c.gluon_in1k)|224 |78.0 |93.99|25.6 |4.4 |11.9 |3286 |
|[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|288 |78.0 |93.91|10.3 |3.1 |13.3 |3297 |
|[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|224 |77.98|93.75|16.8 |2.7 |10.1 |3841 |
|[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|288 |77.92|93.77|21.8 |6.1 |6.2 |3609 |
|[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|160 |77.88|93.71|44.6 |4.0 |8.3 |3926 |
|[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|256 |77.87|93.84|16.0 |3.4 |10.5 |3772 |
|[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|256 |77.86|93.79|10.4 |2.4 |10.5 |4263 |
|[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|160 |77.82|93.81|35.7 |2.3 |6.2 |5238 |
|[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|256 |77.81|93.82|10.5 |2.4 |10.5 |4183 |
|[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|160 |77.79|93.6 |25.6 |2.2 |6.0 |5329 |
|[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|160 |77.73|93.32|25.0 |2.2 |7.4 |5576 |
|[resnext50_32x4d.tv_in1k](https://huggingface.co/timm/resnext50_32x4d.tv_in1k)|224 |77.61|93.7 |25.0 |4.3 |14.4 |2944 |
|[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|224 |77.59|93.61|16.8 |2.7 |10.2 |3807 |
|[resnet50.gluon_in1k](https://huggingface.co/timm/resnet50.gluon_in1k)|224 |77.58|93.72|25.6 |4.1 |11.1 |3455 |
|[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|256 |77.44|93.56|10.3 |2.4 |10.5 |4284 |
|[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|288 |77.41|93.63|16.0 |4.3 |13.5 |2907 |
|[resnet101.tv_in1k](https://huggingface.co/timm/resnet101.tv_in1k)|224 |77.38|93.54|44.6 |7.8 |16.2 |2125 |
|[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|160 |77.22|93.27|25.6 |2.2 |6.1 |5982 |
|[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|288 |77.17|93.47|10.3 |3.1 |13.3 |3392 |
|[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|288 |77.15|93.27|21.8 |6.1 |6.2 |3615 |
|[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|224 |77.1 |93.37|21.8 |3.9 |4.5 |5436 |
|[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|224 |77.02|93.07|28.1 |4.1 |11.1 |2952 |
|[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|256 |76.78|93.13|10.3 |2.4 |10.5 |4410 |
|[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|224 |76.7 |93.17|16.0 |2.6 |8.2 |4859 |
|[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|288 |76.5 |93.35|21.8 |6.1 |6.2 |3617 |
|[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|224 |76.42|92.87|21.8 |3.7 |3.7 |5984 |
|[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|288 |76.35|93.18|16.0 |3.9 |12.2 |3331 |
|[resnet50.tv_in1k](https://huggingface.co/timm/resnet50.tv_in1k)|224 |76.13|92.86|25.6 |4.1 |11.1 |3457 |
|[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|160 |75.96|92.5 |25.6 |2.1 |5.7 |6490 |
|[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|224 |75.52|92.44|21.8 |3.7 |3.7 |5991 |
|[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|224 |75.3 |92.58|16.0 |2.4 |7.4 |5583 |
|[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|224 |75.16|92.18|21.8 |3.7 |3.7 |5994 |
|[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|160 |75.1 |92.08|28.1 |2.1 |5.7 |5513 |
|[resnet34.gluon_in1k](https://huggingface.co/timm/resnet34.gluon_in1k)|224 |74.57|91.98|21.8 |3.7 |3.7 |5984 |
|[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|288 |73.81|91.83|11.7 |3.4 |5.4 |5196 |
|[resnet34.tv_in1k](https://huggingface.co/timm/resnet34.tv_in1k)|224 |73.32|91.42|21.8 |3.7 |3.7 |5979 |
|[resnet18.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet18.fb_swsl_ig1b_ft_in1k)|224 |73.28|91.73|11.7 |1.8 |2.5 |10213 |
|[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|288 |73.16|91.03|11.7 |3.0 |4.1 |6050 |
|[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|224 |72.98|91.11|21.8 |3.7 |3.7 |5967 |
|[resnet18.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet18.fb_ssl_yfcc100m_ft_in1k)|224 |72.6 |91.42|11.7 |1.8 |2.5 |10213 |
|[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|288 |72.37|90.59|11.7 |3.0 |4.1 |6051 |
|[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|224 |72.26|90.31|10.1 |1.7 |5.8 |7026 |
|[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|224 |72.26|90.68|11.7 |2.1 |3.3 |8707 |
|[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|224 |71.49|90.07|11.7 |1.8 |2.5 |10187 |
|[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|176 |71.31|89.69|10.1 |1.1 |3.6 |10970 |
|[resnet18.gluon_in1k](https://huggingface.co/timm/resnet18.gluon_in1k)|224 |70.84|89.76|11.7 |1.8 |2.5 |10210 |
|[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|224 |70.64|89.47|11.7 |1.8 |2.5 |10194 |
|[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|160 |70.56|89.52|21.8 |1.9 |1.9 |10737 |
|[resnet18.tv_in1k](https://huggingface.co/timm/resnet18.tv_in1k)|224 |69.76|89.07|11.7 |1.8 |2.5 |10205 |
|[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|224 |68.34|88.03|5.4 |1.1 |2.4 |13079 |
|[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|224 |68.25|88.17|11.7 |1.8 |2.5 |10167 |
|[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|176 |66.71|86.96|5.4 |0.7 |1.5 |20327 |
|[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|160 |65.66|86.26|11.7 |0.9 |1.3 |18229 |

## Citation
```bibtex
@misc{rw2019timm,
  author = {Ross Wightman},
  title = {PyTorch Image Models},
  year = {2019},
  publisher = {GitHub},
  journal = {GitHub repository},
  doi = {10.5281/zenodo.4414861},
  howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@article{He2015,
  author = {Kaiming He and Xiangyu Zhang and Shaoqing Ren and Jian Sun},
  title = {Deep Residual Learning for Image Recognition},
  journal = {arXiv preprint arXiv:1512.03385},
  year = {2015}
}
```
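Returning to the Image Classification example near the top of this card: it computes `top5_probabilities` and `top5_class_indices` but never resolves them to class names. Below is a small follow-up sketch; the label-file URL is the commonly used PyTorch Hub copy of the ImageNet-1k class list, an assumption rather than something this card ships.

```python
from urllib.request import urlopen

# Map the top-5 indices from the classification example above to
# human-readable ImageNet-1k labels (assumes `top5_probabilities` and
# `top5_class_indices` from that snippet are still in scope).
labels = urlopen(
    'https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt'
).read().decode().splitlines()

for prob, idx in zip(top5_probabilities[0], top5_class_indices[0]):
    print(f'{labels[idx.item()]}: {prob.item():.2f}%')
```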
persiannlp/mt5-small-parsinlu-opus-translation_fa_en
persiannlp
"2021-09-23T16:20:36Z"
61,793
1
transformers
[ "transformers", "pytorch", "mt5", "text2text-generation", "machine-translation", "persian", "farsi", "fa", "multilingual", "dataset:parsinlu", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
"2022-03-02T23:29:05Z"
--- language: - fa - multilingual thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg tags: - machine-translation - mt5 - persian - farsi license: cc-by-nc-sa-4.0 datasets: - parsinlu metrics: - sacrebleu --- # Machine Translation (ترجمه‌ی ماشینی) This is an mT5-based model for machine translation (Persian -> English). Here is an example of how you can run this model: ```python from transformers import MT5ForConditionalGeneration, MT5Tokenizer model_size = "small" model_name = f"persiannlp/mt5-{model_size}-parsinlu-opus-translation_fa_en" tokenizer = MT5Tokenizer.from_pretrained(model_name) model = MT5ForConditionalGeneration.from_pretrained(model_name) def run_model(input_string, **generator_args): input_ids = tokenizer.encode(input_string, return_tensors="pt") res = model.generate(input_ids, **generator_args) output = tokenizer.batch_decode(res, skip_special_tokens=True) print(output) return output run_model("ستایش خدای را که پروردگار جهانیان است.") run_model("در هاید پارک کرنر بر گلدانی ایستاده موعظه می‌کند؛") run_model("وی از تمامی بلاگرها، سازمان‌ها و افرادی که از وی پشتیبانی کرده‌اند، تشکر کرد.") run_model("مشابه سال ۲۰۰۱، تولید آمونیاک بی آب در ایالات متحده در سال ۲۰۰۰ تقریباً ۱۷،۴۰۰،۰۰۰ تن (معادل بدون آب) با مصرف ظاهری ۲۲،۰۰۰،۰۰۰ تن و حدود ۴۶۰۰۰۰۰ با واردات خالص مواجه شد. ") run_model("می خواهم دکترای علوم کامپیوتر راجع به شبکه های اجتماعی را دنبال کنم، چالش حل نشده در شبکه های اجتماعی چیست؟") ``` For more details, visit this page: https://github.com/persiannlp/parsinlu/
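Because `run_model` forwards `**generator_args` to `model.generate`, any Hugging Face generation argument can be passed through. For example (illustrative values, not tuned for this model):

```python
# Beam search with a longer output budget; both arguments are standard
# transformers generate() parameters forwarded by run_model above.
run_model(
    "ستایش خدای را که پروردگار جهانیان است.",
    num_beams=4,
    max_length=64,
)
```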
sudo-ai/zero123plus-v1.2
sudo-ai
"2024-02-01T11:30:40Z"
61,785
12
diffusers
[ "diffusers", "safetensors", "diffusers:Zero123PlusPipeline", "region:us" ]
null
"2023-12-11T03:26:33Z"
Entry not found
llamafactory/tiny-random-Llama-3
llamafactory
"2024-06-15T10:15:08Z"
61,771
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-07T17:30:09Z"
--- license: apache-2.0 library_name: transformers inference: false --- A tiny version of https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct
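Since the card gives no usage snippet, here is a minimal smoke-test sketch. The model is tiny and randomly initialized, so its output is meaningless; it is useful only for exercising code paths (e.g. in CI) quickly and cheaply.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "llamafactory/tiny-random-Llama-3"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

# Output is gibberish by design -- this only verifies the pipeline runs.
inputs = tokenizer("Hello", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```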
jameslahm/yolov10n
jameslahm
"2024-08-27T05:26:55Z"
61,706
8
yolov10
[ "yolov10", "safetensors", "object-detection", "computer-vision", "pytorch_model_hub_mixin", "dataset:detection-datasets/coco", "arxiv:2405.14458", "license:agpl-3.0", "region:us" ]
object-detection
"2024-06-01T10:36:36Z"
--- license: agpl-3.0 tags: - object-detection - computer-vision - yolov10 - pytorch_model_hub_mixin datasets: - detection-datasets/coco library_name: yolov10 inference: false --- ### Model Description [YOLOv10: Real-Time End-to-End Object Detection](https://arxiv.org/abs/2405.14458v1) - arXiv: https://arxiv.org/abs/2405.14458v1 - github: https://github.com/THU-MIG/yolov10 ### Installation ``` pip install git+https://github.com/THU-MIG/yolov10.git ``` ### Training and validation ```python from ultralytics import YOLOv10 model = YOLOv10.from_pretrained('jameslahm/yolov10n') # Training model.train(...) # after training, one can push to the hub model.push_to_hub("your-hf-username/yolov10-finetuned") # Validation model.val(...) ``` ### Inference Here's an end-to-end example showcasing inference on a cats image: ```python from ultralytics import YOLOv10 model = YOLOv10.from_pretrained('jameslahm/yolov10n') source = 'http://images.cocodataset.org/val2017/000000039769.jpg' model.predict(source=source, save=True) ``` which shows: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/628ece6054698ce61d1e7be3/tBwAsKcQA_96HCYQp7BRr.png) ### BibTeX Entry and Citation Info ``` @article{wang2024yolov10, title={YOLOv10: Real-Time End-to-End Object Detection}, author={Wang, Ao and Chen, Hui and Liu, Lihao and Chen, Kai and Lin, Zijia and Han, Jungong and Ding, Guiguang}, journal={arXiv preprint arXiv:2405.14458}, year={2024} } ```
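Building on the inference example above, detections can also be inspected programmatically rather than just saved to disk. The attribute names below (`boxes.xyxy`, `boxes.conf`, `boxes.cls`) follow the standard ultralytics `Results` API, which the YOLOv10 fork is expected to inherit; this is an assumption, not something the card documents.

```python
from ultralytics import YOLOv10

model = YOLOv10.from_pretrained('jameslahm/yolov10n')
results = model.predict(source='http://images.cocodataset.org/val2017/000000039769.jpg')

# Iterate over detected boxes; coordinates are xyxy in pixel space.
for box in results[0].boxes:
    print(box.xyxy.tolist(), float(box.conf), int(box.cls))
```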
dmis-lab/TinySapBERT-from-TinyPubMedBERT-v1.0
dmis-lab
"2023-03-17T06:06:16Z"
61,702
0
transformers
[ "transformers", "pytorch", "bert", "feature-extraction", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
"2022-11-11T14:26:28Z"
This model repository presents "TinySapBERT", a tiny-sized biomedical entity representation (language) model trained using the [official SapBERT code and instructions (Liu et al., NAACL 2021)](https://github.com/cambridgeltl/sapbert). We used our [TinyPubMedBERT](https://huggingface.co/dmis-lab/TinyPubMedBERT-v1.0), a tiny-sized LM, as the starting point for training with the SapBERT scheme.
<br>
cf) TinyPubMedBERT is a distilled [PubMedBERT (Gu et al., 2021)](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract), open-sourced along with the release of the KAZU (Korea University and AstraZeneca) framework.

* For details, please visit the [KAZU framework](https://github.com/AstraZeneca/KAZU) or see our paper entitled **Biomedical NER for the Enterprise with Distillated BERN2 and the Kazu Framework** (EMNLP 2022 industry track).
* For a demo of the KAZU framework, please visit http://kazu.korea.ac.kr

### Citation info
Joint-first authorship of **Richard Jackson** (AstraZeneca) and **WonJin Yoon** (Korea University).
<br>Please cite the simplified version below, or find the [full citation information here](https://aclanthology.org/2022.emnlp-industry.63.bib).
```
@inproceedings{YoonAndJackson2022BiomedicalNER,
    title="Biomedical {NER} for the Enterprise with Distillated {BERN}2 and the Kazu Framework",
    author="Yoon, Wonjin and Jackson, Richard and Ford, Elliot and Poroshin, Vladimir and Kang, Jaewoo",
    booktitle="Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, UAE",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.emnlp-industry.63",
    pages = "619--626",
}
```

The model used resources from the [SapBERT paper](https://aclanthology.org/2021.naacl-main.334.pdf). We thank the authors for making the resources publicly available!
```
Liu, Fangyu, et al. "Self-Alignment Pretraining for Biomedical Entity Representations." Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 2021.
```

### Contact Information
For help or issues using the code or model (NER module of KAZU) in this repository, please contact WonJin Yoon (wonjin.info (at) gmail.com) or submit a GitHub issue.
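Since the card describes entity representations without a usage snippet, here is a minimal encoding sketch. The `[CLS]`-vector pooling follows the convention of the SapBERT paper cited above; treat this as an assumption-laden sketch rather than official usage.

```python
import torch
from transformers import AutoModel, AutoTokenizer

repo = "dmis-lab/TinySapBERT-from-TinyPubMedBERT-v1.0"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModel.from_pretrained(repo)

# Encode two biomedical entity mentions and compare their [CLS] embeddings.
names = ["covid-19", "coronavirus infection"]
batch = tokenizer(names, padding=True, return_tensors="pt")
with torch.no_grad():
    cls_emb = model(**batch).last_hidden_state[:, 0]  # (2, hidden_size)

sim = torch.nn.functional.cosine_similarity(cls_emb[0], cls_emb[1], dim=0)
print(f"cosine similarity: {sim.item():.4f}")
```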
timm/eva_large_patch14_196.in22k_ft_in22k_in1k
timm
"2024-02-10T23:27:54Z"
61,698
1
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "dataset:imagenet-22k", "arxiv:2211.07636", "license:mit", "region:us" ]
image-classification
"2022-12-22T07:08:20Z"
---
license: mit
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
- imagenet-22k
---
# Model card for eva_large_patch14_196.in22k_ft_in22k_in1k

An EVA image classification model. Pretrained on ImageNet-22k with masked image modeling (using EVA-CLIP as a MIM teacher) and fine-tuned on ImageNet-22k then on ImageNet-1k by paper authors.

NOTE: `timm` checkpoints are float32 for consistency with other models. Original checkpoints are float16 or bfloat16 in some cases; see the originals if that's preferred.

## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 304.1
  - GMACs: 61.6
  - Activations (M): 63.5
  - Image size: 196 x 196
- **Papers:**
  - EVA: Exploring the Limits of Masked Visual Representation Learning at Scale: https://arxiv.org/abs/2211.07636
- **Pretrain Dataset:** ImageNet-22k
- **Dataset:** ImageNet-1k
- **Original:**
  - https://github.com/baaivision/EVA
  - https://huggingface.co/BAAI/EVA

## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('eva_large_patch14_196.in22k_ft_in22k_in1k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```

### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'eva_large_patch14_196.in22k_ft_in22k_in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 197, 1024) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```

## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
|model |top1 |top5 |param_count|img_size| |-----------------------------------------------|------|------|-----------|--------| |eva02_large_patch14_448.mim_m38m_ft_in22k_in1k |90.054|99.042|305.08 |448 | |eva02_large_patch14_448.mim_in22k_ft_in22k_in1k|89.946|99.01 |305.08 |448 | |eva_giant_patch14_560.m30m_ft_in22k_in1k |89.792|98.992|1014.45 |560 | |eva02_large_patch14_448.mim_in22k_ft_in1k |89.626|98.954|305.08 |448 | |eva02_large_patch14_448.mim_m38m_ft_in1k |89.57 |98.918|305.08 |448 | |eva_giant_patch14_336.m30m_ft_in22k_in1k |89.56 |98.956|1013.01 |336 | |eva_giant_patch14_336.clip_ft_in1k |89.466|98.82 |1013.01 |336 | |eva_large_patch14_336.in22k_ft_in22k_in1k |89.214|98.854|304.53 |336 | |eva_giant_patch14_224.clip_ft_in1k |88.882|98.678|1012.56 |224 | |eva02_base_patch14_448.mim_in22k_ft_in22k_in1k |88.692|98.722|87.12 |448 | |eva_large_patch14_336.in22k_ft_in1k |88.652|98.722|304.53 |336 | |eva_large_patch14_196.in22k_ft_in22k_in1k |88.592|98.656|304.14 |196 | |eva02_base_patch14_448.mim_in22k_ft_in1k |88.23 |98.564|87.12 |448 | |eva_large_patch14_196.in22k_ft_in1k |87.934|98.504|304.14 |196 | |eva02_small_patch14_336.mim_in22k_ft_in1k |85.74 |97.614|22.13 |336 | |eva02_tiny_patch14_336.mim_in22k_ft_in1k |80.658|95.524|5.76 |336 | ## Citation ```bibtex @article{EVA, title={EVA: Exploring the Limits of Masked Visual Representation Learning at Scale}, author={Fang, Yuxin and Wang, Wen and Xie, Binhui and Sun, Quan and Wu, Ledell and Wang, Xinggang and Huang, Tiejun and Wang, Xinlong and Cao, Yue}, journal={arXiv preprint arXiv:2211.07636}, year={2022} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
codellama/CodeLlama-7b-hf
codellama
"2024-04-12T14:17:26Z"
61,658
316
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "llama-2", "code", "arxiv:2308.12950", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2023-08-24T16:31:11Z"
--- language: - code pipeline_tag: text-generation tags: - llama-2 license: llama2 --- # **Code Llama** Code Llama is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the base 7B version in the Hugging Face Transformers format. This model is designed for general code synthesis and understanding. Links to other models can be found in the index at the bottom. > [!NOTE] > This is a non-official Code Llama repo. You can find the official Meta repository in the [Meta Llama organization](https://huggingface.co/meta-llama/CodeLlama-7b-hf). | | Base Model | Python | Instruct |
| --- | ----------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------- |
| 7B | [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) | [codellama/CodeLlama-7b-Python-hf](https://huggingface.co/codellama/CodeLlama-7b-Python-hf) | [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) |
| 13B | [codellama/CodeLlama-13b-hf](https://huggingface.co/codellama/CodeLlama-13b-hf) | [codellama/CodeLlama-13b-Python-hf](https://huggingface.co/codellama/CodeLlama-13b-Python-hf) | [codellama/CodeLlama-13b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf) |
| 34B | [codellama/CodeLlama-34b-hf](https://huggingface.co/codellama/CodeLlama-34b-hf) | [codellama/CodeLlama-34b-Python-hf](https://huggingface.co/codellama/CodeLlama-34b-Python-hf) | [codellama/CodeLlama-34b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-34b-Instruct-hf) |
| 70B | [codellama/CodeLlama-70b-hf](https://huggingface.co/codellama/CodeLlama-70b-hf) | [codellama/CodeLlama-70b-Python-hf](https://huggingface.co/codellama/CodeLlama-70b-Python-hf) | [codellama/CodeLlama-70b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-70b-Instruct-hf) | ## Model Use To use this model, please make sure to install `transformers` and `accelerate`: ```bash pip install transformers accelerate ``` Model capabilities: - [x] Code completion. - [x] Infilling. - [ ] Instructions / chat. - [ ] Python specialist. ```python from transformers import AutoTokenizer import transformers import torch model = "codellama/CodeLlama-7b-hf" tokenizer = AutoTokenizer.from_pretrained(model) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) sequences = pipeline( 'import socket\n\ndef ping_exponential_backoff(host: str):', do_sample=True, top_k=10, temperature=0.1, top_p=0.95, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id, max_length=200, ) for seq in sequences: print(f"Result: {seq['generated_text']}") ``` ## Model Details *Note: Use of this model is governed by the Meta license.* Meta developed and publicly released the Code Llama family of large language models (LLMs). **Model Developers** Meta **Variations** Code Llama comes in four model sizes and three variants: * Code Llama: base models designed for general code synthesis and understanding * Code Llama - Python: designed specifically for Python * Code Llama - Instruct: for instruction following and safer deployment All variants are available in sizes of 7B, 13B, 34B and 70B parameters.
**This repository contains the base model of 7B parameters.** **Input** Models input text only. **Output** Models generate text only. **Model Architecture** Code Llama is an auto-regressive language model that uses an optimized transformer architecture. **Model Dates** Code Llama and its variants have been trained between January 2023 and July 2023. **Status** This is a static model trained on an offline dataset. Future versions of Code Llama - Instruct will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) **Research Paper** More information can be found in the paper "[Code Llama: Open Foundation Models for Code](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/)" or its [arXiv page](https://arxiv.org/abs/2308.12950). ## Intended Use **Intended Use Cases** Code Llama and its variants are intended for commercial and research use in English and relevant programming languages. The base model Code Llama can be adapted for a variety of code synthesis and understanding tasks, Code Llama - Python is designed specifically to handle the Python programming language, and Code Llama - Instruct is intended to be safer to use for code assistant and generation applications. **Out-of-Scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Code Llama and its variants. ## Hardware and Software **Training Factors** We used custom training libraries. The training and fine-tuning of the released models have been performed on Meta’s Research Super Cluster. **Carbon Footprint** In aggregate, training all 9 Code Llama models required 400K GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 65.3 tCO2eq, 100% of which were offset by Meta’s sustainability program. ## Training Data All experiments reported here and the released models have been trained and fine-tuned using the same data as Llama 2 with different weights (see Section 2 and Table 1 in the [research paper](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/) for details). ## Evaluation Results See evaluations for the main models and detailed ablations in Section 3 and safety evaluations in Section 4 of the research paper. ## Ethical Considerations and Limitations Code Llama and its variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Code Llama’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. Therefore, before deploying any applications of Code Llama, developers should perform safety testing and tuning tailored to their specific applications of the model. Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide](https://ai.meta.com/llama/responsible-use-guide).
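As a usage addendum to the capabilities list above: the base model also supports infilling via the `<FILL_ME>` sentinel that the Hugging Face tokenizer for Code Llama recognizes. A minimal sketch (the prompt content and generation settings here are illustrative):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "codellama/CodeLlama-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# <FILL_ME> marks the span the model should fill in between the prefix and the suffix
prompt = '''def remove_non_ascii(s: str) -> str:
    """ <FILL_ME>
    return result
'''
input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"].to(model.device)
output = model.generate(input_ids, max_new_tokens=128)

# decode only the newly generated tokens and splice them back into the prompt
filling = tokenizer.decode(output[0][input_ids.shape[1]:], skip_special_tokens=True)
print(prompt.replace("<FILL_ME>", filling))
```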
microsoft/Phi-3-medium-128k-instruct
microsoft
"2024-08-20T19:58:08Z"
61,599
360
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "nlp", "code", "conversational", "custom_code", "multilingual", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-05-07T15:27:32Z"
--- license: mit license_link: https://huggingface.co/microsoft/Phi-3-medium-128k-instruct/resolve/main/LICENSE language: - multilingual pipeline_tag: text-generation tags: - nlp - code inference: parameters: temperature: 0.7 widget: - messages: - role: user content: Can you provide ways to eat combinations of bananas and dragonfruits? --- 🎉 **Phi-3.5**: [[mini-instruct]](https://huggingface.co/microsoft/Phi-3.5-mini-instruct); [[MoE-instruct]](https://huggingface.co/microsoft/Phi-3.5-MoE-instruct); [[vision-instruct]](https://huggingface.co/microsoft/Phi-3.5-vision-instruct) ## Model Summary The Phi-3-Medium-128K-Instruct is a 14B-parameter, lightweight, state-of-the-art open model trained with the Phi-3 datasets, which include both synthetic data and filtered publicly available website data, with a focus on high-quality and reasoning-dense properties. The model belongs to the Phi-3 family; the Medium version comes in two variants, [4k](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct), denoting the context length (in tokens) that it can support. The model has undergone a post-training process that incorporates both supervised fine-tuning and direct preference optimization for instruction following and safety. When assessed against benchmarks testing common sense, language understanding, math, code, long context and logical reasoning, Phi-3-Medium-128K-Instruct showcased robust, state-of-the-art performance among models of the same size and the next size up. Resources and Technical Documentation: + [Phi-3 Microsoft Blog](https://aka.ms/Phi-3Build2024) + [Phi-3 Technical Report](https://aka.ms/phi3-tech-report) + [Phi-3 on Azure AI Studio](https://aka.ms/phi3-azure-ai) + [Phi-3 Cookbook](https://github.com/microsoft/Phi-3CookBook) | | Short Context | Long Context |
| ------- | ------------- | ------------ |
| Mini | 4K [[HF]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-onnx) ; [[GGUF]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-gguf) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct-onnx)|
| Small | 8K [[HF]](https://huggingface.co/microsoft/Phi-3-small-8k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-small-8k-instruct-onnx-cuda) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-small-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-small-128k-instruct-onnx-cuda)|
| Medium | 4K [[HF]](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct-onnx-cuda) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct-onnx-cuda)|
| Vision | | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-vision-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-vision-128k-instruct-onnx-cuda)| ## Intended Uses **Primary use cases** The model is intended for broad commercial and research use in English.
The model is intended for general purpose AI systems and applications which require: 1) Memory/compute constrained environments 2) Latency bound scenarios 3) Strong reasoning (especially code, math and logic) Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features. **Use case considerations** Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case. Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under. ## How to Use Phi-3-Medium-128k-Instruct has been integrated into the development version (4.40.2) of `transformers`. Until the official version is released through `pip`, ensure that you are doing one of the following: * When loading the model, ensure that `trust_remote_code=True` is passed as an argument of the `from_pretrained()` function. * Update your local `transformers` to the development version: `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers`. The previous command is an alternative to cloning and installing from the source. The current `transformers` version can be verified with: `pip list | grep transformers`. Phi-3-Medium-128k-Instruct is also available in [Azure AI Studio](https://aka.ms/phi3-azure-ai). ### Tokenizer Phi-3-Medium-128k-Instruct supports a vocabulary size of up to `32064` tokens. The [tokenizer files](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct/blob/main/added_tokens.json) already provide placeholder tokens that can be used for downstream fine-tuning, but they can also be extended up to the model's vocabulary size. ### Chat Format Given the nature of the training data, the Phi-3-Medium-128k-Instruct model is best suited for prompts using the chat format as follows. You can provide the prompt as a question with a generic template as follows: ```markdown <|user|>\nQuestion <|end|>\n<|assistant|> ``` For example: ```markdown <|user|> How to explain Internet for a medieval knight?<|end|> <|assistant|> ``` where the model generates the text after `<|assistant|>`. In the case of a few-shot prompt, it can be formatted as follows: ```markdown <|user|> I am going to Paris, what should I see?<|end|> <|assistant|> Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris:\n\n1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city.\n2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa.\n3. Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows.\n\nThese are just a few of the many attractions that Paris has to offer.
With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world."<|end|> <|user|> What is so great about #1?<|end|> <|assistant|> ``` ### Sample inference code This code snippet shows how to quickly get started with running the model on a GPU: ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline torch.random.manual_seed(0) model_id = "microsoft/Phi-3-medium-128k-instruct" model = AutoModelForCausalLM.from_pretrained( model_id, device_map="cuda", torch_dtype="auto", trust_remote_code=True, ) tokenizer = AutoTokenizer.from_pretrained(model_id) messages = [ {"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"}, {"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."}, {"role": "user", "content": "What about solving the equation 2x + 3 = 7?"}, ] pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, ) generation_args = { "max_new_tokens": 500, "return_full_text": False, "temperature": 0.0, "do_sample": False, } output = pipe(messages, **generation_args) print(output[0]['generated_text']) ``` *Some applications/frameworks might not include a BOS token (`<s>`) at the start of the conversation. Please ensure that it is included since it provides more reliable results.* ## Responsible AI Considerations Like other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include: + Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English. + Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases. + Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case. + Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated. + Limited Scope for Code: The majority of Phi-3 training data is based in Python and uses common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses. Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.).
Important areas for consideration include: + Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques. + High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context. + Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG). + Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case. + Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations. ## Training ### Model * Architecture: Phi-3-Medium-128k-Instruct has 14B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with Supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidelines. * Inputs: Text. It is best suited for prompts using chat format. * Context length: 128k tokens * GPUs: 512 H100-80G * Training time: 42 days * Training data: 4.8T tokens * Outputs: Generated text in response to the input * Dates: Our models were trained between February and April 2024 * Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models. * Release dates: The model weight is released on May 21, 2024. ### Datasets Our training data includes a wide variety of sources, totaling 4.8 trillion tokens (including 10% multilingual), and is a combination of 1) Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code; 2) Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.); 3) High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness. We are focusing on the quality of data that could potentially improve the reasoning ability for the model, and we filter the publicly available documents to contain the correct level of knowledge. As an example, the result of a Premier League game on a particular day might be good training data for frontier models, but we need to remove such information to leave more model capacity for reasoning for the smaller models. More details about data can be found in the [Phi-3 Technical Report](https://aka.ms/phi3-tech-report).
## Benchmarks We report the results for Phi-3-Medium-128k-Instruct on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Mixtral-8x22b, Gemini-Pro, Command R+ 104B, Llama-3-70B-Instruct, GPT-3.5-Turbo-1106, and GPT-4-Turbo-1106(Chat). All the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation. As is now standard, we use few-shot prompts to evaluate the models, at temperature 0. The prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3. More specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model. The number of k–shot examples is listed per-benchmark. |Benchmark|Phi-3-Medium-128k-Instruct<br>14b|Command R+<br>104B|Mixtral<br>8x22B|Llama-3-70B-Instruct|GPT3.5-Turbo<br>version 1106|Gemini<br>Pro|GPT-4-Turbo<br>version 1106 (Chat)| |---------|-----------------------|--------|-------------|-------------------|-------------------|----------|------------------------| |AGI Eval<br>5-shot|49.7|50.1|54.0|56.9|48.4|49.0|59.6| |MMLU<br>5-shot|76.6|73.8|76.2|80.2|71.4|66.7|84.0| |BigBench Hard<br>3-shot|77.9|74.1|81.8|80.4|68.3|75.6|87.7| |ANLI<br>7-shot|57.3|63.4|65.2|68.3|58.1|64.2|71.7| |HellaSwag<br>5-shot|81.6|78.0|79.0|82.6|78.8|76.2|88.3| |ARC Challenge<br>10-shot|91.0|86.9|91.3|93.0|87.4|88.3|95.6| |ARC Easy<br>10-shot|97.6|95.7|96.9|98.2|96.3|96.1|98.8| |BoolQ<br>2-shot|86.5|86.1|82.7|89.1|79.1|86.4|91.3| |CommonsenseQA<br>10-shot|82.2|82.0|82.0|84.4|79.6|81.8|86.7| |MedQA<br>2-shot|67.6|59.2|67.9|78.5|63.4|58.2|83.7| |OpenBookQA<br>10-shot|87.2|86.8|88.6|91.8|86.0|86.4|93.4| |PIQA<br>5-shot|87.8|86.4|85.0|85.3|86.6|86.2|90.1| |Social IQA<br>5-shot|79.0|75.3|78.2|81.1|68.3|75.4|81.7| |TruthfulQA (MC2)<br>10-shot|74.3|57.8|67.4|81.9|67.7|72.6|85.2| |WinoGrande<br>5-shot|78.9|77.0|75.3|83.3|68.8|72.2|86.7| |TriviaQA<br>5-shot|73.9|82.8|84.5|78.5|85.8|80.2|73.3| |GSM8K Chain of Thought<br>8-shot|87.5|78.3|83.8|93.5|78.1|80.4|94.2| |HumanEval<br>0-shot|58.5|61.6|39.6|78.7|62.2|64.4|79.9| |MBPP<br>3-shot|73.8|68.9|70.7|81.3|77.8|73.2|86.7| |Average|77.3|75.0|76.3|82.5|74.3|75.4|85.2| We take a closer look at different categories across 80 public benchmark datasets at the table below: |Benchmark|Phi-3-Medium-128k-Instruct<br>14b|Command R+<br>104B|Mixtral<br>8x22B|Llama-3-70B-Instruct|GPT3.5-Turbo<br>version 1106|Gemini<br>Pro|GPT-4-Turbo<br>version 1106 (Chat)| |--------|------------------------|--------|-------------|-------------------|-------------------|----------|------------------------| | Popular aggregated benchmark | 72.3 | 69.9 | 73.4 | 76.3 | 67.0 | 67.5 | 80.5 | | Reasoning | 83.2 | 79.3 | 81.5 | 86.7 | 78.3 | 80.4 | 89.3 | | Language understanding | 75.3 | 75.7 | 78.7 | 77.9 | 70.4 | 75.3 | 81.6 | | Code generation | 64.2 | 68.6 | 60.0 | 69.3 | 70.4 | 66.7 | 76.1 | | Math | 52.9 | 45.3 | 52.5 | 59.7 | 52.8 | 50.9 | 67.1 | | Factual knowledge | 47.5 | 60.3 | 60.6 | 52.4 | 63.4 | 54.6 | 45.9 | | Multilingual | 62.2 | 67.8 | 69.8 | 62.0 | 67.0 | 73.4 | 78.2 | | Robustness | 70.2 | 57.9 | 65.5 | 78.7 | 69.3 | 69.7 | 84.6 | ## Software * [PyTorch](https://github.com/pytorch/pytorch) * [DeepSpeed](https://github.com/microsoft/DeepSpeed) * 
[Transformers](https://github.com/huggingface/transformers) * [Flash-Attention](https://github.com/HazyResearch/flash-attention) ## Hardware Note that by default, the Phi-3-Medium model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types: * NVIDIA A100 * NVIDIA A6000 * NVIDIA H100 If you want to run the model with optimized inference on GPU, CPU, or mobile, use the **ONNX** models [128k](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct-onnx-cuda) ## Cross Platform Support The ONNX Runtime ecosystem now supports Phi-3 Medium models across platforms and hardware. Optimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML GPU acceleration is supported for Windows desktop GPUs (AMD, Intel, and NVIDIA). Along with DML, ONNX Runtime provides cross-platform support for Phi-3 Medium across a range of CPU, GPU, and mobile devices. Here are some of the optimized configurations we have added: 1. ONNX models for int4 DML: Quantized to int4 via AWQ 2. ONNX model for fp16 CUDA 3. ONNX model for int4 CUDA: Quantized to int4 via RTN 4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN ## License The model is licensed under the [MIT license](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct/resolve/main/LICENSE). ## Trademarks This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties’ policies.
wangfuyun/AnimateLCM
wangfuyun
"2024-02-26T06:55:34Z"
61,594
279
diffusers
[ "diffusers", "safetensors", "text-to-video", "arxiv:2402.00769", "region:us" ]
text-to-video
"2024-02-03T05:59:46Z"
--- pipeline_tag: text-to-video --- # AnimateLCM for Fast Video Generation in 4 steps. [AnimateLCM: Accelerating the Animation of Personalized Diffusion Models and Adapters with Decoupled Consistency Learning](https://arxiv.org/abs/2402.00769) by Fu-Yun Wang et al. ## We also support fast image-to-video generation, please see [AnimateLCM-SVD-xt](https://huggingface.co/wangfuyun/AnimateLCM-SVD-xt) and [AnimateLCM-I2V](https://huggingface.co/wangfuyun/AnimateLCM-I2V). For more details, please refer to our [[paper](https://arxiv.org/abs/2402.00769)] | [[code](https://github.com/G-U-N/AnimateLCM)] | [[proj-page](https://animatelcm.github.io/)] | [[civitai](https://civitai.com/models/290375/animatelcm-fast-video-generation)]. <video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/63e9e92f20c109718713f5eb/KCwSoZCdxkkmtDg1LuXsP.mp4"></video> ## Using AnimateLCM with Diffusers ```python import torch from diffusers import AnimateDiffPipeline, LCMScheduler, MotionAdapter from diffusers.utils import export_to_gif adapter = MotionAdapter.from_pretrained("wangfuyun/AnimateLCM", torch_dtype=torch.float16) pipe = AnimateDiffPipeline.from_pretrained("emilianJR/epiCRealism", motion_adapter=adapter, torch_dtype=torch.float16) pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config, beta_schedule="linear") pipe.load_lora_weights("wangfuyun/AnimateLCM", weight_name="AnimateLCM_sd15_t2v_lora.safetensors", adapter_name="lcm-lora") pipe.set_adapters(["lcm-lora"], [0.8]) pipe.enable_vae_slicing() pipe.enable_model_cpu_offload() output = pipe( prompt="A space rocket with trails of smoke behind it launching into space from the desert, 4k, high resolution", negative_prompt="bad quality, worse quality, low resolution", num_frames=16, guidance_scale=2.0, num_inference_steps=6, generator=torch.Generator("cpu").manual_seed(0), ) frames = output.frames[0] export_to_gif(frames, "animatelcm.gif") ```
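If an MP4 is preferred over a GIF, recent versions of `diffusers` also provide an `export_to_video` helper; a minimal follow-up sketch (the fps value is illustrative):

```python
from diffusers.utils import export_to_video

# write the same frames out as an .mp4 instead of a .gif
export_to_video(frames, "animatelcm.mp4", fps=8)
```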
timm/vit_large_patch14_reg4_dinov2.lvd142m
timm
"2024-02-09T17:59:52Z"
61,328
2
timm
[ "timm", "pytorch", "safetensors", "image-feature-extraction", "arxiv:2309.16588", "arxiv:2304.07193", "arxiv:2010.11929", "license:apache-2.0", "region:us" ]
image-feature-extraction
"2023-10-30T04:52:17Z"
--- license: apache-2.0 library_name: timm tags: - image-feature-extraction - timm --- # Model card for vit_large_patch14_reg4_dinov2.lvd142m A Vision Transformer (ViT) image feature model with registers. Pretrained on LVD-142M with self-supervised DINOv2 method. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 304.4 - GMACs: 416.1 - Activations (M): 305.3 - Image size: 518 x 518 - **Papers:** - Vision Transformers Need Registers: https://arxiv.org/abs/2309.16588 - DINOv2: Learning Robust Visual Features without Supervision: https://arxiv.org/abs/2304.07193 - An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2 - **Original:** https://github.com/facebookresearch/dinov2 - **Pretrain Dataset:** LVD-142M ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm import torch img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('vit_large_patch14_reg4_dinov2.lvd142m', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'vit_large_patch14_reg4_dinov2.lvd142m', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 1374, 1024) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). ## Citation ```bibtex @article{darcet2023vision, title={Vision Transformers Need Registers}, author={Darcet, Timoth{\'e}e and Oquab, Maxime and Mairal, Julien and Bojanowski, Piotr}, journal={arXiv preprint arXiv:2309.16588}, year={2023} } ``` ```bibtex @misc{oquab2023dinov2, title={DINOv2: Learning Robust Visual Features without Supervision}, author={Oquab, Maxime and Darcet, Timothée and Moutakanni, Theo and Vo, Huy V.
and Szafraniec, Marc and Khalidov, Vasil and Fernandez, Pierre and Haziza, Daniel and Massa, Francisco and El-Nouby, Alaaeldin and Howes, Russell and Huang, Po-Yao and Xu, Hu and Sharma, Vasu and Li, Shang-Wen and Galuba, Wojciech and Rabbat, Mike and Assran, Mido and Ballas, Nicolas and Synnaeve, Gabriel and Misra, Ishan and Jegou, Herve and Mairal, Julien and Labatut, Patrick and Joulin, Armand and Bojanowski, Piotr}, journal={arXiv:2304.07193}, year={2023} } ``` ```bibtex @article{dosovitskiy2020vit, title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale}, author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil}, journal={ICLR}, year={2021} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
AdamCodd/distilbert-base-uncased-finetuned-sentiment-amazon
AdamCodd
"2023-11-10T17:35:29Z"
61,093
6
transformers
[ "transformers", "pytorch", "onnx", "safetensors", "distilbert", "text-classification", "dataset:amazon_polarity", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2023-10-06T23:02:11Z"
--- license: apache-2.0 datasets: - amazon_polarity base_model: distilbert-base-uncased model-index: - name: distilbert-base-uncased-finetuned-sentiment-amazon results: - task: type: text-classification name: Text Classification dataset: name: amazon_polarity type: sentiment args: default metrics: - type: accuracy value: 0.961 name: Accuracy - type: loss value: 0.116 name: Loss - type: f1 value: 0.960 name: F1 - task: type: text-classification name: Text Classification dataset: name: amazon_polarity type: amazon_polarity config: amazon_polarity split: test metrics: - type: accuracy value: 0.94112 name: Accuracy verified: true verifyToken: >- eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzlmMzdhYjNmN2U0NDBkM2U5ZDgwNzc3YjE1OGE4MWUxMDY1N2U0ODc0YzllODE5ODIyMzdkOWFhNzVjYmI5MyIsInZlcnNpb24iOjF9.3nlcLa4IpPQtklp7_U9XzC__Q_JVf_cWs6JVVII8trhX5zg_q9HEyQOQs4sRf6O-lIJg8zb3mgobZDJShuSJAQ - type: precision value: 0.9321570625232675 name: Precision verified: true verifyToken: >- eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZjI2MDY4NGNlYjhjMGMxODBiNTc2ZjM5YzY1NjkxNTU4MDA2ZDIyY2QyZjUyZmE4YWY0N2Y1ODU5YTc2ZDM0NiIsInZlcnNpb24iOjF9.egEikTa2UyHV6SAGkHJKaa8FRwGHoZmJRCmqUQaJqeF5yxkz2V-WeCHoWDrCXsHCbXEs8UhLlyo7Lr83BPfkBg - type: recall value: 0.95149 name: Recall verified: true verifyToken: >- eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiM2E3M2Y3MDU4ZTM2YjdlZjQ0NTY3NGYwMmQ3NTk5ZmZkZWUwZWZiZDZjNjk2ZWE5MmY4MmZiM2FmN2U2M2QyNCIsInZlcnNpb24iOjF9.4VNbiWRmSee4cxuIZ5m7bN30i4BpK7xtHQ1BF8AuFIXkWQgzOmGdX35bLhLGWW8KL3ClA4RDPVBKYCIrw0YUBw - type: auc value: 0.9849019044624999 name: AUC verified: true verifyToken: >- eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYTkwODk2ZTUwOTViNjBhYTU0ODk1MDA3MDY1NDkyZDc2YmRlNTQzNDE3YmE3YTVkYjNhN2JmMDAxZWQ0NjUxZSIsInZlcnNpb24iOjF9.YEr6OhqOL7QnqYqjUTQFMdkgU_uS1-vVnkJtn_-1UwSoX754UV_bL9S9KSH3DX4m5QFoRXdZxfeOocm1JbzaCA - type: f1 value: 0.9417243188138998 name: F1 verified: true verifyToken: >- eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzIyMmViNTQ3ZGU0M2I5ZmRjOGI1OWMwZGEwYmE5OGU5YTZlZTkzZjdkOTQ4YzJmOTc2MDliMDY4NDQ1NGRlNyIsInZlcnNpb24iOjF9.p05MGHTfHTAzp4u-qfiIn6Zmh5c3TW_uwjXWgbb982pL_oCILQb6jFXqhPpWXL321fPye7qaUVbGhcTJd8sdCA - type: loss value: 0.16342754662036896 name: loss verified: true verifyToken: >- eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNzgxMDc4M2IxYjhkNjRhZmYyNzY1MTNkNzhmYjk2NmU1NjFiOTk1NDIzNzI1ZGU3MDYyYjQ2YmQ1NTI2N2NhMyIsInZlcnNpb24iOjF9.Zuf0nzn8XdvwRChKtE9CwJ0pgpc6Zey6oTR3jRiSkvNY2sNbo2bvAgFimGzgGYkDvRvYkTCXzCyxdb27l3QnAg --- # distilbert-sentiment This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on a subset of the [amazon-polarity dataset](https://huggingface.co/datasets/amazon_polarity). <b>[Update 10/10/23]</b> The model has been retrained on a larger part of the dataset with an improvement on the loss, f1 score and accuracy. It achieves the following results on the evaluation set: - Loss: 0.116 - Accuracy: 0.961 - F1_score: 0.960 ## Model description This sentiment classifier has been trained on 360_000 samples for the training set, 40_000 samples for the validation set and 40_000 samples for the test set. 
## Intended uses & limitations ```python from transformers import pipeline # Create the pipeline sentiment_classifier = pipeline('text-classification', model='AdamCodd/distilbert-base-uncased-finetuned-sentiment-amazon') # Now you can use the pipeline to get the sentiment result = sentiment_classifier("This product doesn't fit me at all.") print(result) #[{'label': 'negative', 'score': 0.9994848966598511}] ``` ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 1270 - optimizer: AdamW with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 150 - num_epochs: 2 - weight_decay: 0.01 ### Training results (Previous results before retraining from the model evaluator) | key | value | | --- | ----- | | eval_accuracy | 0.94112 | | eval_auc | 0.9849 | | eval_f1_score | 0.9417 | | eval_precision | 0.9321 | | eval_recall | 0.95149 | ### Framework versions - Transformers 4.34.0 - Pytorch lightning 2.0.9 - Tokenizers 0.14.0 If you want to support me, you can [here](https://ko-fi.com/adamcodd).
mistralai/Mixtral-8x7B-v0.1
mistralai
"2024-07-24T14:02:01Z"
60,851
1,623
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "moe", "fr", "it", "de", "es", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2023-12-01T09:42:00Z"
--- license: apache-2.0 language: - fr - it - de - es - en tags: - moe extra_gated_description: If you want to learn more about how we process your personal data, please read our <a href="https://mistral.ai/fr/terms/">Privacy Policy</a>. --- # Model Card for Mixtral-8x7B The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts. The Mixtral-8x7B outperforms Llama 2 70B on most benchmarks we tested. For full details of this model please read our [release blog post](https://mistral.ai/news/mixtral-of-experts/). ## Warning This repo contains weights that are compatible with [vLLM](https://github.com/vllm-project/vllm) serving of the model as well as Hugging Face [transformers](https://github.com/huggingface/transformers) library. It is based on the original Mixtral [torrent release](magnet:?xt=urn:btih:5546272da9065eddeb6fcd7ffddeef5b75be79a7&dn=mixtral-8x7b-32kseqlen&tr=udp%3A%2F%2Fopentracker.i2p.rocks%3A6969%2Fannounce&tr=http%3A%2F%2Ftracker.openbittorrent.com%3A80%2Fannounce), but the file format and parameter names are different. Please note that the model cannot (yet) be instantiated with HF. ## Run the model ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "mistralai/Mixtral-8x7B-v0.1" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id) text = "Hello my name is" inputs = tokenizer(text, return_tensors="pt") outputs = model.generate(**inputs, max_new_tokens=20) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` By default, transformers will load the model in full precision. Therefore, you might be interested in further reducing the memory requirements to run the model through the optimizations offered in the HF ecosystem: ### In half-precision Note `float16` precision only works on GPU devices <details> <summary> Click to expand </summary> ```diff + import torch from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "mistralai/Mixtral-8x7B-v0.1" tokenizer = AutoTokenizer.from_pretrained(model_id) + model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to(0) text = "Hello my name is" + inputs = tokenizer(text, return_tensors="pt").to(0) outputs = model.generate(**inputs, max_new_tokens=20) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` </details> ### Lower precision (8-bit & 4-bit) using `bitsandbytes` <details> <summary> Click to expand </summary> ```diff + import torch from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "mistralai/Mixtral-8x7B-v0.1" tokenizer = AutoTokenizer.from_pretrained(model_id) + model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True) text = "Hello my name is" + inputs = tokenizer(text, return_tensors="pt").to(0) outputs = model.generate(**inputs, max_new_tokens=20) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` </details> ### Load the model with Flash Attention 2 <details> <summary> Click to expand </summary> ```diff + import torch from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "mistralai/Mixtral-8x7B-v0.1" tokenizer = AutoTokenizer.from_pretrained(model_id) + model = AutoModelForCausalLM.from_pretrained(model_id, use_flash_attention_2=True) text = "Hello my name is" + inputs = tokenizer(text, return_tensors="pt").to(0) outputs = model.generate(**inputs, max_new_tokens=20) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` </details> ## Notice
Mixtral-8x7B is a pretrained base model and therefore does not have any moderation mechanisms. # The Mistral AI Team Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Louis Ternon, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
Jean-Baptiste/roberta-large-ner-english
Jean-Baptiste
"2023-03-22T02:19:36Z"
60,826
68
transformers
[ "transformers", "pytorch", "tf", "onnx", "safetensors", "roberta", "token-classification", "en", "dataset:conll2003", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2022-03-02T23:29:04Z"
--- language: en datasets: - conll2003 widget: - text: "My name is jean-baptiste and I live in montreal" - text: "My name is clara and I live in berkeley, california." - text: "My name is wolfgang and I live in berlin" train-eval-index: - config: conll2003 task: token-classification task_id: entity_extraction splits: eval_split: validation col_mapping: tokens: tokens ner_tags: tags license: mit --- # roberta-large-ner-english: model fine-tuned from roberta-large for NER task ## Introduction [roberta-large-ner-english] is an English NER model that was fine-tuned from roberta-large on the conll2003 dataset. The model was validated on emails/chat data and outperformed other models on this type of data specifically. In particular, the model seems to work better on entities that don't start with an upper case letter. ## Training data Training data was classified as follows: Abbreviation|Description
-|-
O |Outside of a named entity
MISC |Miscellaneous entity
PER |Person’s name
ORG |Organization
LOC |Location In order to simplify, the prefix B- or I- from the original conll2003 was removed. I used the train and test datasets from the original conll2003 for training and the "validation" dataset for validation. This resulted in a dataset of size: Train | Validation
-|-
17494 | 3250 ## How to use roberta-large-ner-english with HuggingFace ##### Load roberta-large-ner-english and its sub-word tokenizer: ```python from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("Jean-Baptiste/roberta-large-ner-english") model = AutoModelForTokenClassification.from_pretrained("Jean-Baptiste/roberta-large-ner-english") ##### Process text sample (from wikipedia) from transformers import pipeline nlp = pipeline('ner', model=model, tokenizer=tokenizer, aggregation_strategy="simple") nlp("Apple was founded in 1976 by Steve Jobs, Steve Wozniak and Ronald Wayne to develop and sell Wozniak's Apple I personal computer") [{'entity_group': 'ORG', 'score': 0.99381506, 'word': ' Apple', 'start': 0, 'end': 5}, {'entity_group': 'PER', 'score': 0.99970853, 'word': ' Steve Jobs', 'start': 29, 'end': 39}, {'entity_group': 'PER', 'score': 0.99981767, 'word': ' Steve Wozniak', 'start': 41, 'end': 54}, {'entity_group': 'PER', 'score': 0.99956465, 'word': ' Ronald Wayne', 'start': 59, 'end': 71}, {'entity_group': 'PER', 'score': 0.9997918, 'word': ' Wozniak', 'start': 92, 'end': 99}, {'entity_group': 'MISC', 'score': 0.99956393, 'word': ' Apple I', 'start': 102, 'end': 109}] ``` ## Model performances Model performance computed on the conll2003 validation dataset (computed on token predictions): entity|precision|recall|f1
-|-|-|-
PER|0.9914|0.9927|0.9920
ORG|0.9627|0.9661|0.9644
LOC|0.9795|0.9862|0.9828
MISC|0.9292|0.9262|0.9277
Overall|0.9740|0.9766|0.9753 On a private dataset (email, chat, informal discussion), computed on word predictions: entity|precision|recall|f1
-|-|-|-
PER|0.8823|0.9116|0.8967
ORG|0.7694|0.7292|0.7487
LOC|0.8619|0.7768|0.8171 By comparison, on the same private dataset, Spacy (en_core_web_trf-3.2.0) gave: entity|precision|recall|f1
-|-|-|-
PER|0.9146|0.8287|0.8695
ORG|0.7655|0.6437|0.6993
LOC|0.8727|0.6180|0.7236 For those who could be interested, here is a short article on how I used the results of this model to train an LSTM model for signature detection in emails: https://medium.com/@jean-baptiste.polle/lstm-model-for-email-signature-detection-8e990384fefa
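Since this repository also ships ONNX weights (see the tags), inference through 🤗 Optimum's ONNX Runtime backend is another option. A minimal sketch, assuming a recent `optimum` release; `export=True` converts the PyTorch weights on the fly in case the hosted ONNX file is not picked up automatically:

```python
from optimum.onnxruntime import ORTModelForTokenClassification
from transformers import AutoTokenizer, pipeline

model_id = "Jean-Baptiste/roberta-large-ner-english"
model = ORTModelForTokenClassification.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# the ORT model drops into the regular transformers NER pipeline
nlp = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
print(nlp("My name is jean-baptiste and I live in montreal"))
```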
baichuan-inc/Baichuan2-13B-Chat
baichuan-inc
"2024-02-26T08:58:32Z"
60,740
416
transformers
[ "transformers", "pytorch", "baichuan", "text-generation", "custom_code", "en", "zh", "license:other", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2023-08-29T02:30:01Z"
--- language: - en - zh license: other tasks: - text-generation --- <!-- markdownlint-disable first-line-h1 --> <!-- markdownlint-disable html --> <div align="center"> <h1> Baichuan 2 </h1> </div> <div align="center"> <a href="https://github.com/baichuan-inc/Baichuan2" target="_blank">🦉GitHub</a> | <a href="https://github.com/baichuan-inc/Baichuan-7B/blob/main/media/wechat.jpeg?raw=true" target="_blank">💬WeChat</a> </div> <div align="center"> 百川API支持搜索增强和192K长窗口,新增百川搜索增强知识库、限时免费!<br> The Baichuan API supports search augmentation and a 192K context window, with a new search-augmented knowledge base, free for a limited time!<br> 🚀 <a href="https://www.baichuan-ai.com/" target="_blank">百川大模型在线对话平台</a> 已正式向公众开放 🎉<br> 🚀 The <a href="https://www.baichuan-ai.com/" target="_blank">Baichuan LLM online chat platform</a> is now officially open to the public 🎉 </div> # 目录/Table of Contents - [📖 模型介绍/Introduction](#Introduction) - [⚙️ 快速开始/Quick Start](#Start) - [📊 Benchmark评估/Benchmark Evaluation](#Benchmark) - [👥 社区与生态/Community](#Community) - [📜 声明与协议/Terms and Conditions](#Terms) # 更新/Update [2023.12.29] 🎉🎉🎉 我们发布了 **[Baichuan2-13B-Chat](https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat) v2** 版本。其中: - 大幅提升了模型的综合能力,特别是数学和逻辑推理、复杂指令跟随能力。 - 使用时需指定revision=v2.0,详细方法参考[快速开始](#Start) [2023.12.29] 🎉🎉🎉 We released **[Baichuan2-13B-Chat](https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat) v2**: - Greatly improved overall model capabilities, especially math, logical reasoning, and complex instruction following. - Specify revision=v2.0 when loading; see [Quick Start](#Start) for details. # <span id="Introduction">模型介绍/Introduction</span> Baichuan 2 是[百川智能]推出的新一代开源大语言模型,采用 **2.6 万亿** Tokens 的高质量语料训练,在权威的中文和英文 benchmark 上均取得同尺寸最好的效果。本次发布包含有 7B、13B 的 Base 和 Chat 版本,并提供了 Chat 版本的 4bits 量化,所有版本不仅对学术研究完全开放,开发者也仅需[邮件申请]并获得官方商用许可后,即可以免费商用。具体发布版本和下载见下表: Baichuan 2 is the new generation of large-scale open-source language models launched by [Baichuan Intelligence inc.](https://www.baichuan-ai.com/). It is trained on a high-quality corpus with 2.6 trillion tokens and has achieved the best performance for its size on authoritative Chinese and English benchmarks. This release includes 7B and 13B versions for both Base and Chat models, along with a 4bits quantized version for the Chat model. All versions are fully open to academic research, and developers can also use them for free in commercial applications after obtaining an official commercial license through [email request](mailto:opensource@baichuan-inc.com). The specific release versions and download links are listed in the table below: | | Base Model | Chat Model | 4bits Quantized Chat Model |
|:---:|:--------------------:|:--------------------:|:--------------------------:|
| 7B | [Baichuan2-7B-Base](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base) | [Baichuan2-7B-Chat](https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat) | [Baichuan2-7B-Chat-4bits](https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat-4bits) |
| 13B | [Baichuan2-13B-Base](https://huggingface.co/baichuan-inc/Baichuan2-13B-Base) | [Baichuan2-13B-Chat](https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat) | [Baichuan2-13B-Chat-4bits](https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat-4bits) | # <span id="Start">快速开始/Quick Start</span> 在Baichuan2系列模型中,我们为了加快推理速度使用了Pytorch2.0加入的新功能F.scaled_dot_product_attention,因此模型需要在Pytorch2.0环境下运行。 In the Baichuan 2 series models, we have utilized the new feature `F.scaled_dot_product_attention` introduced in PyTorch 2.0 to accelerate inference speed. Therefore, the model needs to be run in a PyTorch 2.0 environment.
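Before loading the model, you can confirm that the running environment actually provides this kernel; a minimal sketch:

```python
import torch
import torch.nn.functional as F

# PyTorch >= 2.0 exposes the fused attention kernel Baichuan 2 relies on
assert hasattr(F, "scaled_dot_product_attention"), (
    f"PyTorch {torch.__version__} lacks F.scaled_dot_product_attention; please upgrade to >= 2.0"
)
```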
```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer from transformers.generation.utils import GenerationConfig tokenizer = AutoTokenizer.from_pretrained("baichuan-inc/Baichuan2-13B-Chat", revision="v2.0", use_fast=False, trust_remote_code=True) model = AutoModelForCausalLM.from_pretrained("baichuan-inc/Baichuan2-13B-Chat", revision="v2.0", device_map="auto", torch_dtype=torch.bfloat16, trust_remote_code=True) model.generation_config = GenerationConfig.from_pretrained("baichuan-inc/Baichuan2-13B-Chat", revision="v2.0") messages = [] messages.append({"role": "user", "content": "解释一下“温故而知新”"}) response = model.chat(tokenizer, messages) print(response) "温故而知新"是一句中国古代的成语,出自《论语·为政》篇。这句话的意思是:通过回顾过去,我们可以发现新的知识和理解。换句话说,学习历史和经验可以让我们更好地理解现在和未来。 这句话鼓励我们在学习和生活中不断地回顾和反思过去的经验,从而获得新的启示和成长。通过重温旧的知识和经历,我们可以发现新的观点和理解,从而更好地应对不断变化的世界和挑战。 ``` **注意:如需使用老版本,需手动指定revision参数,设置revision=v1.0** **Note: To use the older version, manually specify the revision argument and set revision=v1.0.** # <span id="Benchmark">Benchmark 结果/Benchmark Evaluation</span> 我们在[通用]、[法律]、[医疗]、[数学]、[代码]和[多语言翻译]六个领域的中英文权威数据集上对模型进行了广泛测试,更多详细测评结果可查看[GitHub]。 We have extensively tested the model on authoritative Chinese-English datasets across six domains: [General](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#general-domain), [Legal](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#law-and-medicine), [Medical](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#law-and-medicine), [Mathematics](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#mathematics-and-code), [Code](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#mathematics-and-code), and [Multilingual Translation](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#multilingual-translation). For more detailed evaluation results, please refer to [GitHub](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md).
### 7B Model Results | | **C-Eval** | **MMLU** | **CMMLU** | **Gaokao** | **AGIEval** | **BBH** | |:-----------------------:|:----------:|:--------:|:---------:|:----------:|:-----------:|:-------:| | | 5-shot | 5-shot | 5-shot | 5-shot | 5-shot | 3-shot | | **GPT-4** | 68.40 | 83.93 | 70.33 | 66.15 | 63.27 | 75.12 | | **GPT-3.5 Turbo** | 51.10 | 68.54 | 54.06 | 47.07 | 46.13 | 61.59 | | **LLaMA-7B** | 27.10 | 35.10 | 26.75 | 27.81 | 28.17 | 32.38 | | **LLaMA2-7B** | 28.90 | 45.73 | 31.38 | 25.97 | 26.53 | 39.16 | | **MPT-7B** | 27.15 | 27.93 | 26.00 | 26.54 | 24.83 | 35.20 | | **Falcon-7B** | 24.23 | 26.03 | 25.66 | 24.24 | 24.10 | 28.77 | | **ChatGLM2-6B** | 50.20 | 45.90 | 49.00 | 49.44 | 45.28 | 31.65 | | **[Baichuan-7B]** | 42.80 | 42.30 | 44.02 | 36.34 | 34.44 | 32.48 | | **[Baichuan2-7B-Base]** | 54.00 | 54.16 | 57.07 | 47.47 | 42.73 | 41.56 | ### 13B Model Results | | **C-Eval** | **MMLU** | **CMMLU** | **Gaokao** | **AGIEval** | **BBH** | |:---------------------------:|:----------:|:--------:|:---------:|:----------:|:-----------:|:-------:| | | 5-shot | 5-shot | 5-shot | 5-shot | 5-shot | 3-shot | | **GPT-4** | 68.40 | 83.93 | 70.33 | 66.15 | 63.27 | 75.12 | | **GPT-3.5 Turbo** | 51.10 | 68.54 | 54.06 | 47.07 | 46.13 | 61.59 | | **LLaMA-13B** | 28.50 | 46.30 | 31.15 | 28.23 | 28.22 | 37.89 | | **LLaMA2-13B** | 35.80 | 55.09 | 37.99 | 30.83 | 32.29 | 46.98 | | **Vicuna-13B** | 32.80 | 52.00 | 36.28 | 30.11 | 31.55 | 43.04 | | **Chinese-Alpaca-Plus-13B** | 38.80 | 43.90 | 33.43 | 34.78 | 35.46 | 28.94 | | **XVERSE-13B** | 53.70 | 55.21 | 58.44 | 44.69 | 42.54 | 38.06 | | **[Baichuan-13B-Base]** | 52.40 | 51.60 | 55.30 | 49.69 | 43.20 | 43.01 | | **[Baichuan2-13B-Base]** | 58.10 | 59.17 | 61.97 | 54.33 | 48.17 | 48.78 | ## 训练过程模型/Training Dynamics 除了训练了 2.6 万亿 Tokens 的 [Baichuan2-7B-Base](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base) 模型,我们还提供了在此之前的另外 11 个中间过程的模型(分别对应训练了约 0.2 ~ 2.4 万亿 Tokens)供社区研究使用 ([训练过程checkpoint下载](https://huggingface.co/baichuan-inc/Baichuan2-7B-Intermediate-Checkpoints))。下图给出了这些 checkpoints 在 C-Eval、MMLU、CMMLU 三个 benchmark 上的效果变化: In addition to the [Baichuan2-7B-Base](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base) model trained on 2.6 trillion tokens, we also offer 11 additional intermediate-stage models for community research, corresponding to training on approximately 0.2 to 2.4 trillion tokens each ([Intermediate Checkpoints Download](https://huggingface.co/baichuan-inc/Baichuan2-7B-Intermediate-Checkpoints)). The graph below shows the performance changes of these checkpoints on three benchmarks: C-Eval, MMLU, and CMMLU. ![checkpoint](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base/resolve/main/checkpoints.jpeg) # <span id="Community">社区与生态/Community</span> ## Intel 酷睿 Ultra 平台运行百川大模型 使用酷睿™/至强® 可扩展处理器或配合锐炫™ GPU等进行部署[Baichuan2-7B-Chat],[Baichuan2-13B-Chat]模型,推荐使用 BigDL-LLM([CPU], [GPU])以发挥更好推理性能。 详细支持信息可参考[中文操作手册](https://github.com/intel-analytics/bigdl-llm-tutorial/tree/main/Chinese_Version),包括用notebook支持,[加载,优化,保存方法](https://github.com/intel-analytics/bigdl-llm-tutorial/blob/main/Chinese_Version/ch_3_AppDev_Basic/3_BasicApp.ipynb)等。 When deploy on Core™/Xeon® Scalable Processors or with Arc™ GPU, BigDL-LLM ([CPU], [GPU]) is recommended to take full advantage of better inference performance. 
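A minimal sketch of the BigDL-LLM loading path referenced above (assuming the `bigdl-llm` package is installed; the 4-bit flag mirrors the linked CPU/GPU examples):

```python
from bigdl.llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

model_path = "baichuan-inc/Baichuan2-13B-Chat"
# load_in_4bit=True applies BigDL-LLM's low-bit optimization for Intel hardware
model = AutoModelForCausalLM.from_pretrained(model_path, load_in_4bit=True, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
```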
# <span id="Terms">声明与协议/Terms and Conditions</span> ## 声明 我们在此声明,我们的开发团队并未基于 Baichuan 2 模型开发任何应用,无论是在 iOS、Android、网页或任何其他平台。我们强烈呼吁所有使用者,不要利用 Baichuan 2 模型进行任何危害国家社会安全或违法的活动。另外,我们也要求使用者不要将 Baichuan 2 模型用于未经适当安全审查和备案的互联网服务。我们希望所有的使用者都能遵守这个原则,确保科技的发展能在规范和合法的环境下进行。 我们已经尽我们所能,来确保模型训练过程中使用的数据的合规性。然而,尽管我们已经做出了巨大的努力,但由于模型和数据的复杂性,仍有可能存在一些无法预见的问题。因此,如果由于使用 Baichuan 2 开源模型而导致的任何问题,包括但不限于数据安全问题、公共舆论风险,或模型被误导、滥用、传播或不当利用所带来的任何风险和问题,我们将不承担任何责任。 We hereby declare that our team has not developed any applications based on Baichuan 2 models, not on iOS, Android, the web, or any other platform. We strongly call on all users not to use Baichuan 2 models for any activities that harm national / social security or violate the law. Also, we ask users not to use Baichuan 2 models for Internet services that have not undergone appropriate security reviews and filings. We hope that all users can abide by this principle and ensure that the development of technology proceeds in a regulated and legal environment. We have done our best to ensure the compliance of the data used in the model training process. However, despite our considerable efforts, there may still be some unforeseeable issues due to the complexity of the model and data. Therefore, if any problems arise due to the use of Baichuan 2 open-source models, including but not limited to data security issues, public opinion risks, or any risks and problems brought about by the model being misled, abused, spread or improperly exploited, we will not assume any responsibility. ## 协议 社区使用 Baichuan 2 模型需要遵循 [Apache 2.0](https://github.com/baichuan-inc/Baichuan2/blob/main/LICENSE) 和[《Baichuan 2 模型社区许可协议》](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base/resolve/main/Baichuan%202%E6%A8%A1%E5%9E%8B%E7%A4%BE%E5%8C%BA%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf)。Baichuan 2 模型支持商业用途,如果您计划将 Baichuan 2 模型或其衍生品用于商业目的,请您确认您的主体符合以下情况: 1. 您或您的关联方的服务或产品的日均用户活跃量(DAU)低于100万。 2. 您或您的关联方不是软件服务提供商、云服务提供商。 3. 您或您的关联方不存在将授予您的商用许可,未经百川许可二次授权给其他第三方的可能。 在符合以上条件的前提下,您需要通过以下联系邮箱 opensource@baichuan-inc.com ,提交《Baichuan 2 模型社区许可协议》要求的申请材料。审核通过后,百川将特此授予您一个非排他性、全球性、不可转让、不可再许可、可撤销的商用版权许可。 The community usage of Baichuan 2 model requires adherence to [Apache 2.0](https://github.com/baichuan-inc/Baichuan2/blob/main/LICENSE) and [Community License for Baichuan2 Model](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base/resolve/main/Baichuan%202%E6%A8%A1%E5%9E%8B%E7%A4%BE%E5%8C%BA%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf). The Baichuan 2 model supports commercial use. If you plan to use the Baichuan 2 model or its derivatives for commercial purposes, please ensure that your entity meets the following conditions: 1. The Daily Active Users (DAU) of your or your affiliate's service or product is less than 1 million. 2. Neither you nor your affiliates are software service providers or cloud service providers. 3. There is no possibility for you or your affiliates to grant the commercial license given to you, to reauthorize it to other third parties without Baichuan's permission. Upon meeting the above conditions, you need to submit the application materials required by the Baichuan 2 Model Community License Agreement via the following contact email: opensource@baichuan-inc.com. Once approved, Baichuan will hereby grant you a non-exclusive, global, non-transferable, non-sublicensable, revocable commercial copyright license. 
[GitHub]:https://github.com/baichuan-inc/Baichuan2
[Baichuan2]:https://github.com/baichuan-inc/Baichuan2
[Baichuan-7B]:https://huggingface.co/baichuan-inc/Baichuan-7B
[Baichuan2-7B-Base]:https://huggingface.co/baichuan-inc/Baichuan2-7B-Base
[Baichuan2-7B-Chat]:https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat
[Baichuan2-7B-Chat-4bits]:https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat-4bits
[Baichuan-13B-Base]:https://huggingface.co/baichuan-inc/Baichuan-13B-Base
[Baichuan2-13B-Base]:https://huggingface.co/baichuan-inc/Baichuan2-13B-Base
[Baichuan2-13B-Chat]:https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat
[Baichuan2-13B-Chat-4bits]:https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat-4bits
[通用]:https://github.com/baichuan-inc/Baichuan2#%E9%80%9A%E7%94%A8%E9%A2%86%E5%9F%9F
[法律]:https://github.com/baichuan-inc/Baichuan2#%E6%B3%95%E5%BE%8B%E5%8C%BB%E7%96%97
[医疗]:https://github.com/baichuan-inc/Baichuan2#%E6%B3%95%E5%BE%8B%E5%8C%BB%E7%96%97
[数学]:https://github.com/baichuan-inc/Baichuan2#%E6%95%B0%E5%AD%A6%E4%BB%A3%E7%A0%81
[代码]:https://github.com/baichuan-inc/Baichuan2#%E6%95%B0%E5%AD%A6%E4%BB%A3%E7%A0%81
[多语言翻译]:https://github.com/baichuan-inc/Baichuan2#%E5%A4%9A%E8%AF%AD%E8%A8%80%E7%BF%BB%E8%AF%91
[《Baichuan 2 模型社区许可协议》]:https://huggingface.co/baichuan-inc/Baichuan2-7B-Base/blob/main/Baichuan%202%E6%A8%A1%E5%9E%8B%E7%A4%BE%E5%8C%BA%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf
[邮件申请]: mailto:opensource@baichuan-inc.com
[Email]: mailto:opensource@baichuan-inc.com
[opensource@baichuan-inc.com]: mailto:opensource@baichuan-inc.com
[训练过程checkpoint下载]: https://huggingface.co/baichuan-inc/Baichuan2-7B-Intermediate-Checkpoints
[百川智能]: https://www.baichuan-ai.com
[CPU]: https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/CPU/HF-Transformers-AutoModels/Model/baichuan2
[GPU]: https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/GPU/HF-Transformers-AutoModels/Model/baichuan2
distil-whisper/distil-large-v2
distil-whisper
"2024-03-21T19:32:46Z"
60,613
500
transformers
[ "transformers", "pytorch", "jax", "tensorboard", "onnx", "safetensors", "whisper", "automatic-speech-recognition", "audio", "transformers.js", "en", "arxiv:2311.00430", "arxiv:2210.13352", "license:mit", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2023-10-24T15:48:32Z"
---
language:
- en
tags:
- audio
- automatic-speech-recognition
- transformers.js
widget:
- example_title: LibriSpeech sample 1
  src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: LibriSpeech sample 2
  src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
pipeline_tag: automatic-speech-recognition
license: mit
library_name: transformers
---

# Distil-Whisper: distil-large-v2

Distil-Whisper was proposed in the paper [Robust Knowledge Distillation via Large-Scale Pseudo Labelling](https://arxiv.org/abs/2311.00430). It is a distilled version of the Whisper model that is **6 times faster**, 49% smaller, and performs **within 1% WER** on out-of-distribution evaluation sets.

This is the repository for distil-large-v2, a distilled variant of [Whisper large-v2](https://huggingface.co/openai/whisper-large-v2).

| Model | Params / M | Rel. Latency ↑ | Short-Form WER ↓ | Long-Form WER ↓ |
|----------------------------------------------------------------------------|------------|----------------|------------------|-----------------|
| [large-v3](https://huggingface.co/openai/whisper-large-v3) | 1550 | 1.0 | **8.4** | 11.0 |
| [large-v2](https://huggingface.co/openai/whisper-large-v2) | 1550 | 1.0 | 9.1 | 11.7 |
| | | | | |
| [distil-large-v3](https://huggingface.co/distil-whisper/distil-large-v3) | 756 | 6.3 | 9.7 | **10.8** |
| [distil-large-v2](https://huggingface.co/distil-whisper/distil-large-v2) | 756 | 5.8 | 10.1 | 11.6 |
| [distil-medium.en](https://huggingface.co/distil-whisper/distil-medium.en) | 394 | **6.8** | 11.1 | 12.4 |
| [distil-small.en](https://huggingface.co/distil-whisper/distil-small.en) | **166** | 5.6 | 12.1 | 12.8 |

<div class="course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400">
  <p><b>Update:</b> following the release of OpenAI's Whisper large-v3, an updated <a href="https://huggingface.co/distil-whisper/distil-large-v3"> distil-large-v3</a> model was published. This <a href="https://huggingface.co/distil-whisper/distil-large-v3"> distil-large-v3</a> model surpasses the performance of the distil-large-v2 model, with no architecture changes and better support for sequential long-form generation. Thus, it is recommended that the <a href="https://huggingface.co/distil-whisper/distil-large-v3"> distil-large-v3</a> model is used in place of the large-v2 model. </p>
</div>

**Note:** Distil-Whisper is currently only available for English speech recognition. We are working with the community to distill Whisper on other languages. If you are interested in distilling Whisper in your language, check out the provided [training code](https://github.com/huggingface/distil-whisper/tree/main/training). We will update the [Distil-Whisper repository](https://github.com/huggingface/distil-whisper/) with multilingual checkpoints when ready!

## Usage

Distil-Whisper is supported in Hugging Face 🤗 Transformers from version 4.35 onwards. To run the model, first install the latest version of the Transformers library.
For this example, we'll also install 🤗 Datasets to load a toy audio dataset from the Hugging Face Hub:

```bash
pip install --upgrade pip
pip install --upgrade transformers accelerate datasets[audio]
```

### Short-Form Transcription

The model can be used with the [`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline) class to transcribe short-form audio files (< 30 seconds) as follows:

```python
import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline
from datasets import load_dataset

device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

model_id = "distil-whisper/distil-large-v2"

model = AutoModelForSpeechSeq2Seq.from_pretrained(
    model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
model.to(device)

processor = AutoProcessor.from_pretrained(model_id)

pipe = pipeline(
    "automatic-speech-recognition",
    model=model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
    max_new_tokens=128,
    torch_dtype=torch_dtype,
    device=device,
)

dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
sample = dataset[0]["audio"]

result = pipe(sample)
print(result["text"])
```

To transcribe a local audio file, simply pass the path to your audio file when you call the pipeline:

```diff
- result = pipe(sample)
+ result = pipe("audio.mp3")
```

### Long-Form Transcription

Distil-Whisper uses a chunked algorithm to transcribe long-form audio files (> 30 seconds). In practice, this chunked long-form algorithm is 9x faster than the sequential algorithm proposed by OpenAI in the Whisper paper (see Table 7 of the [Distil-Whisper paper](https://arxiv.org/abs/2311.00430)).

To enable chunking, pass the `chunk_length_s` parameter to the `pipeline`. For Distil-Whisper, a chunk length of 15 seconds is optimal. To activate batching, pass the argument `batch_size`:

```python
import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline
from datasets import load_dataset

device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

model_id = "distil-whisper/distil-large-v2"

model = AutoModelForSpeechSeq2Seq.from_pretrained(
    model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
model.to(device)

processor = AutoProcessor.from_pretrained(model_id)

pipe = pipeline(
    "automatic-speech-recognition",
    model=model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
    max_new_tokens=128,
    chunk_length_s=15,
    batch_size=16,
    torch_dtype=torch_dtype,
    device=device,
)

dataset = load_dataset("distil-whisper/librispeech_long", "clean", split="validation")
sample = dataset[0]["audio"]

result = pipe(sample)
print(result["text"])
```

<!---
**Tip:** The pipeline can also be used to transcribe an audio file from a remote URL, for example:

```python
result = pipe("https://huggingface.co/datasets/sanchit-gandhi/librispeech_long/resolve/main/audio.wav")
```
--->

### Speculative Decoding

Distil-Whisper can be used as an assistant model to Whisper for [speculative decoding](https://huggingface.co/blog/whisper-speculative-decoding). Speculative decoding mathematically ensures the exact same outputs as Whisper are obtained while being 2 times faster.
This makes it the perfect drop-in replacement for existing Whisper pipelines, since the same outputs are guaranteed.

In the following code snippet, we load the assistant Distil-Whisper model alongside the main Whisper pipeline. We then specify it as the "assistant model" for generation:

```python
from transformers import pipeline, AutoModelForCausalLM, AutoModelForSpeechSeq2Seq, AutoProcessor
import torch
from datasets import load_dataset

device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

assistant_model_id = "distil-whisper/distil-large-v2"

assistant_model = AutoModelForCausalLM.from_pretrained(
    assistant_model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
assistant_model.to(device)

model_id = "openai/whisper-large-v2"

model = AutoModelForSpeechSeq2Seq.from_pretrained(
    model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
model.to(device)

processor = AutoProcessor.from_pretrained(model_id)

pipe = pipeline(
    "automatic-speech-recognition",
    model=model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
    max_new_tokens=128,
    generate_kwargs={"assistant_model": assistant_model},
    torch_dtype=torch_dtype,
    device=device,
)

dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
sample = dataset[0]["audio"]

result = pipe(sample)
print(result["text"])
```

## Additional Speed & Memory Improvements

You can apply additional speed and memory improvements to Distil-Whisper, which we cover in the following sections.

### Flash Attention

We recommend using [Flash-Attention 2](https://huggingface.co/docs/transformers/main/en/perf_infer_gpu_one#flashattention-2) if your GPU allows for it. To do so, you first need to install [Flash Attention](https://github.com/Dao-AILab/flash-attention):

```
pip install flash-attn --no-build-isolation
```

Then all you have to do is pass `use_flash_attention_2=True` to `from_pretrained`:

```diff
- model = AutoModelForSpeechSeq2Seq.from_pretrained(model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True)
+ model = AutoModelForSpeechSeq2Seq.from_pretrained(model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True, use_flash_attention_2=True)
```

### Torch Scaled Dot-Product Attention (SDPA)

If your GPU does not support Flash Attention, we recommend making use of [BetterTransformers](https://huggingface.co/docs/transformers/main/en/perf_infer_gpu_one#bettertransformer).
To do so, you first need to install optimum:

```
pip install --upgrade optimum
```

And then convert your model to a "BetterTransformer" model before using it:

```diff
model = AutoModelForSpeechSeq2Seq.from_pretrained(model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True)
+ model = model.to_bettertransformer()
```

### Running Distil-Whisper in `openai-whisper`

To use the model in the original Whisper format, first ensure you have the [`openai-whisper`](https://pypi.org/project/openai-whisper/) package installed:

```bash
pip install --upgrade openai-whisper
```

The following code snippet demonstrates how to transcribe a sample file from the LibriSpeech dataset loaded using 🤗 Datasets:

```python
import torch
from datasets import load_dataset
from huggingface_hub import hf_hub_download
from whisper import load_model, transcribe

distil_large_v2 = hf_hub_download(repo_id="distil-whisper/distil-large-v2", filename="original-model.bin")
model = load_model(distil_large_v2)

dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
sample = dataset[0]["audio"]["array"]
sample = torch.from_numpy(sample).float()

pred_out = transcribe(model, audio=sample)
print(pred_out["text"])
```

To transcribe a local audio file, simply pass the path to the audio file as the `audio` argument to transcribe:

```python
pred_out = transcribe(model, audio="audio.mp3")
```

### Whisper.cpp

Distil-Whisper can be run from the [Whisper.cpp](https://github.com/ggerganov/whisper.cpp) repository with the original sequential long-form transcription algorithm. In a [provisional benchmark](https://github.com/ggerganov/whisper.cpp/pull/1424#issuecomment-1793513399) on Mac M1, `distil-large-v2` is 2x faster than `large-v2`, while performing to within 0.1% WER over long-form audio.

Note that future releases of Distil-Whisper will target faster CPU inference more! By distilling smaller encoders, we aim to achieve similar speed-ups to what we obtain on GPU.

Steps for getting started:
1. Clone the Whisper.cpp repository:
```
git clone https://github.com/ggerganov/whisper.cpp.git
cd whisper.cpp
```
2. Download the ggml weights for `distil-large-v2` from the Hugging Face Hub:

```bash
python -c "from huggingface_hub import hf_hub_download; hf_hub_download(repo_id='distil-whisper/distil-large-v2', filename='ggml-large-32-2.en.bin', local_dir='./models')"
```

Note that if you do not have the `huggingface_hub` package installed, you can also download the weights with `wget`:

```bash
wget https://huggingface.co/distil-whisper/distil-large-v2/resolve/main/ggml-large-32-2.en.bin -P ./models
```

3. Run inference using the provided sample audio:

```bash
make -j && ./main -m models/ggml-large-32-2.en.bin -f samples/jfk.wav
```

### Transformers.js

Distil-Whisper can also be run in JavaScript with [Transformers.js](https://huggingface.co/docs/transformers.js):

```js
import { pipeline } from '@xenova/transformers';

let transcriber = await pipeline('automatic-speech-recognition', 'distil-whisper/distil-large-v2');

let url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/jfk.wav';
let output = await transcriber(url);
// { text: " And so, my fellow Americans, ask not what your country can do for you. Ask what you can do for your country." }
```

See the [docs](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.AutomaticSpeechRecognitionPipeline) for more information.
*Note:* Due to the large model size, we recommend running this model server-side with [Node.js](https://huggingface.co/docs/transformers.js/guides/node-audio-processing) (instead of in-browser).

### Candle

Through an integration with Hugging Face [Candle](https://github.com/huggingface/candle/tree/main) 🕯️, Distil-Whisper is now available in the Rust library 🦀. Benefit from:

* Optimised CPU backend with optional MKL support for x86 and Accelerate for Macs
* CUDA backend for efficiently running on GPUs, multiple GPU distribution via NCCL
* WASM support: run Distil-Whisper in a browser

Steps for getting started:
1. Install [`candle-core`](https://github.com/huggingface/candle/tree/main/candle-core) as explained [here](https://huggingface.github.io/candle/guide/installation.html)
2. Clone the `candle` repository locally:
```
git clone https://github.com/huggingface/candle.git
```
3. Enter the example directory for [Whisper](https://github.com/huggingface/candle/tree/main/candle-examples/examples/whisper):
```
cd candle/candle-examples/examples/whisper
```
4. Run an example:
```
cargo run --example whisper --release -- --model distil-large-v2
```
5. To specify your own audio file, add the `--input` flag:
```
cargo run --example whisper --release -- --model distil-large-v2 --input audio.wav
```

### 8bit & 4bit Quantization

Coming soon ...

## Model Details

Distil-Whisper inherits the encoder-decoder architecture from Whisper. The encoder maps a sequence of speech vector inputs to a sequence of hidden-state vectors. The decoder auto-regressively predicts text tokens, conditional on all previous tokens and the encoder hidden-states. Consequently, the encoder is only run forward once, whereas the decoder is run as many times as the number of tokens generated. In practice, this means the decoder accounts for over 90% of total inference time. Thus, to optimise for latency, the focus should be on minimising the inference time of the decoder.

To distill the Whisper model, we reduce the number of decoder layers while keeping the encoder fixed. The encoder (shown in green) is entirely copied from the teacher to the student and frozen during training. The student's decoder consists of only two decoder layers, which are initialised from the first and last decoder layer of the teacher (shown in red). All other decoder layers of the teacher are discarded. The model is then trained on a weighted sum of the KL divergence and pseudo-label loss terms.

<p align="center">
  <img src="https://huggingface.co/datasets/distil-whisper/figures/resolve/main/architecture.png?raw=true" width="600"/>
</p>

## Evaluation

The following code snippet demonstrates how to evaluate the Distil-Whisper model on the LibriSpeech validation.clean dataset with [streaming mode](https://huggingface.co/blog/audio-datasets#streaming-mode-the-silver-bullet), meaning no audio data has to be downloaded to your local device.
First, we need to install the required packages, including 🤗 Datasets to stream and load the audio data, and 🤗 Evaluate to perform the WER calculation: ```bash pip install --upgrade pip pip install --upgrade transformers datasets[audio] evaluate jiwer ``` Evaluation can then be run end-to-end with the following example: ```python from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor from transformers.models.whisper.english_normalizer import EnglishTextNormalizer from datasets import load_dataset from evaluate import load import torch from tqdm import tqdm # define our torch configuration device = "cuda:0" if torch.cuda.is_available() else "cpu" torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32 model_id = "distil-whisper/distil-large-v2" # load the model + processor model = AutoModelForSpeechSeq2Seq.from_pretrained(model_id, torch_dtype=torch_dtype, use_safetensors=True, low_cpu_mem_usage=True) model = model.to(device) processor = AutoProcessor.from_pretrained(model_id) # load the dataset with streaming mode dataset = load_dataset("librispeech_asr", "clean", split="validation", streaming=True) # define the evaluation metric wer_metric = load("wer") normalizer = EnglishTextNormalizer(processor.tokenizer.english_spelling_normalizer) def inference(batch): # 1. Pre-process the audio data to log-mel spectrogram inputs audio = [sample["array"] for sample in batch["audio"]] input_features = processor(audio, sampling_rate=batch["audio"][0]["sampling_rate"], return_tensors="pt").input_features input_features = input_features.to(device, dtype=torch_dtype) # 2. Auto-regressively generate the predicted token ids pred_ids = model.generate(input_features, max_new_tokens=128, language="en", task="transcribe") # 3. Decode the token ids to the final transcription batch["transcription"] = processor.batch_decode(pred_ids, skip_special_tokens=True) batch["reference"] = batch["text"] return batch dataset = dataset.map(function=inference, batched=True, batch_size=16) all_transcriptions = [] all_references = [] # iterate over the dataset and run inference for i, result in tqdm(enumerate(dataset), desc="Evaluating..."): all_transcriptions.append(result["transcription"]) all_references.append(result["reference"]) # normalize predictions and references all_transcriptions = [normalizer(transcription) for transcription in all_transcriptions] all_references = [normalizer(reference) for reference in all_references] # compute the WER metric wer = 100 * wer_metric.compute(predictions=all_transcriptions, references=all_references) print(wer) ``` **Print Output:** ``` 2.983685535968466 ``` ## Intended Use Distil-Whisper is intended to be a drop-in replacement for Whisper on English speech recognition. In particular, it achieves comparable WER results over out-of-distribution test data, while being 6x faster over both short and long-form audio. 
## Data

Distil-Whisper is trained on 22,000 hours of audio data from 9 open-source, permissively licensed speech datasets on the Hugging Face Hub:

| Dataset | Size / h | Speakers | Domain | Licence |
|-----------------------------------------------------------------------------------------|----------|----------|-----------------------------|-----------------|
| [People's Speech](https://huggingface.co/datasets/MLCommons/peoples_speech) | 12,000 | unknown | Internet Archive | CC-BY-SA-4.0 |
| [Common Voice 13](https://huggingface.co/datasets/mozilla-foundation/common_voice_13_0) | 3,000 | unknown | Narrated Wikipedia | CC0-1.0 |
| [GigaSpeech](https://huggingface.co/datasets/speechcolab/gigaspeech) | 2,500 | unknown | Audiobook, podcast, YouTube | apache-2.0 |
| Fisher | 1,960 | 11,900 | Telephone conversations | LDC |
| [LibriSpeech](https://huggingface.co/datasets/librispeech_asr) | 960 | 2,480 | Audiobooks | CC-BY-4.0 |
| [VoxPopuli](https://huggingface.co/datasets/facebook/voxpopuli) | 540 | 1,310 | European Parliament | CC0 |
| [TED-LIUM](https://huggingface.co/datasets/LIUM/tedlium) | 450 | 2,030 | TED talks | CC-BY-NC-ND 3.0 |
| SwitchBoard | 260 | 540 | Telephone conversations | LDC |
| [AMI](https://huggingface.co/datasets/edinburghcstr/ami) | 100 | unknown | Meetings | CC-BY-4.0 |
||||||
| **Total** | 21,770 | 18,260+ | | |

The combined dataset spans 10 distinct domains and over 50k speakers. The diversity of this dataset is crucial to ensuring the distilled model is robust to audio distributions and noise.

The audio data is then pseudo-labelled using the Whisper large-v2 model: we use Whisper to generate predictions for all the audio in our training set and use these as the target labels during training. Using pseudo-labels ensures that the transcriptions are consistently formatted across datasets and provides sequence-level distillation signal during training.

## WER Filter

The Whisper pseudo-label predictions are subject to mis-transcriptions and hallucinations. To ensure we only train on accurate pseudo-labels, we employ a simple WER heuristic during training. First, we normalise the Whisper pseudo-labels and the ground truth labels provided by each dataset. We then compute the WER between these labels. If the WER exceeds a specified threshold, we discard the training example. Otherwise, we keep it for training. A schematic sketch of this filter is shown below, after the Results section.

Section 9.2 of the [Distil-Whisper paper](https://arxiv.org/abs/2311.00430) demonstrates the effectiveness of this filter for improving downstream performance of the distilled model. We also partially attribute Distil-Whisper's robustness to hallucinations to this filter.

## Training

The model was trained for 80,000 optimisation steps (or eight epochs). The Tensorboard training logs can be found under: https://huggingface.co/distil-whisper/distil-large-v2/tensorboard?params=scalars#frame

## Results

The distilled model performs to within 1% WER of Whisper on out-of-distribution (OOD) short-form audio, and outperforms Whisper by 0.1% on OOD long-form audio. This performance gain is attributed to lower hallucinations.

For a detailed per-dataset breakdown of the evaluation results, refer to Tables 16 and 17 of the [Distil-Whisper paper](https://arxiv.org/abs/2311.00430).

Distil-Whisper is also evaluated on the [ESB benchmark](https://arxiv.org/abs/2210.13352) datasets as part of the [OpenASR leaderboard](https://huggingface.co/spaces/hf-audio/open_asr_leaderboard), where it performs to within 0.2% WER of Whisper.
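To make the WER filter above concrete, here is a schematic sketch of what such a filter could look like. The threshold value is illustrative only; it is not the value used in the paper.

```python
# Schematic sketch of the WER-based pseudo-label filter described above.
# The threshold is illustrative; see Section 9.2 of the paper for details.
from transformers import AutoProcessor
from transformers.models.whisper.english_normalizer import EnglishTextNormalizer
from evaluate import load

processor = AutoProcessor.from_pretrained("distil-whisper/distil-large-v2")
normalizer = EnglishTextNormalizer(processor.tokenizer.english_spelling_normalizer)
wer_metric = load("wer")

def keep_example(pseudo_label: str, ground_truth: str, threshold: float = 10.0) -> bool:
    """Keep a training example only if the Whisper pseudo-label is close to the reference."""
    pred, ref = normalizer(pseudo_label), normalizer(ground_truth)
    if not ref:
        return False  # empty reference after normalisation: nothing to compare against
    wer = 100 * wer_metric.compute(predictions=[pred], references=[ref])
    return wer <= threshold
```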
## Reproducing Distil-Whisper Training and evaluation code to reproduce Distil-Whisper is available under the Distil-Whisper repository: https://github.com/huggingface/distil-whisper/tree/main/training ## License Distil-Whisper inherits the [MIT license](https://github.com/huggingface/distil-whisper/blob/main/LICENSE) from OpenAI's Whisper model. ## Citation If you use this model, please consider citing the [Distil-Whisper paper](https://arxiv.org/abs/2311.00430): ``` @misc{gandhi2023distilwhisper, title={Distil-Whisper: Robust Knowledge Distillation via Large-Scale Pseudo Labelling}, author={Sanchit Gandhi and Patrick von Platen and Alexander M. Rush}, year={2023}, eprint={2311.00430}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ## Acknowledgements * OpenAI for the Whisper [model](https://huggingface.co/openai/whisper-large-v2) and [original codebase](https://github.com/openai/whisper) * Hugging Face 🤗 [Transformers](https://github.com/huggingface/transformers) for the model integration * Google's [TPU Research Cloud (TRC)](https://sites.research.google/trc/about/) programme for Cloud TPU v4s * [`@rsonavane`](https://huggingface.co/rsonavane/distil-whisper-large-v2-8-ls) for releasing an early iteration of Distil-Whisper on the LibriSpeech dataset
nubby/blessed-sdxl-vae-fp16-fix
nubby
"2024-04-06T08:42:08Z"
60,607
6
diffusers
[ "diffusers", "safetensors", "license:openrail++", "region:us" ]
null
"2024-02-16T20:01:54Z"
---
license: openrail++
---

These VAEs are modified versions of [madebyollin](https://huggingface.co/madebyollin)'s [sdxl-vae-fp16-fix](https://huggingface.co/madebyollin/sdxl-vae-fp16-fix). These VAEs should not produce the "NaN in VAE" error, even when used in half precision. They have been modified using ideas from the [VAE-BlessUp script](https://github.com/sALTaccount/VAE-BlessUp) to produce higher-contrast and lower-brightness images than the original version.

## The recommended version is [sdxl-vae-fp16fix-blessed.safetensors](https://huggingface.co/nubby/blessed-sdxl-vae-fp16-fix/blob/main/sdxl_vae-fp16fix-blessed.safetensors)

For most SDXL models, you should probably just use the non-blessed [sdxl-vae-fp16-fix](https://huggingface.co/madebyollin/sdxl-vae-fp16-fix). I made these mostly for fun, but found that slightly increasing contrast and decreasing brightness actually improved the outputs on the model I was testing. You may find one of them to be beneficial for PonyDiffusionV6-XL and other models based on it.

Best - [sdxl-vae-fp16fix-blessed.safetensors](https://huggingface.co/nubby/blessed-sdxl-vae-fp16-fix/blob/main/sdxl_vae-fp16fix-blessed.safetensors) = 1.1 contrast multiplier/0.7 brightness multiplier

Good - [sdxl_vae-fp16fix-c-1.1-b-0.5.safetensors](https://huggingface.co/nubby/blessed-sdxl-vae-fp16-fix/blob/main/sdxl_vae-fp16fix-c-1.1-b-0.5.safetensors) = 1.1 contrast multiplier/0.5 brightness multiplier

High Contrast - [sdxl_vae-fp16fix-c-1.2-b-0.7.safetensors](https://huggingface.co/nubby/blessed-sdxl-vae-fp16-fix/blob/main/sdxl_vae-fp16fix-c-1.2-b-0.7.safetensors) = 1.2 contrast multiplier/0.7 brightness multiplier

Very High Contrast - [sdxl_vae-fp16fix-c-1.2-b-0.5.safetensors](https://huggingface.co/nubby/blessed-sdxl-vae-fp16-fix/blob/main/sdxl_vae-fp16fix-c-1.2-b-0.5.safetensors) = 1.2 contrast multiplier/0.5 brightness multiplier

Untested:

[sdxl_vae-fp16fix-c-0.9.safetensors](https://huggingface.co/nubby/blessed-sdxl-vae-fp16-fix/blob/main/sdxl_vae-fp16fix-c-0.9.safetensors) = 0.9 contrast multiplier

[sdxl_vae-fp16fix-c-0.9-b-0.9.safetensors](https://huggingface.co/nubby/blessed-sdxl-vae-fp16-fix/blob/main/sdxl_vae-fp16fix-c-0.9-b-0.9.safetensors) = 0.9 contrast multiplier/0.9 brightness multiplier

[sdxl_vae-fp16fix-c-0.8.safetensors](https://huggingface.co/nubby/blessed-sdxl-vae-fp16-fix/blob/main/sdxl_vae-fp16fix-c-0.8.safetensors) = 0.8 contrast multiplier

[sdxl_vae-fp16fix-c-0.8-b-0.9.safetensors](https://huggingface.co/nubby/blessed-sdxl-vae-fp16-fix/blob/main/sdxl_vae-fp16fix-c-0.8-b-0.9.safetensors) = 0.8 contrast multiplier/0.9 brightness multiplier

[sdxl_vae-fp16fix-c-0.8-b-0.8.safetensors](https://huggingface.co/nubby/blessed-sdxl-vae-fp16-fix/blob/main/sdxl_vae-fp16fix-c-0.8-b-0.8.safetensors) = 0.8 contrast multiplier/0.8 brightness multiplier

## Example images (made using AutismMix_confetti):
![](./Examples/ComfyUI_temp_ldfob_00001_.png)

Thank you Neggles for the script used to make them!
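To use one of these VAEs with 🤗 Diffusers, something like the following should work. This is a minimal sketch, not an official example: it assumes `AutoencoderKL.from_single_file` accepts a Hub URL to the `.safetensors` file (you can also download the file and pass a local path), and it uses the SDXL base checkpoint as a stand-in for whichever model you actually run.

```python
# Hedged sketch: swapping the blessed VAE into an SDXL pipeline with diffusers.
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# Load the standalone VAE from its safetensors file (URL or local path).
vae = AutoencoderKL.from_single_file(
    "https://huggingface.co/nubby/blessed-sdxl-vae-fp16-fix/blob/main/sdxl_vae-fp16fix-blessed.safetensors",
    torch_dtype=torch.float16,
)

# Attach it to an SDXL pipeline; substitute your preferred checkpoint here.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae,
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("a cat wearing a wizard hat").images[0]
image.save("out.png")
```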
MoritzLaurer/mDeBERTa-v3-base-xnli-multilingual-nli-2mil7
MoritzLaurer
"2024-04-11T13:49:19Z"
60,569
267
transformers
[ "transformers", "pytorch", "onnx", "safetensors", "deberta-v2", "text-classification", "zero-shot-classification", "nli", "multilingual", "zh", "ja", "ar", "ko", "de", "fr", "es", "pt", "hi", "id", "it", "tr", "ru", "bn", "ur", "mr", "ta", "vi", "fa", "pl", "uk", "nl", "sv", "he", "sw", "ps", "dataset:MoritzLaurer/multilingual-NLI-26lang-2mil7", "dataset:xnli", "dataset:multi_nli", "dataset:facebook/anli", "dataset:fever", "dataset:lingnli", "dataset:alisawuffles/WANLI", "arxiv:2111.09543", "arxiv:2104.07179", "arxiv:1809.05053", "arxiv:1911.02116", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
zero-shot-classification
"2022-08-22T16:59:35Z"
---
language:
- multilingual
- zh
- ja
- ar
- ko
- de
- fr
- es
- pt
- hi
- id
- it
- tr
- ru
- bn
- ur
- mr
- ta
- vi
- fa
- pl
- uk
- nl
- sv
- he
- sw
- ps
license: mit
tags:
- zero-shot-classification
- text-classification
- nli
- pytorch
datasets:
- MoritzLaurer/multilingual-NLI-26lang-2mil7
- xnli
- multi_nli
- facebook/anli
- fever
- lingnli
- alisawuffles/WANLI
metrics:
- accuracy
pipeline_tag: zero-shot-classification
widget:
- text: Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU
  candidate_labels: politics, economy, entertainment, environment
model-index:
- name: DeBERTa-v3-base-xnli-multilingual-nli-2mil7
  results:
  - task:
      type: text-classification
      name: Natural Language Inference
    dataset:
      name: MultiNLI-matched
      type: multi_nli
      split: validation_matched
    metrics:
    - type: accuracy
      value: 0.857
      verified: false
  - task:
      type: text-classification
      name: Natural Language Inference
    dataset:
      name: MultiNLI-mismatched
      type: multi_nli
      split: validation_mismatched
    metrics:
    - type: accuracy
      value: 0.856
      verified: false
  - task:
      type: text-classification
      name: Natural Language Inference
    dataset:
      name: ANLI-all
      type: anli
      split: test_r1+test_r2+test_r3
    metrics:
    - type: accuracy
      value: 0.537
      verified: false
  - task:
      type: text-classification
      name: Natural Language Inference
    dataset:
      name: ANLI-r3
      type: anli
      split: test_r3
    metrics:
    - type: accuracy
      value: 0.497
      verified: false
  - task:
      type: text-classification
      name: Natural Language Inference
    dataset:
      name: WANLI
      type: alisawuffles/WANLI
      split: test
    metrics:
    - type: accuracy
      value: 0.732
      verified: false
  - task:
      type: text-classification
      name: Natural Language Inference
    dataset:
      name: LingNLI
      type: lingnli
      split: test
    metrics:
    - type: accuracy
      value: 0.788
      verified: false
  - task:
      type: text-classification
      name: Natural Language Inference
    dataset:
      name: fever-nli
      type: fever-nli
      split: test
    metrics:
    - type: accuracy
      value: 0.761
      verified: false
---

# Model card for mDeBERTa-v3-base-xnli-multilingual-nli-2mil7

## Model description

This multilingual model can perform natural language inference (NLI) on 100 languages and is therefore also suitable for multilingual zero-shot classification. The underlying mDeBERTa-v3-base model was pre-trained by Microsoft on the [CC100 multilingual dataset](https://huggingface.co/datasets/cc100) with 100 languages. The model was then fine-tuned on the [XNLI dataset](https://huggingface.co/datasets/xnli) and on the [multilingual-NLI-26lang-2mil7 dataset](https://huggingface.co/datasets/MoritzLaurer/multilingual-NLI-26lang-2mil7). Together, these datasets contain more than 2.7 million hypothesis-premise pairs in 27 languages spoken by more than 4 billion people.

As of December 2021, mDeBERTa-v3-base is the best performing multilingual base-sized transformer model introduced by Microsoft in [this paper](https://arxiv.org/pdf/2111.09543.pdf).
### How to use the model

#### Simple zero-shot classification pipeline

```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification", model="MoritzLaurer/mDeBERTa-v3-base-xnli-multilingual-nli-2mil7")

sequence_to_classify = "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU"
candidate_labels = ["politics", "economy", "entertainment", "environment"]
output = classifier(sequence_to_classify, candidate_labels, multi_label=False)
print(output)
```

#### NLI use-case

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")

model_name = "MoritzLaurer/mDeBERTa-v3-base-xnli-multilingual-nli-2mil7"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.to(device)  # move the model to the same device as the inputs

premise = "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU"
hypothesis = "Emmanuel Macron is the President of France"

input = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
output = model(input["input_ids"].to(device))  # device = "cuda:0" or "cpu"
prediction = torch.softmax(output["logits"][0], -1).tolist()
label_names = ["entailment", "neutral", "contradiction"]
prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
print(prediction)
```

### Training data

This model was trained on the [multilingual-nli-26lang-2mil7 dataset](https://huggingface.co/datasets/MoritzLaurer/multilingual-NLI-26lang-2mil7) and the [XNLI](https://huggingface.co/datasets/xnli) validation dataset.

The multilingual-nli-26lang-2mil7 dataset contains 2 730 000 NLI hypothesis-premise pairs in 26 languages spoken by more than 4 billion people. The dataset contains 105 000 text pairs per language. It is based on the English datasets [MultiNLI](https://huggingface.co/datasets/multi_nli), [Fever-NLI](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md), [ANLI](https://huggingface.co/datasets/anli), [LingNLI](https://arxiv.org/pdf/2104.07179.pdf) and [WANLI](https://huggingface.co/datasets/alisawuffles/WANLI) and was created using the latest open-source machine translation models. The languages in the dataset are: ['ar', 'bn', 'de', 'es', 'fa', 'fr', 'he', 'hi', 'id', 'it', 'ja', 'ko', 'mr', 'nl', 'pl', 'ps', 'pt', 'ru', 'sv', 'sw', 'ta', 'tr', 'uk', 'ur', 'vi', 'zh'] (see [ISO language codes](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes)). For more details, see the [datasheet](XXX). In addition, a sample of 105 000 text pairs was also added for English following the same sampling method as the other languages, leading to 27 languages.

Moreover, for each language a random set of 10% of the hypothesis-premise pairs was added where an English hypothesis was paired with the premise in the other language (and the same for English premises and other-language hypotheses). This mix of languages in the text pairs should enable users to formulate a hypothesis in English for a target text in another language.

The [XNLI](https://huggingface.co/datasets/xnli) validation set consists of 2490 professionally translated texts from English to 14 other languages (37350 texts in total) (see [this paper](https://arxiv.org/pdf/1809.05053.pdf)).
Note that XNLI also contains a training set of 14 machine translated versions of the MultiNLI dataset for 14 languages, but this data was excluded due to quality issues with the machine translations from 2018.

Note that for evaluation purposes, three languages were excluded from the XNLI training data and only included in the test data: ["bg","el","th"]. This was done in order to test the performance of the model on languages it has not seen during NLI fine-tuning on 27 languages, but only during pre-training on 100 languages - see evaluation metrics below.

The total training dataset had a size of 3 287 280 hypothesis-premise pairs.

### Training procedure

mDeBERTa-v3-base-xnli-multilingual-nli-2mil7 was trained using the Hugging Face trainer with the following hyperparameters.

```
training_args = TrainingArguments(
    num_train_epochs=3,              # total number of training epochs
    learning_rate=2e-05,
    per_device_train_batch_size=32,  # batch size per device during training
    gradient_accumulation_steps=2,   # doubles the effective batch size
    warmup_ratio=0.06,               # ratio of warmup steps for the learning rate scheduler
    weight_decay=0.01,               # strength of weight decay
    fp16=False
)
```

### Eval results

The model was evaluated on the XNLI test set in 15 languages (5010 texts per language, 75150 in total) and the English test sets of [MultiNLI](https://huggingface.co/datasets/multi_nli), [Fever-NLI](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md), [ANLI](https://huggingface.co/datasets/anli), [LingNLI](https://arxiv.org/pdf/2104.07179.pdf) and [WANLI](https://huggingface.co/datasets/alisawuffles/WANLI). Note that multilingual NLI models are capable of classifying NLI texts without receiving NLI training data in the specific language (cross-lingual transfer). This means that the model is also able to do NLI on the other 73 languages mDeBERTa was pre-trained on, but performance is most likely lower than for those languages seen during NLI fine-tuning. The performance on the languages ["bg","el","th"] in the table below is a good indication of this cross-lingual transfer, as these languages were not included in the training data.

|XNLI subsets|ar|bg|de|el|en|es|fr|hi|ru|sw|th|tr|ur|vi|zh|
| :---: |:---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
|Accuracy|0.794|0.822|0.824|0.809|0.871|0.832|0.823|0.769|0.803|0.746|0.786|0.792|0.744|0.793|0.803|
|Speed (text/sec, A100-GPU)|1344.0|1355.0|1472.0|1149.0|1697.0|1446.0|1278.0|1115.0|1380.0|1463.0|1713.0|1594.0|1189.0|877.0|1887.0|

|English Datasets|mnli_test_m|mnli_test_mm|anli_test|anli_test_r3|fever_test|ling_test|wanli_test|
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
|Accuracy|0.857|0.856|0.537|0.497|0.761|0.788|0.732|
|Speed (text/sec, A100-GPU)|1000.0|1009.0|794.0|672.0|374.0|1177.0|1468.0|

Also note that if other multilingual models on the model hub claim performance of around 90% on languages other than English, the authors have most likely made a mistake during testing, since none of the latest papers shows a multilingual average performance of more than a few points above 80% on XNLI (see [here](https://arxiv.org/pdf/2111.09543.pdf) or [here](https://arxiv.org/pdf/1911.02116.pdf)).

## Limitations and bias

Please consult the original DeBERTa-V3 paper and literature on different NLI datasets for potential biases.
Moreover, note that the multilingual-nli-26lang-2mil7 dataset was created using machine translation, which reduces the quality of the data for a complex task like NLI. You can inspect the data via the Hugging Face [dataset viewer](https://huggingface.co/datasets/MoritzLaurer/multilingual-NLI-26lang-2mil7) for languages you are interested in. Note that grammatical errors introduced by machine translation are less of an issue for zero-shot classification, for which grammar is less important.

## Citation

If this model or dataset is useful for you, please cite the following article:

```
@article{laurer_less_2022,
	title = {Less {Annotating}, {More} {Classifying} – {Addressing} the {Data} {Scarcity} {Issue} of {Supervised} {Machine} {Learning} with {Deep} {Transfer} {Learning} and {BERT} - {NLI}},
	url = {https://osf.io/74b8k},
	language = {en-us},
	urldate = {2022-07-28},
	journal = {Preprint},
	author = {Laurer, Moritz and Atteveldt, Wouter van and Casas, Andreu Salleras and Welbers, Kasper},
	month = jun,
	year = {2022},
	note = {Publisher: Open Science Framework},
}
```

## Ideas for cooperation or questions?

For updates on new models and datasets, follow me on [Twitter](https://twitter.com/MoritzLaurer). If you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or on [LinkedIn](https://www.linkedin.com/in/moritz-laurer/).

## Debugging and issues

Note that DeBERTa-v3 was released in late 2021 and older versions of HF Transformers seem to have issues running the model (e.g. resulting in an issue with the tokenizer). Using Transformers 4.13 or higher might solve some issues.

Note that mDeBERTa currently does not support FP16, see here: https://github.com/microsoft/DeBERTa/issues/77
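As a practical consequence of the FP16 limitation, keep the weights in float32 when loading the model. A minimal sketch follows; the bfloat16 alternative mentioned in the comment is an untested assumption on our part, not something confirmed by the DeBERTa authors.

```python
# Keep mDeBERTa in float32; FP16 is not supported (see the issue linked above).
# bfloat16 on recent GPUs *may* work, but that is an untested assumption.
import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "MoritzLaurer/mDeBERTa-v3-base-xnli-multilingual-nli-2mil7",
    torch_dtype=torch.float32,
)
```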
xinsir/controlnet-union-sdxl-1.0
xinsir
"2024-07-30T09:01:56Z"
60,377
965
diffusers
[ "diffusers", "safetensors", "Text-to-Image", "ControlNet", "Diffusers", "Stable Diffusion", "text-to-image", "license:apache-2.0", "region:us" ]
text-to-image
"2024-07-07T07:41:40Z"
---
license: apache-2.0
tags:
- Text-to-Image
- ControlNet
- Diffusers
- Stable Diffusion
pipeline_tag: text-to-image
---

# **ControlNet++: All-in-one ControlNet for image generation and editing!**

## **The ProMax model has been released!! 12 control types + 5 advanced editing features, just try it!!!**

![images_display](./images/masonry.webp)

## Network Architecture

![images](./images/ControlNet++.png)

## Advantages of the model

- Uses bucket training (as in NovelAI) and can generate high-resolution images of any aspect ratio
- Uses a large amount of high-quality data (over 10,000,000 images); the dataset covers a wide diversity of situations
- Uses re-captioned prompts as in DALL-E 3, with detailed descriptions generated by CogVLM, giving good prompt-following ability
- Uses many useful tricks during training, including but not limited to data augmentation, multiple losses, and multi-resolution training
- Uses almost the same number of parameters as the original ControlNet, with no obvious increase in network parameters or computation
- Supports 10+ control conditions, with no obvious performance drop on any single condition compared with training each condition independently
- Supports multi-condition generation, with condition fusion learned during training; no need to tune hyperparameters or design special prompts
- Compatible with other open-source SDXL models, such as BluePencilXL and CounterfeitXL, and compatible with other LoRA models

***We design a new architecture that can support 10+ control types in conditional text-to-image generation and can generate high-resolution images visually comparable with Midjourney***. The network is based on the original ControlNet architecture; we propose two new modules to: 1. extend the original ControlNet to support different image conditions using the same network parameters; 2. support multiple condition inputs without increasing computational overhead, which is especially important for designers who want to edit images in detail. Different conditions use the same condition encoder, without adding extra computation or parameters. We conducted thorough experiments on SDXL and achieved superior performance in both control ability and aesthetic score. We release the method and the model to the open-source community so that everyone can enjoy it.

Inference scripts and more details can be found at: https://github.com/xinsir6/ControlNetPlus/tree/main

**If you find it useful, please give me a star. Thank you very much!**

**The SDXL ProMax version has been released! Enjoy it!!!**

**I am sorry that, because the project's revenue and expenditure are difficult to balance, the GPU resources have been assigned to other projects that are more likely to be profitable; SD3 training is stopped until I find enough GPU support. I will try my best to find GPUs to continue training. If this brings you inconvenience, I sincerely apologize. I want to thank everyone who likes this project; your support is what keeps me going.**

Note: we put the ProMax model, with a `promax` suffix, in the same [huggingface model repo](https://huggingface.co/xinsir/controlnet-union-sdxl-1.0); detailed instructions will be added later.
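For orientation only, here is a minimal single-condition inference sketch. The `ControlNetModel_Union` and `StableDiffusionXLControlNetUnionPipeline` classes, the `image_list` slot layout, and the `union_control_type` argument are assumptions based on the project repository linked above; names and signatures may differ between revisions, so prefer the official inference scripts there.

```python
# Hedged sketch, not an official example: single-condition (openpose) generation
# with the union ControlNet. The custom classes are assumed to come from the
# ControlNetPlus repo (models/controlnet_union.py and
# pipeline/pipeline_controlnet_union_sd_xl.py on the Python path).
import torch
from PIL import Image
from models.controlnet_union import ControlNetModel_Union
from pipeline.pipeline_controlnet_union_sd_xl import StableDiffusionXLControlNetUnionPipeline

controlnet = ControlNetModel_Union.from_pretrained(
    "xinsir/controlnet-union-sdxl-1.0", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetUnionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

pose_image = Image.open("pose.png")  # a pre-extracted openpose map

image = pipe(
    prompt="a dancer on a rooftop at sunset",
    # one slot per supported condition; unused slots stay zeroed (the slot
    # order is defined by the repo, openpose assumed to be slot 0 here)
    image_list=[pose_image, 0, 0, 0, 0, 0],
    union_control=True,
    union_control_type=torch.Tensor([1, 0, 0, 0, 0, 0]),
).images[0]
image.save("output.png")
```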
## Advanced editing features in the ProMax model

### Tile Deblur
![blur0](./images/100000_tile_blur_concat.webp)
![blur1](./images/100001_tile_blur_concat.webp)
![blur2](./images/100002_tile_blur_concat.webp)
![blur3](./images/100003_tile_blur_concat.webp)
![blur4](./images/100004_tile_blur_concat.webp)
![blur5](./images/100005_tile_blur_concat.webp)

### Tile Variation
![var0](./images/100006_tile_var_concat.webp)
![var1](./images/100007_tile_var_concat.webp)
![var2](./images/100008_tile_var_concat.webp)
![var3](./images/100009_tile_var_concat.webp)
![var4](./images/100010_tile_var_concat.webp)
![var5](./images/100011_tile_var_concat.webp)

### Tile Super Resolution
The following examples show upscaling from 1M resolution to 9M resolution.

<div style="display: flex; justify-content: space-between;">
    <img src="./images/tile_super1.webp" alt="Image 1" style="width: 49%; margin: 1%;">
    <img src="./images/tile_super1_9upscale.webp" alt="Image 2" style="width: 49%; margin: 1%;">
</div>

<div style="display: flex; justify-content: space-between;">
    <img src="./images/tile_super2.webp" alt="Image 1" style="width: 49%; margin: 1%;">
    <img src="./images/tile_super2_9upscale.webp" alt="Image 2" style="width: 49%; margin: 1%;">
</div>

### Image Inpainting
![inp0](./images/100018_inpainting_concat.webp)
![inp1](./images/100019_inpainting_concat.webp)
![inp2](./images/100020_inpainting_concat.webp)
![inp3](./images/100021_inpainting_concat.webp)
![inp4](./images/100022_inpainting_concat.webp)
![inp5](./images/100023_inpainting_concat.webp)

### Image Outpainting
![oup0](./images/100012_outpainting_concat.webp)
![oup1](./images/100013_outpainting_concat.webp)
![oup2](./images/100014_outpainting_concat.webp)
![oup3](./images/100015_outpainting_concat.webp)
![oup4](./images/100016_outpainting_concat.webp)
![oup5](./images/100017_outpainting_concat.webp)

## Visual Examples

### Openpose
![pose0](./images/000000_pose_concat.webp)
![pose1](./images/000001_pose_concat.webp)
![pose2](./images/000002_pose_concat.webp)
![pose3](./images/000003_pose_concat.webp)
![pose4](./images/000004_pose_concat.webp)

### Depth
![depth0](./images/000005_depth_concat.webp)
![depth1](./images/000006_depth_concat.webp)
![depth2](./images/000007_depth_concat.webp)
![depth3](./images/000008_depth_concat.webp)
![depth4](./images/000009_depth_concat.webp)

### Canny
![canny0](./images/000010_canny_concat.webp)
![canny1](./images/000011_canny_concat.webp)
![canny2](./images/000012_canny_concat.webp)
![canny3](./images/000013_canny_concat.webp)
![canny4](./images/000014_canny_concat.webp)

### Lineart
![lineart0](./images/000015_lineart_concat.webp)
![lineart1](./images/000016_lineart_concat.webp)
![lineart2](./images/000017_lineart_concat.webp)
![lineart3](./images/000018_lineart_concat.webp)
![lineart4](./images/000019_lineart_concat.webp)

### AnimeLineart
![animelineart0](./images/000020_anime_lineart_concat.webp)
![animelineart1](./images/000021_anime_lineart_concat.webp)
![animelineart2](./images/000022_anime_lineart_concat.webp)
![animelineart3](./images/000023_anime_lineart_concat.webp)
![animelineart4](./images/000024_anime_lineart_concat.webp)

### Mlsd
![mlsd0](./images/000025_mlsd_concat.webp)
![mlsd1](./images/000026_mlsd_concat.webp)
![mlsd2](./images/000027_mlsd_concat.webp)
![mlsd3](./images/000028_mlsd_concat.webp)
![mlsd4](./images/000029_mlsd_concat.webp)

### Scribble
![scribble0](./images/000030_scribble_concat.webp)
![scribble1](./images/000031_scribble_concat.webp)
![scribble2](./images/000032_scribble_concat.webp)
![scribble3](./images/000033_scribble_concat.webp) ![scribble4](./images/000034_scribble_concat.webp) ### Hed ![hed0](./images/000035_hed_concat.webp) ![hed1](./images/000036_hed_concat.webp) ![hed2](./images/000037_hed_concat.webp) ![hed3](./images/000038_hed_concat.webp) ![hed4](./images/000039_hed_concat.webp) ### Pidi(Softedge) ![pidi0](./images/000040_softedge_concat.webp) ![pidi1](./images/000041_softedge_concat.webp) ![pidi2](./images/000042_softedge_concat.webp) ![pidi3](./images/000043_softedge_concat.webp) ![pidi4](./images/000044_softedge_concat.webp) ### Teed ![ted0](./images/000045_ted_concat.webp) ![ted1](./images/000046_ted_concat.webp) ![ted2](./images/000047_ted_concat.webp) ![ted3](./images/000048_ted_concat.webp) ![ted4](./images/000049_ted_concat.webp) ### Segment ![segment0](./images/000050_seg_concat.webp) ![segment1](./images/000051_seg_concat.webp) ![segment2](./images/000052_seg_concat.webp) ![segment3](./images/000053_seg_concat.webp) ![segment4](./images/000054_seg_concat.webp) ### Normal ![normal0](./images/000055_normal_concat.webp) ![normal1](./images/000056_normal_concat.webp) ![normal2](./images/000057_normal_concat.webp) ![normal3](./images/000058_normal_concat.webp) ![normal4](./images/000059_normal_concat.webp) ## Multi Control Visual Examples ### Openpose + Canny ![pose_canny0](./images/000007_openpose_canny_concat.webp) ![pose_canny1](./images/000008_openpose_canny_concat.webp) ![pose_canny2](./images/000009_openpose_canny_concat.webp) ![pose_canny3](./images/000010_openpose_canny_concat.webp) ![pose_canny4](./images/000011_openpose_canny_concat.webp) ![pose_canny5](./images/000012_openpose_canny_concat.webp) ### Openpose + Depth ![pose_depth0](./images/000013_openpose_depth_concat.webp) ![pose_depth1](./images/000014_openpose_depth_concat.webp) ![pose_depth2](./images/000015_openpose_depth_concat.webp) ![pose_depth3](./images/000016_openpose_depth_concat.webp) ![pose_depth4](./images/000017_openpose_depth_concat.webp) ![pose_depth5](./images/000018_openpose_depth_concat.webp) ### Openpose + Scribble ![pose_scribble0](./images/000001_openpose_scribble_concat.webp) ![pose_scribble1](./images/000002_openpose_scribble_concat.webp) ![pose_scribble2](./images/000003_openpose_scribble_concat.webp) ![pose_scribble3](./images/000004_openpose_scribble_concat.webp) ![pose_scribble4](./images/000005_openpose_scribble_concat.webp) ![pose_scribble5](./images/000006_openpose_scribble_concat.webp) ### Openpose + Normal ![pose_normal0](./images/000019_openpose_normal_concat.webp) ![pose_normal1](./images/000020_openpose_normal_concat.webp) ![pose_normal2](./images/000021_openpose_normal_concat.webp) ![pose_normal3](./images/000022_openpose_normal_concat.webp) ![pose_normal4](./images/000023_openpose_normal_concat.webp) ![pose_normal5](./images/000024_openpose_normal_concat.webp) ### Openpose + Segment ![pose_segment0](./images/000025_openpose_sam_concat.webp) ![pose_segment1](./images/000026_openpose_sam_concat.webp) ![pose_segment2](./images/000027_openpose_sam_concat.webp) ![pose_segment3](./images/000028_openpose_sam_concat.webp) ![pose_segment4](./images/000029_openpose_sam_concat.webp) ![pose_segment5](./images/000030_openpose_sam_concat.webp)
optimum/all-MiniLM-L6-v2
optimum
"2022-03-24T16:16:57Z"
60,341
14
sentence-transformers
[ "sentence-transformers", "onnx", "feature-extraction", "sentence-similarity", "en", "arxiv:1904.06472", "arxiv:2102.07033", "arxiv:2104.08727", "arxiv:1704.05179", "arxiv:1810.09305", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
"2022-03-24T16:15:58Z"
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
language: en
license: apache-2.0
---

# ONNX conversion of all-MiniLM-L6-v2

## Conversion of [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2)

This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search.

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')
embeddings = model.encode(sentences)
print(embeddings)
```

## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F

#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0] #First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')
model = AutoModel.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

# Normalize embeddings
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)

print("Sentence embeddings:")
print(sentence_embeddings)
```

## Evaluation Results

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/all-MiniLM-L6-v2)

------

## Background

The project aims to train sentence embedding models on very large sentence-level datasets using a self-supervised contrastive learning objective. We used the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model and fine-tuned it on a 1B sentence pairs dataset. We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences was actually paired with it in our dataset.

We developed this model during the [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by Hugging Face.
We developed this model as part of the project: [Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPU v3-8 instances, as well as help from Google's Flax, JAX, and Cloud team members with efficient deep learning frameworks.

## Intended uses

Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks.

By default, input text longer than 256 word pieces is truncated.

## Training procedure

### Pre-training

We use the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model. Please refer to the model card for more detailed information about the pre-training procedure.

### Fine-tuning

We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity for each possible sentence pair from the batch. We then apply the cross entropy loss by comparing with the true pairs.

#### Hyper parameters

We trained our model on a TPU v3-8 for 100k steps using a batch size of 1024 (128 per TPU core). We used a learning rate warm-up of 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with a 2e-5 learning rate. The full training script is accessible in this current repository: `train_script.py`.

#### Training data

We use a concatenation of multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion sentences. We sampled each dataset given a weighted probability, whose configuration is detailed in the `data_config.json` file.
| Dataset | Paper | Number of training tuples |
|--------------------------------------------------------|:----------------------------------------:|:--------------------------:|
| [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 |
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 |
| [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 |
| [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 |
| [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 |
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 |
| [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 |
| [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395 |
| [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 |
| [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 |
| [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 |
| [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | - | 304,525 |
| AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/)) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | - | 250,519 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | - | 250,460 |
| [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 |
| [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 |
| [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 |
| [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 |
| [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 |
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 |
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 |
| [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 |
| **Total** | | **1,170,060,424** |
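To make the fine-tuning objective described above concrete, here is a minimal PyTorch sketch of the in-batch contrastive loss (an illustrative reconstruction, not the project's actual `train_script.py`; the `scale` factor is an assumed temperature-style hyperparameter):

```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(emb_a: torch.Tensor, emb_b: torch.Tensor,
                              scale: float = 20.0) -> torch.Tensor:
    """Cross-entropy over in-batch cosine similarities.

    emb_a[i] and emb_b[i] form a true pair; every emb_b[j] with j != i
    serves as an in-batch negative for emb_a[i].
    """
    emb_a = F.normalize(emb_a, p=2, dim=1)
    emb_b = F.normalize(emb_b, p=2, dim=1)
    scores = emb_a @ emb_b.T * scale  # (batch, batch) cosine similarities
    labels = torch.arange(scores.size(0), device=scores.device)  # true pairs on the diagonal
    return F.cross_entropy(scores, labels)
```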
textattack/bert-base-uncased-SST-2
textattack
"2021-05-20T07:37:12Z"
60,304
2
transformers
[ "transformers", "pytorch", "jax", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2022-03-02T23:29:05Z"
Entry not found
PulseWave/ACCOUNT-OWNERSHIP
PulseWave
"2024-03-01T19:16:49Z"
60,298
0
setfit
[ "setfit", "safetensors", "mpnet", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "region:us" ]
text-classification
"2024-03-01T19:13:59Z"
---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
metrics:
- accuracy
widget: []
pipeline_tag: text-classification
inference: true
---

# SetFit

This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.

The model has been trained using an efficient few-shot learning technique that involves:

1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.

## Model Details

### Model Description
- **Model Type:** SetFit
<!-- - **Sentence Transformer:** [Unknown](https://huggingface.co/unknown) -->
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->

### Model Sources

- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)

## Uses

### Direct Use for Inference

First install the SetFit library:

```bash
pip install setfit
```

Then you can load this model and run inference.

```python
from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("PulseWave/ACCOUNT-OWNERSHIP")
# Run inference
preds = model("I loved the spiderman movie!")
```

<!-- ### Downstream Use

*List how someone could finetune this model on their own dataset.* -->

<!-- ### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.* -->

<!-- ## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* -->

<!-- ### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* -->

## Training Details

### Framework Versions
- Python: 3.11.7
- SetFit: 1.0.3
- Sentence Transformers: 2.3.1
- Transformers: 4.37.2
- PyTorch: 2.2.0
- Datasets: 2.16.1
- Tokenizers: 0.15.1

## Citation

### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}
```

<!-- ## Glossary

*Clearly define terms in order to be accessible across audiences.* -->

<!-- ## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* -->

<!-- ## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
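The card above describes SetFit's two-phase recipe but does not include a training example. The following is a hypothetical sketch using the `setfit` 1.x `Trainer`; the backbone checkpoint and the toy texts/labels are assumptions, since the card does not disclose its Sentence Transformer or training data:

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Hypothetical few-shot examples; the real training data is not published in the card.
train_ds = Dataset.from_dict({
    "text": ["I am the owner of this account.", "This account belongs to my employer."],
    "label": [1, 0],
})

# Assumed backbone: any Sentence Transformer checkpoint works here.
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

args = TrainingArguments(batch_size=16, num_epochs=1)
trainer = Trainer(model=model, args=args, train_dataset=train_ds)

# Phase 1: contrastive fine-tuning of the embedding model;
# Phase 2: fitting the LogisticRegression head on the tuned embeddings.
trainer.train()

preds = model.predict(["Please confirm who owns this account."])
```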
sentence-transformers/msmarco-MiniLM-L6-cos-v5
sentence-transformers
"2024-03-27T11:20:58Z"
60,140
8
sentence-transformers
[ "sentence-transformers", "pytorch", "tf", "jax", "safetensors", "bert", "feature-extraction", "sentence-similarity", "transformers", "en", "arxiv:1908.10084", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
"2022-03-02T23:29:05Z"
---
language:
- en
library_name: sentence-transformers
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
pipeline_tag: sentence-similarity
---

# msmarco-MiniLM-L6-cos-v5

This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and was designed for **semantic search**. It has been trained on 500k (query, answer) pairs from the [MS MARCO Passages dataset](https://github.com/microsoft/MSMARCO-Passage-Ranking). For an introduction to semantic search, have a look at: [SBERT.net - Semantic Search](https://www.sbert.net/examples/applications/semantic-search/README.html)

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer, util

query = "How many people live in London?"
docs = ["Around 9 Million people live in London", "London is known for its financial district"]

# Load the model
model = SentenceTransformer('sentence-transformers/msmarco-MiniLM-L6-cos-v5')

# Encode query and documents
query_emb = model.encode(query)
doc_emb = model.encode(docs)

# Compute dot score between query and all document embeddings
scores = util.dot_score(query_emb, doc_emb)[0].cpu().tolist()

# Combine docs & scores
doc_score_pairs = list(zip(docs, scores))

# Sort by decreasing score
doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)

# Output passages & scores
for doc, score in doc_score_pairs:
    print(score, doc)
```

## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first pass your input through the transformer model, then apply the correct pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F

# Mean Pooling - take the average of all token embeddings
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output.last_hidden_state  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Encode text
def encode(texts):
    # Tokenize sentences
    encoded_input = tokenizer(texts, padding=True, truncation=True, return_tensors='pt')

    # Compute token embeddings
    with torch.no_grad():
        model_output = model(**encoded_input, return_dict=True)

    # Perform pooling
    embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

    # Normalize embeddings
    embeddings = F.normalize(embeddings, p=2, dim=1)

    return embeddings

# Sentences we want sentence embeddings for
query = "How many people live in London?"
docs = ["Around 9 Million people live in London", "London is known for its financial district"]

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/msmarco-MiniLM-L6-cos-v5")
model = AutoModel.from_pretrained("sentence-transformers/msmarco-MiniLM-L6-cos-v5")

# Encode query and docs
query_emb = encode(query)
doc_emb = encode(docs)

# Compute dot score between query and all document embeddings
scores = torch.mm(query_emb, doc_emb.transpose(0, 1))[0].cpu().tolist()

# Combine docs & scores
doc_score_pairs = list(zip(docs, scores))

# Sort by decreasing score
doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)

# Output passages & scores
for doc, score in doc_score_pairs:
    print(score, doc)
```

## Technical Details

The following table lists some technical details of how this model should be used:

| Setting | Value |
| --- | :---: |
| Dimensions | 384 |
| Produces normalized embeddings | Yes |
| Pooling-Method | Mean pooling |
| Suitable score functions | dot-product (`util.dot_score`), cosine-similarity (`util.cos_sim`), or euclidean distance |

Note: When loaded with `sentence-transformers`, this model produces normalized embeddings with length 1. In that case, dot-product and cosine-similarity are equivalent; dot-product is preferred, as it is faster. Euclidean distance is proportional to dot-product and can also be used.

## Citing & Authors

This model was trained by [sentence-transformers](https://www.sbert.net/).

If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "http://arxiv.org/abs/1908.10084",
}
```
ElnaggarLab/ankh-base
ElnaggarLab
"2023-12-18T12:54:09Z"
60,103
11
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "biology", "protein", "protein language model", "protein embedding", "doi:10.57967/hf/0276", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
"2022-08-16T19:30:03Z"
---
license: cc-by-nc-sa-4.0
tags:
- biology
- protein
- protein language model
- protein embedding
---
TheBloke/Mistral-7B-v0.1-GPTQ
TheBloke
"2023-09-29T20:49:41Z"
59,909
35
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "pretrained", "base_model:mistralai/Mistral-7B-v0.1", "base_model:quantized:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "4-bit", "gptq", "region:us" ]
text-generation
"2023-09-28T22:35:40Z"
---
base_model: mistralai/Mistral-7B-v0.1
inference: false
license: apache-2.0
model_creator: Mistral AI
model_name: Mistral 7B v0.1
model_type: mistral
pipeline_tag: text-generation
prompt_template: '{prompt}'
quantized_by: TheBloke
tags:
- pretrained
---

<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
    <div style="display: flex; flex-direction: column; align-items: flex-start;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
    </div>
    <div style="display: flex; flex-direction: column; align-items: flex-end;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
    </div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# Mistral 7B v0.1 - GPTQ
- Model creator: [Mistral AI](https://huggingface.co/mistralai)
- Original model: [Mistral 7B v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)

<!-- description start -->
## Description

This repo contains GPTQ model files for [Mistral AI's Mistral 7B v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1).

Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.

### GPTQs will work in ExLlama, or via Transformers (requiring Transformers from Github)

These models are confirmed to work with ExLlama v1.

At the time of writing (September 28th), AutoGPTQ has not yet added support for the new Mistral models.

These GPTQs were made directly from Transformers, and so can be loaded via the Transformers interface. They can't be loaded directly from AutoGPTQ.

To load them via Transformers, you will need to install Transformers from Github, with:
```
pip3 install git+https://github.com/huggingface/transformers.git@72958fcd3c98a7afdc61f953aa58c544ebda2f79
```
<!-- description end -->

<!-- repositories-available start -->
## Repositories available

* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Mistral-7B-v0.1-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Mistral-7B-v0.1-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Mistral-7B-v0.1-GGUF)
* [Mistral AI's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/mistralai/Mistral-7B-v0.1)
<!-- repositories-available end -->

<!-- prompt-template start -->
## Prompt template: None

```
{prompt}
```

<!-- prompt-template end -->

<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters

Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.

Each separate quant is in a different branch. See below for instructions on fetching from different branches.
These files were made with Transformers 4.34.0.dev0, from commit 72958fcd3c98a7afdc61f953aa58c544ebda2f79.

<details>
  <summary>Explanation of GPTQ parameters</summary>

- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit.

</details>

| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Mistral-7B-v0.1-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 32768 | 4.16 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Mistral-7B-v0.1-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 32768 | 4.57 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Mistral-7B-v0.1-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 32768 | 4.16 GB | Yes | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
| [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/Mistral-7B-v0.1-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 32768 | 4.57 GB | Yes | 8-bit, with group size 32g and Act Order for maximum inference quality. |

<!-- README_GPTQ.md-provided-files end -->

<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches

### In text-generation-webui

To download from the `main` branch, enter `TheBloke/Mistral-7B-v0.1-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, e.g. `TheBloke/Mistral-7B-v0.1-GPTQ:gptq-4bit-32g-actorder_True`

### From the command line

I recommend using the `huggingface-hub` Python library:

```shell
pip3 install huggingface-hub
```

To download the `main` branch to a folder called `Mistral-7B-v0.1-GPTQ`:

```shell
mkdir Mistral-7B-v0.1-GPTQ
huggingface-cli download TheBloke/Mistral-7B-v0.1-GPTQ --local-dir Mistral-7B-v0.1-GPTQ --local-dir-use-symlinks False
```

To download from a different branch, add the `--revision` parameter:

```shell
mkdir Mistral-7B-v0.1-GPTQ
huggingface-cli download TheBloke/Mistral-7B-v0.1-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir Mistral-7B-v0.1-GPTQ --local-dir-use-symlinks False
```

<details>
  <summary>More advanced huggingface-cli download usage</summary>

If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Huggingface cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a downloaded model.

The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).

To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:

```shell
pip3 install hf_transfer
```

And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
mkdir Mistral-7B-v0.1-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Mistral-7B-v0.1-GPTQ --local-dir Mistral-7B-v0.1-GPTQ --local-dir-use-symlinks False
```

Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.

</details>

### With `git` (**not** recommended)

To clone a specific branch with `git`, use a command like this:

```shell
git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/Mistral-7B-v0.1-GPTQ
```

Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space, as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)

<!-- README_GPTQ.md-download-from-branches end -->

<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)

These models are confirmed to work via the ExLlama Loader in text-generation-webui.

Use **Loader: ExLlama** - or Transformers may work too. AutoGPTQ will not work.

Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.

1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Mistral-7B-v0.1-GPTQ`.
    - To download from a specific branch, enter for example `TheBloke/Mistral-7B-v0.1-GPTQ:gptq-4bit-32g-actorder_True`
    - see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Mistral-7B-v0.1-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
    * Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!

<!-- README_GPTQ.md-text-generation-webui end -->

<!-- README_GPTQ.md-use-from-python start -->
## How to use this GPTQ model from Python code

### Install the necessary packages

Requires: Transformers 4.34.0.dev0 from Github source, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.

```shell
pip3 install optimum
pip3 install git+https://github.com/huggingface/transformers.git@72958fcd3c98a7afdc61f953aa58c544ebda2f79
pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/  # Use cu117 if on CUDA 11.7
```

If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead:

```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.4.2
pip3 install .
```

### You can then use the following code

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_name_or_path = "TheBloke/Mistral-7B-v0.1-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-32g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
                                             device_map="auto",
                                             trust_remote_code=False,
                                             revision="main")

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

prompt = "Tell me about AI"
prompt_template = f'''{prompt}
'''

print("\n\n*** Generate:")

input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))

# Inference can also be done using transformers' pipeline

print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    repetition_penalty=1.1
)

print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->

<!-- README_GPTQ.md-compatibility start -->
## Compatibility

The files provided are only tested to work with Transformers 4.34.0.dev0 as of commit 72958fcd3c98a7afdc61f953aa58c544ebda2f79.
<!-- README_GPTQ.md-compatibility end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.

**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski

Thank you to all my generous patrons and donaters!

And thank you again to a16z for their generous grant.

<!-- footer end -->

# Original model card: Mistral AI's Mistral 7B v0.1

# Model Card for Mistral-7B-v0.1

The Mistral-7B-v0.1 Large Language Model (LLM) is a pretrained generative text model with 7 billion parameters. Mistral-7B-v0.1 outperforms Llama 2 13B on all benchmarks we tested.
For full details of this model please read our [Release blog post](https://mistral.ai/news/announcing-mistral-7b/).

## Model Architecture

Mistral-7B-v0.1 is a transformer model, with the following architecture choices:
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer

## Troubleshooting

- If you see the following error:
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/transformers/models/auto/auto_factory.py", line 482, in from_pretrained
    config, kwargs = AutoConfig.from_pretrained(
  File "/transformers/models/auto/configuration_auto.py", line 1022, in from_pretrained
    config_class = CONFIG_MAPPING[config_dict["model_type"]]
  File "/transformers/models/auto/configuration_auto.py", line 723, in __getitem__
    raise KeyError(key)
KeyError: 'mistral'
```

Installing transformers from source should solve the issue:
```
pip install git+https://github.com/huggingface/transformers
```
This should not be required after transformers-v4.33.4.

## Notice

Mistral 7B is a pretrained base model and therefore does not have any moderation mechanisms.

## The Mistral AI Team

Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
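As a companion to the Provided Files section above, the hedged sketch below shows one way to inspect the GPTQ parameters recorded for a given branch by downloading only its `quantize_config.json` via `huggingface_hub`; the exact keys in that file (bits, group_size, desc_act, damp_percent) are an assumption based on common GPTQ conventions:

```python
import json

from huggingface_hub import hf_hub_download

# Fetch just the quantisation config of one branch (revision), not the model weights.
cfg_path = hf_hub_download(
    repo_id="TheBloke/Mistral-7B-v0.1-GPTQ",
    filename="quantize_config.json",
    revision="gptq-4bit-32g-actorder_True",
)

with open(cfg_path) as f:
    cfg = json.load(f)

# Assumed keys per GPTQ convention: bits, group_size, desc_act, damp_percent, ...
print(json.dumps(cfg, indent=2))
```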
Alibaba-NLP/gte-multilingual-base
Alibaba-NLP
"2024-08-17T06:58:59Z"
59,850
69
sentence-transformers
[ "sentence-transformers", "safetensors", "new", "feature-extraction", "mteb", "transformers", "multilingual", "sentence-similarity", "custom_code", "af", "ar", "az", "be", "bg", "bn", "ca", "ceb", "cs", "cy", "da", "de", "el", "en", "es", "et", "eu", "fa", "fi", "fr", "gl", "gu", "he", "hi", "hr", "ht", "hu", "hy", "id", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ky", "lo", "lt", "lv", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", "no", "pa", "pl", "pt", "qu", "ro", "ru", "si", "sk", "sl", "so", "sq", "sr", "sv", "sw", "ta", "te", "th", "tl", "tr", "uk", "ur", "vi", "yo", "zh", "arxiv:2407.19669", "arxiv:2210.09984", "arxiv:2402.03216", "arxiv:2007.15207", "arxiv:2104.08663", "arxiv:2402.07440", "license:apache-2.0", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
"2024-07-20T08:37:28Z"
--- tags: - mteb - sentence-transformers - transformers - multilingual - sentence-similarity license: apache-2.0 language: - af - ar - az - be - bg - bn - ca - ceb - cs - cy - da - de - el - en - es - et - eu - fa - fi - fr - gl - gu - he - hi - hr - ht - hu - hy - id - is - it - ja - jv - ka - kk - km - kn - ko - ky - lo - lt - lv - mk - ml - mn - mr - ms - my - ne - nl - 'no' - pa - pl - pt - qu - ro - ru - si - sk - sl - so - sq - sr - sv - sw - ta - te - th - tl - tr - uk - ur - vi - yo - zh model-index: - name: gte-multilingual-base (dense) results: - task: type: Clustering dataset: type: PL-MTEB/8tags-clustering name: MTEB 8TagsClustering config: default split: test revision: None metrics: - type: v_measure value: 33.66681726329994 - task: type: STS dataset: type: C-MTEB/AFQMC name: MTEB AFQMC config: default split: validation revision: b44c3b011063adb25877c13823db83bb193913c4 metrics: - type: cos_sim_spearman value: 43.54760696384009 - task: type: STS dataset: type: C-MTEB/ATEC name: MTEB ATEC config: default split: test revision: 0f319b1142f28d00e055a6770f3f726ae9b7d865 metrics: - type: cos_sim_spearman value: 48.91186363417501 - task: type: Classification dataset: type: PL-MTEB/allegro-reviews name: MTEB AllegroReviews config: default split: test revision: None metrics: - type: accuracy value: 41.689860834990064 - task: type: Clustering dataset: type: lyon-nlp/alloprof name: MTEB AlloProfClusteringP2P config: default split: test revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b metrics: - type: v_measure value: 54.20241337977897 - task: type: Clustering dataset: type: lyon-nlp/alloprof name: MTEB AlloProfClusteringS2S config: default split: test revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b metrics: - type: v_measure value: 44.34083695608643 - task: type: Reranking dataset: type: lyon-nlp/mteb-fr-reranking-alloprof-s2p name: MTEB AlloprofReranking config: default split: test revision: 666fdacebe0291776e86f29345663dfaf80a0db9 metrics: - type: map value: 64.91495250072002 - task: type: Retrieval dataset: type: lyon-nlp/alloprof name: MTEB AlloprofRetrieval config: default split: test revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b metrics: - type: ndcg_at_10 value: 53.638 - task: type: Classification dataset: type: mteb/amazon_counterfactual name: MTEB AmazonCounterfactualClassification (en) config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 75.95522388059702 - task: type: Classification dataset: type: mteb/amazon_polarity name: MTEB AmazonPolarityClassification config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 80.717625 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (en) config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 43.64199999999999 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (de) config: de split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 40.108 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (es) config: es split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 40.169999999999995 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (fr) config: fr split: test revision: 
1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 39.56799999999999 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (ja) config: ja split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 35.75000000000001 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (zh) config: zh split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 33.342000000000006 - task: type: Retrieval dataset: type: mteb/arguana name: MTEB ArguAna config: default split: test revision: c22ab2a51041ffd869aaddef7af8d8215647e41a metrics: - type: ndcg_at_10 value: 58.231 - task: type: Retrieval dataset: type: clarin-knext/arguana-pl name: MTEB ArguAna-PL config: default split: test revision: 63fc86750af76253e8c760fc9e534bbf24d260a2 metrics: - type: ndcg_at_10 value: 53.166000000000004 - task: type: Clustering dataset: type: mteb/arxiv-clustering-p2p name: MTEB ArxivClusteringP2P config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 46.01900557959478 - task: type: Clustering dataset: type: mteb/arxiv-clustering-s2s name: MTEB ArxivClusteringS2S config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 41.06626465345723 - task: type: Reranking dataset: type: mteb/askubuntudupquestions-reranking name: MTEB AskUbuntuDupQuestions config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 61.87514497610431 - task: type: STS dataset: type: mteb/biosses-sts name: MTEB BIOSSES config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_spearman value: 81.21450112991194 - task: type: STS dataset: type: C-MTEB/BQ name: MTEB BQ config: default split: test revision: e3dda5e115e487b39ec7e618c0c6a29137052a55 metrics: - type: cos_sim_spearman value: 51.71589543397271 - task: type: Retrieval dataset: type: maastrichtlawtech/bsard name: MTEB BSARDRetrieval config: default split: test revision: 5effa1b9b5fa3b0f9e12523e6e43e5f86a6e6d59 metrics: - type: ndcg_at_10 value: 26.115 - task: type: BitextMining dataset: type: mteb/bucc-bitext-mining name: MTEB BUCC (de-en) config: de-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: f1 value: 98.6169102296451 - task: type: BitextMining dataset: type: mteb/bucc-bitext-mining name: MTEB BUCC (fr-en) config: fr-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: f1 value: 97.89603052314916 - task: type: BitextMining dataset: type: mteb/bucc-bitext-mining name: MTEB BUCC (ru-en) config: ru-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: f1 value: 97.12388869645537 - task: type: BitextMining dataset: type: mteb/bucc-bitext-mining name: MTEB BUCC (zh-en) config: zh-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: f1 value: 98.15692469720906 - task: type: Classification dataset: type: mteb/banking77 name: MTEB Banking77Classification config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 85.36038961038962 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-p2p name: MTEB BiorxivClusteringP2P config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: 
v_measure value: 37.5903826674123 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-s2s name: MTEB BiorxivClusteringS2S config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 34.21474277151329 - task: type: Classification dataset: type: PL-MTEB/cbd name: MTEB CBD config: default split: test revision: None metrics: - type: accuracy value: 62.519999999999996 - task: type: PairClassification dataset: type: PL-MTEB/cdsce-pairclassification name: MTEB CDSC-E config: default split: test revision: None metrics: - type: cos_sim_ap value: 74.90132799162956 - task: type: STS dataset: type: PL-MTEB/cdscr-sts name: MTEB CDSC-R config: default split: test revision: None metrics: - type: cos_sim_spearman value: 90.30727955142524 - task: type: Clustering dataset: type: C-MTEB/CLSClusteringP2P name: MTEB CLSClusteringP2P config: default split: test revision: 4b6227591c6c1a73bc76b1055f3b7f3588e72476 metrics: - type: v_measure value: 37.94850105022274 - task: type: Clustering dataset: type: C-MTEB/CLSClusteringS2S name: MTEB CLSClusteringS2S config: default split: test revision: e458b3f5414b62b7f9f83499ac1f5497ae2e869f metrics: - type: v_measure value: 38.11958675421534 - task: type: Reranking dataset: type: C-MTEB/CMedQAv1-reranking name: MTEB CMedQAv1 config: default split: test revision: 8d7f1e942507dac42dc58017c1a001c3717da7df metrics: - type: map value: 86.10950950485399 - task: type: Reranking dataset: type: C-MTEB/CMedQAv2-reranking name: MTEB CMedQAv2 config: default split: test revision: 23d186750531a14a0357ca22cd92d712fd512ea0 metrics: - type: map value: 87.28038294231966 - task: type: Retrieval dataset: type: mteb/cqadupstack-android name: MTEB CQADupstackAndroidRetrieval config: default split: test revision: f46a197baaae43b4f621051089b82a364682dfeb metrics: - type: ndcg_at_10 value: 47.099000000000004 - task: type: Retrieval dataset: type: mteb/cqadupstack-english name: MTEB CQADupstackEnglishRetrieval config: default split: test revision: ad9991cb51e31e31e430383c75ffb2885547b5f0 metrics: - type: ndcg_at_10 value: 45.973000000000006 - task: type: Retrieval dataset: type: mteb/cqadupstack-gaming name: MTEB CQADupstackGamingRetrieval config: default split: test revision: 4885aa143210c98657558c04aaf3dc47cfb54340 metrics: - type: ndcg_at_10 value: 55.606 - task: type: Retrieval dataset: type: mteb/cqadupstack-gis name: MTEB CQADupstackGisRetrieval config: default split: test revision: 5003b3064772da1887988e05400cf3806fe491f2 metrics: - type: ndcg_at_10 value: 36.638 - task: type: Retrieval dataset: type: mteb/cqadupstack-mathematica name: MTEB CQADupstackMathematicaRetrieval config: default split: test revision: 90fceea13679c63fe563ded68f3b6f06e50061de metrics: - type: ndcg_at_10 value: 30.711 - task: type: Retrieval dataset: type: mteb/cqadupstack-physics name: MTEB CQADupstackPhysicsRetrieval config: default split: test revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4 metrics: - type: ndcg_at_10 value: 44.523 - task: type: Retrieval dataset: type: mteb/cqadupstack-programmers name: MTEB CQADupstackProgrammersRetrieval config: default split: test revision: 6184bc1440d2dbc7612be22b50686b8826d22b32 metrics: - type: ndcg_at_10 value: 37.940000000000005 - task: type: Retrieval dataset: type: mteb/cqadupstack name: MTEB CQADupstackRetrieval config: default split: test revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4 metrics: - type: ndcg_at_10 value: 38.12183333333333 - task: type: Retrieval dataset: type: mteb/cqadupstack-stats 
name: MTEB CQADupstackStatsRetrieval config: default split: test revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a metrics: - type: ndcg_at_10 value: 32.684000000000005 - task: type: Retrieval dataset: type: mteb/cqadupstack-tex name: MTEB CQADupstackTexRetrieval config: default split: test revision: 46989137a86843e03a6195de44b09deda022eec7 metrics: - type: ndcg_at_10 value: 26.735 - task: type: Retrieval dataset: type: mteb/cqadupstack-unix name: MTEB CQADupstackUnixRetrieval config: default split: test revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53 metrics: - type: ndcg_at_10 value: 36.933 - task: type: Retrieval dataset: type: mteb/cqadupstack-webmasters name: MTEB CQADupstackWebmastersRetrieval config: default split: test revision: 160c094312a0e1facb97e55eeddb698c0abe3571 metrics: - type: ndcg_at_10 value: 33.747 - task: type: Retrieval dataset: type: mteb/cqadupstack-wordpress name: MTEB CQADupstackWordpressRetrieval config: default split: test revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4 metrics: - type: ndcg_at_10 value: 28.872999999999998 - task: type: Retrieval dataset: type: mteb/climate-fever name: MTEB ClimateFEVER config: default split: test revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380 metrics: - type: ndcg_at_10 value: 34.833 - task: type: Retrieval dataset: type: C-MTEB/CmedqaRetrieval name: MTEB CmedqaRetrieval config: default split: dev revision: cd540c506dae1cf9e9a59c3e06f42030d54e7301 metrics: - type: ndcg_at_10 value: 43.78 - task: type: PairClassification dataset: type: C-MTEB/CMNLI name: MTEB Cmnli config: default split: validation revision: 41bc36f332156f7adc9e38f53777c959b2ae9766 metrics: - type: cos_sim_ap value: 84.00640599186677 - task: type: Retrieval dataset: type: C-MTEB/CovidRetrieval name: MTEB CovidRetrieval config: default split: dev revision: 1271c7809071a13532e05f25fb53511ffce77117 metrics: - type: ndcg_at_10 value: 80.60000000000001 - task: type: Retrieval dataset: type: mteb/dbpedia name: MTEB DBPedia config: default split: test revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659 metrics: - type: ndcg_at_10 value: 40.116 - task: type: Retrieval dataset: type: clarin-knext/dbpedia-pl name: MTEB DBPedia-PL config: default split: test revision: 76afe41d9af165cc40999fcaa92312b8b012064a metrics: - type: ndcg_at_10 value: 32.498 - task: type: Retrieval dataset: type: C-MTEB/DuRetrieval name: MTEB DuRetrieval config: default split: dev revision: a1a333e290fe30b10f3f56498e3a0d911a693ced metrics: - type: ndcg_at_10 value: 87.547 - task: type: Retrieval dataset: type: C-MTEB/EcomRetrieval name: MTEB EcomRetrieval config: default split: dev revision: 687de13dc7294d6fd9be10c6945f9e8fec8166b9 metrics: - type: ndcg_at_10 value: 64.85 - task: type: Classification dataset: type: mteb/emotion name: MTEB EmotionClassification config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 47.949999999999996 - task: type: Retrieval dataset: type: mteb/fever name: MTEB FEVER config: default split: test revision: bea83ef9e8fb933d90a2f1d5515737465d613e12 metrics: - type: ndcg_at_10 value: 92.111 - task: type: Retrieval dataset: type: clarin-knext/fiqa-pl name: MTEB FiQA-PL config: default split: test revision: 2e535829717f8bf9dc829b7f911cc5bbd4e6608e metrics: - type: ndcg_at_10 value: 28.962 - task: type: Retrieval dataset: type: mteb/fiqa name: MTEB FiQA2018 config: default split: test revision: 27a168819829fe9bcd655c2df245fb19452e8e06 metrics: - type: ndcg_at_10 value: 45.005 - task: type: Clustering dataset: 
type: lyon-nlp/clustering-hal-s2s name: MTEB HALClusteringS2S config: default split: test revision: e06ebbbb123f8144bef1a5d18796f3dec9ae2915 metrics: - type: v_measure value: 25.133776435657595 - task: type: Retrieval dataset: type: mteb/hotpotqa name: MTEB HotpotQA config: default split: test revision: ab518f4d6fcca38d87c25209f94beba119d02014 metrics: - type: ndcg_at_10 value: 63.036 - task: type: Retrieval dataset: type: clarin-knext/hotpotqa-pl name: MTEB HotpotQA-PL config: default split: test revision: a0bd479ac97b4ccb5bd6ce320c415d0bb4beb907 metrics: - type: ndcg_at_10 value: 56.904999999999994 - task: type: Classification dataset: type: C-MTEB/IFlyTek-classification name: MTEB IFlyTek config: default split: validation revision: 421605374b29664c5fc098418fe20ada9bd55f8a metrics: - type: accuracy value: 44.59407464409388 - task: type: Classification dataset: type: mteb/imdb name: MTEB ImdbClassification config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 74.912 - task: type: Classification dataset: type: C-MTEB/JDReview-classification name: MTEB JDReview config: default split: test revision: b7c64bd89eb87f8ded463478346f76731f07bf8b metrics: - type: accuracy value: 79.26829268292683 - task: type: STS dataset: type: C-MTEB/LCQMC name: MTEB LCQMC config: default split: test revision: 17f9b096f80380fce5ed12a9be8be7784b337daf metrics: - type: cos_sim_spearman value: 74.8601229809791 - task: type: Clustering dataset: type: mlsum name: MTEB MLSUMClusteringP2P config: default split: test revision: b5d54f8f3b61ae17845046286940f03c6bc79bc7 metrics: - type: v_measure value: 42.331902754246556 - task: type: Clustering dataset: type: mlsum name: MTEB MLSUMClusteringS2S config: default split: test revision: b5d54f8f3b61ae17845046286940f03c6bc79bc7 metrics: - type: v_measure value: 40.92029335502153 - task: type: Reranking dataset: type: C-MTEB/Mmarco-reranking name: MTEB MMarcoReranking config: default split: dev revision: 8e0c766dbe9e16e1d221116a3f36795fbade07f6 metrics: - type: map value: 32.19266316591337 - task: type: Retrieval dataset: type: C-MTEB/MMarcoRetrieval name: MTEB MMarcoRetrieval config: default split: dev revision: 539bbde593d947e2a124ba72651aafc09eb33fc2 metrics: - type: ndcg_at_10 value: 79.346 - task: type: Retrieval dataset: type: mteb/msmarco name: MTEB MSMARCO config: default split: dev revision: c5a29a104738b98a9e76336939199e264163d4a0 metrics: - type: ndcg_at_10 value: 39.922999999999995 - task: type: Retrieval dataset: type: clarin-knext/msmarco-pl name: MTEB MSMARCO-PL config: default split: test revision: 8634c07806d5cce3a6138e260e59b81760a0a640 metrics: - type: ndcg_at_10 value: 55.620999999999995 - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (en) config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 92.53989968080255 - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (de) config: de split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 88.26993519301212 - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (es) config: es split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 90.87725150100067 - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (fr) config: fr split: test revision: 
d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 87.48512370811149 - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (hi) config: hi split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 89.45141627823591 - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (th) config: th split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 83.45750452079565 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (en) config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 72.57637938896488 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (de) config: de split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 63.50803043110736 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (es) config: es split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 71.6577718478986 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (fr) config: fr split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 64.05887879736925 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (hi) config: hi split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 65.27070634636071 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (th) config: th split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 63.04520795660037 - task: type: Classification dataset: type: masakhane/masakhanews name: MTEB MasakhaNEWSClassification (fra) config: fra split: test revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60 metrics: - type: accuracy value: 80.66350710900474 - task: type: Clustering dataset: type: masakhane/masakhanews name: MTEB MasakhaNEWSClusteringP2P (fra) config: fra split: test revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60 metrics: - type: v_measure value: 44.016506455899425 - task: type: Clustering dataset: type: masakhane/masakhanews name: MTEB MasakhaNEWSClusteringS2S (fra) config: fra split: test revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60 metrics: - type: v_measure value: 40.67730129573544 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (af) config: af split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 57.94552790854068 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (am) config: am split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 49.273705447209146 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (ar) config: ar split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 55.490921318090116 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (az) config: az split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 
60.97511768661733 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (bn) config: bn split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 57.5689307330195 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (cy) config: cy split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 48.34902488231337 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (da) config: da split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 63.6684599865501 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (de) config: de split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 62.54539340954942 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (el) config: el split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 63.08675184936112 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (en) config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 72.12508406186953 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (es) config: es split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 67.41425689307331 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (fa) config: fa split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 65.59515803631474 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (fi) config: fi split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 62.90517821116342 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (fr) config: fr split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 67.91526563550774 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (he) config: he split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 55.198386012104905 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (hi) config: hi split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 65.04371217215869 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (hu) config: hu split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 63.31203765971756 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (hy) config: hy split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 55.521183591123055 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (id) config: id split: test revision: 
31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 66.06254203093476 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (is) config: is split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 56.01546738399461 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (it) config: it split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 67.27975790181574 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (ja) config: ja split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 66.79556153328849 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (jv) config: jv split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 50.18493611297915 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (ka) config: ka split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 47.888365837256224 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (km) config: km split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 50.79690652320108 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (kn) config: kn split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 57.225958305312716 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (ko) config: ko split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 64.58641560188299 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (lv) config: lv split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 59.08204438466711 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (ml) config: ml split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 59.54606590450572 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (mn) config: mn split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 53.443174176193665 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (ms) config: ms split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 61.65097511768661 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (my) config: my split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 53.45662407531944 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (nb) config: nb split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 63.739071956960316 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB 
MassiveIntentClassification (nl) config: nl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 66.36180228648286 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (pl) config: pl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 66.3920645595158 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (pt) config: pt split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 68.06993947545395 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (ro) config: ro split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 63.123739071956955 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (ru) config: ru split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 67.46133154001346 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (sl) config: sl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 60.54472091459314 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (sq) config: sq split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 58.204438466711494 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (sv) config: sv split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 65.69603227975792 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (sw) config: sw split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 51.684599865501 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (ta) config: ta split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 58.523873570948226 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (te) config: te split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 58.53396099529253 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (th) config: th split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 61.88298587760591 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (tl) config: tl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 56.65097511768662 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (tr) config: tr split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 64.8453261600538 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (ur) config: ur split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 58.6247478143914 - task: type: Classification dataset: 
type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (vi) config: vi split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 64.16274377942166 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (zh-CN) config: zh-CN split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.61667787491594 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (zh-TW) config: zh-TW split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 64.17283120376598 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (af) config: af split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 64.89912575655683 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (am) config: am split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 57.27975790181573 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (ar) config: ar split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 62.269670477471415 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (az) config: az split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 65.10423671822461 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (bn) config: bn split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 62.40753194351043 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (cy) config: cy split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 55.369872225958304 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (da) config: da split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 71.60726294552792 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (de) config: de split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 70.30262273032952 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (el) config: el split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 69.52925353059851 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (en) config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 76.28446536650976 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (es) config: es split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 72.45460659045058 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (fa) config: fa split: test revision: 
7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 70.26563550773368 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (fi) config: fi split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 67.20578345662408 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (fr) config: fr split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 72.64963012777405 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (he) config: he split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 61.698049764626774 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (hi) config: hi split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 70.14458641560188 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (hu) config: hu split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 70.51445864156018 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (hy) config: hy split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 60.13786146603901 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (id) config: id split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 70.61533288500337 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (is) config: is split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 61.526563550773375 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (it) config: it split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 71.99731002017484 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (ja) config: ja split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 71.59381304640216 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (jv) config: jv split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 57.010759919300604 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (ka) config: ka split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 53.26160053799597 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (km) config: km split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 57.800941492938804 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (kn) config: kn split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 62.387357094821795 - task: type: Classification dataset: 
type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (ko) config: ko split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 69.5359784801614 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (lv) config: lv split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 63.36919973100203 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (ml) config: ml split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 64.81506388702084 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (mn) config: mn split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 59.35104236718225 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (ms) config: ms split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 66.67787491593813 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (my) config: my split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 59.4250168123739 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (nb) config: nb split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 71.49630127774043 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (nl) config: nl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 71.95696032279758 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (pl) config: pl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 70.11768661735036 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (pt) config: pt split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 71.86953597848016 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (ro) config: ro split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 68.51042367182247 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (ru) config: ru split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 71.65097511768661 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (sl) config: sl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 66.81573638197713 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (sq) config: sq split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 65.26227303295225 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (sv) config: sv split: test revision: 
7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 72.51513113651646 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (sw) config: sw split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 58.29858776059179 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (ta) config: ta split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 62.72696704774714 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (te) config: te split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 66.57700067249496 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (th) config: th split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 68.22797579018157 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (tl) config: tl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 61.97041022192333 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (tr) config: tr split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 70.72629455279085 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (ur) config: ur split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 63.16072629455278 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (vi) config: vi split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 67.92199058507062 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (zh-CN) config: zh-CN split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.40484196368527 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (zh-TW) config: zh-TW split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 71.61398789509079 - task: type: Retrieval dataset: type: C-MTEB/MedicalRetrieval name: MTEB MedicalRetrieval config: default split: dev revision: 2039188fb5800a9803ba5048df7b76e6fb151fc6 metrics: - type: ndcg_at_10 value: 61.934999999999995 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-p2p name: MTEB MedrxivClusteringP2P config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 33.052031054565205 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-s2s name: MTEB MedrxivClusteringS2S config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 31.969909524076794 - task: type: Reranking dataset: type: mteb/mind_small name: MTEB MindSmallReranking config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 31.7530992892652 - task: type: Retrieval dataset: type: jinaai/mintakaqa name: MTEB MintakaRetrieval (fr) config: fr split: test 
revision: efa78cc2f74bbcd21eff2261f9e13aebe40b814e metrics: - type: ndcg_at_10 value: 34.705999999999996 - task: type: Retrieval dataset: type: Shitao/MLDR name: MTEB MultiLongDocRetrieval (ar) config: ar split: test revision: None metrics: - type: ndcg_at_10 value: 55.166000000000004 - task: type: Retrieval dataset: type: Shitao/MLDR name: MTEB MultiLongDocRetrieval (de) config: de split: test revision: None metrics: - type: ndcg_at_10 value: 55.155 - task: type: Retrieval dataset: type: Shitao/MLDR name: MTEB MultiLongDocRetrieval (en) config: en split: test revision: None metrics: - type: ndcg_at_10 value: 50.993 - task: type: Retrieval dataset: type: Shitao/MLDR name: MTEB MultiLongDocRetrieval (es) config: es split: test revision: None metrics: - type: ndcg_at_10 value: 81.228 - task: type: Retrieval dataset: type: Shitao/MLDR name: MTEB MultiLongDocRetrieval (fr) config: fr split: test revision: None metrics: - type: ndcg_at_10 value: 76.19 - task: type: Retrieval dataset: type: Shitao/MLDR name: MTEB MultiLongDocRetrieval (hi) config: hi split: test revision: None metrics: - type: ndcg_at_10 value: 45.206 - task: type: Retrieval dataset: type: Shitao/MLDR name: MTEB MultiLongDocRetrieval (it) config: it split: test revision: None metrics: - type: ndcg_at_10 value: 66.741 - task: type: Retrieval dataset: type: Shitao/MLDR name: MTEB MultiLongDocRetrieval (ja) config: ja split: test revision: None metrics: - type: ndcg_at_10 value: 52.111 - task: type: Retrieval dataset: type: Shitao/MLDR name: MTEB MultiLongDocRetrieval (ko) config: ko split: test revision: None metrics: - type: ndcg_at_10 value: 46.733000000000004 - task: type: Retrieval dataset: type: Shitao/MLDR name: MTEB MultiLongDocRetrieval (pt) config: pt split: test revision: None metrics: - type: ndcg_at_10 value: 79.105 - task: type: Retrieval dataset: type: Shitao/MLDR name: MTEB MultiLongDocRetrieval (ru) config: ru split: test revision: None metrics: - type: ndcg_at_10 value: 64.21 - task: type: Retrieval dataset: type: Shitao/MLDR name: MTEB MultiLongDocRetrieval (th) config: th split: test revision: None metrics: - type: ndcg_at_10 value: 35.467 - task: type: Retrieval dataset: type: Shitao/MLDR name: MTEB MultiLongDocRetrieval (zh) config: zh split: test revision: None metrics: - type: ndcg_at_10 value: 27.419 - task: type: Classification dataset: type: C-MTEB/MultilingualSentiment-classification name: MTEB MultilingualSentiment config: default split: validation revision: 46958b007a63fdbf239b7672c25d0bea67b5ea1a metrics: - type: accuracy value: 61.02000000000001 - task: type: Retrieval dataset: type: mteb/nfcorpus name: MTEB NFCorpus config: default split: test revision: ec0fa4fe99da2ff19ca1214b7966684033a58814 metrics: - type: ndcg_at_10 value: 36.65 - task: type: Retrieval dataset: type: clarin-knext/nfcorpus-pl name: MTEB NFCorpus-PL config: default split: test revision: 9a6f9567fda928260afed2de480d79c98bf0bec0 metrics: - type: ndcg_at_10 value: 26.831 - task: type: Retrieval dataset: type: mteb/nq name: MTEB NQ config: default split: test revision: b774495ed302d8c44a3a7ea25c90dbce03968f31 metrics: - type: ndcg_at_10 value: 58.111000000000004 - task: type: Retrieval dataset: type: clarin-knext/nq-pl name: MTEB NQ-PL config: default split: test revision: f171245712cf85dd4700b06bef18001578d0ca8d metrics: - type: ndcg_at_10 value: 43.126999999999995 - task: type: PairClassification dataset: type: C-MTEB/OCNLI name: MTEB Ocnli config: default split: validation revision: 66e76a618a34d6d565d5538088562851e6daa7ec 
metrics: - type: cos_sim_ap value: 72.67630697316041 - task: type: Classification dataset: type: C-MTEB/OnlineShopping-classification name: MTEB OnlineShopping config: default split: test revision: e610f2ebd179a8fda30ae534c3878750a96db120 metrics: - type: accuracy value: 84.85000000000001 - task: type: PairClassification dataset: type: GEM/opusparcus name: MTEB OpusparcusPC (fr) config: fr split: test revision: 9e9b1f8ef51616073f47f306f7f47dd91663f86a metrics: - type: cos_sim_ap value: 100 - task: type: Classification dataset: type: laugustyniak/abusive-clauses-pl name: MTEB PAC config: default split: test revision: None metrics: - type: accuracy value: 65.99189110918043 - task: type: STS dataset: type: C-MTEB/PAWSX name: MTEB PAWSX config: default split: test revision: 9c6a90e430ac22b5779fb019a23e820b11a8b5e1 metrics: - type: cos_sim_spearman value: 16.124364530596228 - task: type: PairClassification dataset: type: PL-MTEB/ppc-pairclassification name: MTEB PPC config: default split: test revision: None metrics: - type: cos_sim_ap value: 92.43431057460192 - task: type: PairClassification dataset: type: PL-MTEB/psc-pairclassification name: MTEB PSC config: default split: test revision: None metrics: - type: cos_sim_ap value: 99.06090138049724 - task: type: PairClassification dataset: type: paws-x name: MTEB PawsX (fr) config: fr split: test revision: 8a04d940a42cd40658986fdd8e3da561533a3646 metrics: - type: cos_sim_ap value: 58.9314954874314 - task: type: Classification dataset: type: PL-MTEB/polemo2_in name: MTEB PolEmo2.0-IN config: default split: test revision: None metrics: - type: accuracy value: 69.59833795013851 - task: type: Classification dataset: type: PL-MTEB/polemo2_out name: MTEB PolEmo2.0-OUT config: default split: test revision: None metrics: - type: accuracy value: 44.73684210526315 - task: type: STS dataset: type: C-MTEB/QBQTC name: MTEB QBQTC config: default split: test revision: 790b0510dc52b1553e8c49f3d2afb48c0e5c48b7 metrics: - type: cos_sim_spearman value: 39.36450754137984 - task: type: Retrieval dataset: type: clarin-knext/quora-pl name: MTEB Quora-PL config: default split: test revision: 0be27e93455051e531182b85e85e425aba12e9d4 metrics: - type: ndcg_at_10 value: 80.76299999999999 - task: type: Retrieval dataset: type: mteb/quora name: MTEB QuoraRetrieval config: default split: test revision: None metrics: - type: ndcg_at_10 value: 88.022 - task: type: Clustering dataset: type: mteb/reddit-clustering name: MTEB RedditClustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 55.719165988934385 - task: type: Clustering dataset: type: mteb/reddit-clustering-p2p name: MTEB RedditClusteringP2P config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 62.25390069273025 - task: type: Retrieval dataset: type: mteb/scidocs name: MTEB SCIDOCS config: default split: test revision: None metrics: - type: ndcg_at_10 value: 18.243000000000002 - task: type: Retrieval dataset: type: clarin-knext/scidocs-pl name: MTEB SCIDOCS-PL config: default split: test revision: 45452b03f05560207ef19149545f168e596c9337 metrics: - type: ndcg_at_10 value: 14.219000000000001 - task: type: PairClassification dataset: type: PL-MTEB/sicke-pl-pairclassification name: MTEB SICK-E-PL config: default split: test revision: None metrics: - type: cos_sim_ap value: 75.4022630307816 - task: type: STS dataset: type: mteb/sickr-sts name: MTEB SICK-R config: default split: test revision: 
a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_spearman value: 79.34269390198548 - task: type: STS dataset: type: PL-MTEB/sickr-pl-sts name: MTEB SICK-R-PL config: default split: test revision: None metrics: - type: cos_sim_spearman value: 74.0651660446132 - task: type: STS dataset: type: Lajavaness/SICK-fr name: MTEB SICKFr config: default split: test revision: e077ab4cf4774a1e36d86d593b150422fafd8e8a metrics: - type: cos_sim_spearman value: 78.62693119733123 - task: type: STS dataset: type: mteb/sts12-sts name: MTEB STS12 config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_spearman value: 77.50660544631359 - task: type: STS dataset: type: mteb/sts13-sts name: MTEB STS13 config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_spearman value: 85.55415077723738 - task: type: STS dataset: type: mteb/sts14-sts name: MTEB STS14 config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_spearman value: 81.67550814479077 - task: type: STS dataset: type: mteb/sts15-sts name: MTEB STS15 config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_spearman value: 88.94601412322764 - task: type: STS dataset: type: mteb/sts16-sts name: MTEB STS16 config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_spearman value: 84.33844259337481 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (ko-ko) config: ko-ko split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_spearman value: 81.58650681159105 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (ar-ar) config: ar-ar split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_spearman value: 78.82472265884256 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (en-ar) config: en-ar split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_spearman value: 76.43637938260397 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (en-de) config: en-de split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_spearman value: 84.71008299464059 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (en-en) config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_spearman value: 88.88074713413747 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (en-tr) config: en-tr split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_spearman value: 76.36405640457285 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (es-en) config: es-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_spearman value: 83.84737910084762 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (es-es) config: es-es split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_spearman value: 87.03931621433031 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (fr-en) config: fr-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_spearman value: 84.43335591752246 - task: type: STS dataset: type: 
mteb/sts17-crosslingual-sts name: MTEB STS17 (it-en) config: it-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_spearman value: 83.85268648747021 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (nl-en) config: nl-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_spearman value: 82.45786516224341 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (en) config: en split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_spearman value: 67.20227303970304 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (de) config: de split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_spearman value: 60.892838305537126 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (es) config: es split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_spearman value: 72.01876318464508 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (pl) config: pl split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_spearman value: 42.3879320510127 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (tr) config: tr split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_spearman value: 65.54048784845729 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (ar) config: ar split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_spearman value: 58.55244068334867 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (ru) config: ru split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_spearman value: 66.48710288440624 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (zh) config: zh split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_spearman value: 66.585754901838 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (fr) config: fr split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_spearman value: 81.03001290557805 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (de-en) config: de-en split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_spearman value: 62.28001859884359 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (es-en) config: es-en split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_spearman value: 79.64106342105019 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (it) config: it split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_spearman value: 78.27915339361124 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (pl-en) config: pl-en split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_spearman value: 78.28574268257462 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (zh-en) config: zh-en split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_spearman value: 72.92658860751482 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (es-it) config: 
es-it split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_spearman value: 74.83418886368217 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (de-fr) config: de-fr split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_spearman value: 56.01064022625769 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (de-pl) config: de-pl split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_spearman value: 53.64332829635126 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (fr-pl) config: fr-pl split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_spearman value: 73.24670207647144 - task: type: STS dataset: type: C-MTEB/STSB name: MTEB STSB config: default split: test revision: 0cde68302b3541bb8b3c340dc0644b0b745b3dc0 metrics: - type: cos_sim_spearman value: 80.7157790971544 - task: type: STS dataset: type: mteb/stsbenchmark-sts name: MTEB STSBenchmark config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_spearman value: 86.45763616928973 - task: type: STS dataset: type: stsb_multi_mt name: MTEB STSBenchmarkMultilingualSTS (fr) config: fr split: test revision: 93d57ef91790589e3ce9c365164337a8a78b7632 metrics: - type: cos_sim_spearman value: 84.4335500335282 - task: type: Reranking dataset: type: mteb/scidocs-reranking name: MTEB SciDocsRR config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 84.15276484499303 - task: type: Retrieval dataset: type: mteb/scifact name: MTEB SciFact config: default split: test revision: 0228b52cf27578f30900b9e5271d331663a030d7 metrics: - type: ndcg_at_10 value: 73.433 - task: type: Retrieval dataset: type: clarin-knext/scifact-pl name: MTEB SciFact-PL config: default split: test revision: 47932a35f045ef8ed01ba82bf9ff67f6e109207e metrics: - type: ndcg_at_10 value: 58.919999999999995 - task: type: PairClassification dataset: type: mteb/sprintduplicatequestions-pairclassification name: MTEB SprintDuplicateQuestions config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_ap value: 95.40564890916419 - task: type: Clustering dataset: type: mteb/stackexchange-clustering name: MTEB StackExchangeClustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 63.41856697730145 - task: type: Clustering dataset: type: mteb/stackexchange-clustering-p2p name: MTEB StackExchangeClusteringP2P config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 31.709285904909112 - task: type: Reranking dataset: type: mteb/stackoverflowdupquestions-reranking name: MTEB StackOverflowDupQuestions config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 52.09341030060322 - task: type: Summarization dataset: type: mteb/summeval name: MTEB SummEval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_spearman value: 30.58262517835034 - task: type: Summarization dataset: type: lyon-nlp/summarization-summeval-fr-p2p name: MTEB SummEvalFr config: default split: test revision: b385812de6a9577b6f4d0f88c6a6e35395a94054 metrics: - type: cos_sim_spearman value: 29.744542072951358 - task: type: Reranking dataset: type: 
lyon-nlp/mteb-fr-reranking-syntec-s2p name: MTEB SyntecReranking config: default split: test revision: b205c5084a0934ce8af14338bf03feb19499c84d metrics: - type: map value: 88.03333333333333 - task: type: Retrieval dataset: type: lyon-nlp/mteb-fr-retrieval-syntec-s2p name: MTEB SyntecRetrieval config: default split: test revision: 77f7e271bf4a92b24fce5119f3486b583ca016ff metrics: - type: ndcg_at_10 value: 83.043 - task: type: Reranking dataset: type: C-MTEB/T2Reranking name: MTEB T2Reranking config: default split: dev revision: 76631901a18387f85eaa53e5450019b87ad58ef9 metrics: - type: map value: 67.08577894804324 - task: type: Retrieval dataset: type: C-MTEB/T2Retrieval name: MTEB T2Retrieval config: default split: dev revision: 8731a845f1bf500a4f111cf1070785c793d10e64 metrics: - type: ndcg_at_10 value: 84.718 - task: type: Classification dataset: type: C-MTEB/TNews-classification name: MTEB TNews config: default split: validation revision: 317f262bf1e6126357bbe89e875451e4b0938fe4 metrics: - type: accuracy value: 48.726 - task: type: Retrieval dataset: type: mteb/trec-covid name: MTEB TRECCOVID config: default split: test revision: None metrics: - type: ndcg_at_10 value: 57.56 - task: type: Retrieval dataset: type: clarin-knext/trec-covid-pl name: MTEB TRECCOVID-PL config: default split: test revision: 81bcb408f33366c2a20ac54adafad1ae7e877fdd metrics: - type: ndcg_at_10 value: 59.355999999999995 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (sqi-eng) config: sqi-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 82.765 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (fry-eng) config: fry-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 73.69942196531792 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (kur-eng) config: kur-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 32.86585365853657 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (tur-eng) config: tur-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 95.81666666666666 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (deu-eng) config: deu-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 97.75 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (nld-eng) config: nld-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 93.78333333333335 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (ron-eng) config: ron-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 90.72333333333333 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (ang-eng) config: ang-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 42.45202558635395 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (ido-eng) config: ido-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 77.59238095238095 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (jav-eng) config: jav-eng split: test revision: 
9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 35.69686411149825 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (isl-eng) config: isl-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 82.59333333333333 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (slv-eng) config: slv-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 84.1456922987907 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (cym-eng) config: cym-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 52.47462133594857 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (kaz-eng) config: kaz-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 67.62965440356746 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (est-eng) config: est-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 79.48412698412699 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (heb-eng) config: heb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 75.85 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (gla-eng) config: gla-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 27.32600866497127 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (mar-eng) config: mar-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 84.38 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (lat-eng) config: lat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 42.98888712165028 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (bel-eng) config: bel-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 85.55690476190476 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (pms-eng) config: pms-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 46.68466031323174 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (gle-eng) config: gle-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 32.73071428571428 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (pes-eng) config: pes-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 88.26333333333334 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (nob-eng) config: nob-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 96.61666666666666 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (bul-eng) config: bul-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 91.30666666666666 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (cbk-eng) config: cbk-eng split: test revision: 
9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 70.03714285714285 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (hun-eng) config: hun-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 89.09 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (uig-eng) config: uig-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 59.570476190476185 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (rus-eng) config: rus-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 92.9 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (spa-eng) config: spa-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 97.68333333333334 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (hye-eng) config: hye-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 80.40880503144653 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (tel-eng) config: tel-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 89.7008547008547 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (afr-eng) config: afr-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 81.84833333333333 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (mon-eng) config: mon-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 71.69696969696969 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (arz-eng) config: arz-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 55.76985790822269 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (hrv-eng) config: hrv-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 91.66666666666666 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (nov-eng) config: nov-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 68.36668519547896 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (gsw-eng) config: gsw-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 36.73992673992674 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (nds-eng) config: nds-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 63.420952380952365 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (ukr-eng) config: ukr-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 91.28999999999999 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (uzb-eng) config: uzb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 40.95392490046146 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (lit-eng) config: lit-eng split: test revision: 
9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 77.58936507936508 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (ina-eng) config: ina-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 91.28999999999999 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (lfn-eng) config: lfn-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 63.563650793650794 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (zsm-eng) config: zsm-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 94.35 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (ita-eng) config: ita-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 91.43 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (cmn-eng) config: cmn-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 95.73333333333332 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (lvs-eng) config: lvs-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 79.38666666666667 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (glg-eng) config: glg-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 89.64 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (ceb-eng) config: ceb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 21.257184628237262 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (bre-eng) config: bre-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 13.592316017316017 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (ben-eng) config: ben-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 73.22666666666666 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (swg-eng) config: swg-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 51.711309523809526 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (arq-eng) config: arq-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 24.98790634904795 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (kab-eng) config: kab-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 17.19218192918193 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (fra-eng) config: fra-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 93.26666666666667 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (por-eng) config: por-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 94.57333333333334 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (tat-eng) config: tat-eng split: test revision: 
9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 42.35127206127206 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (oci-eng) config: oci-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 51.12318903318903 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (pol-eng) config: pol-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 94.89999999999999 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (war-eng) config: war-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 23.856320290390055 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (aze-eng) config: aze-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 79.52833333333334 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (vie-eng) config: vie-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 95.93333333333334 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (nno-eng) config: nno-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 90.75333333333333 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (cha-eng) config: cha-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 30.802919708029197 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (mhr-eng) config: mhr-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 15.984076294076294 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (dan-eng) config: dan-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 91.82666666666667 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (ell-eng) config: ell-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 91.9 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (amh-eng) config: amh-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 76.36054421768706 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (pam-eng) config: pam-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 9.232711399711398 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (hsb-eng) config: hsb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 45.640803181175855 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (srp-eng) config: srp-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 86.29 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (epo-eng) config: epo-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 88.90833333333332 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (kzj-eng) config: kzj-eng split: test revision: 
9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 11.11880248978075 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (awa-eng) config: awa-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 48.45839345839346 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (fao-eng) config: fao-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 65.68157033805888 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (mal-eng) config: mal-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 94.63852498786997 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (ile-eng) config: ile-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 81.67904761904761 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (bos-eng) config: bos-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 89.35969868173258 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (cor-eng) config: cor-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 5.957229437229437 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (cat-eng) config: cat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 91.50333333333333 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (eus-eng) config: eus-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 63.75498778998778 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (yue-eng) config: yue-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 82.99190476190476 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (swe-eng) config: swe-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 92.95 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (dtp-eng) config: dtp-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 9.054042624042623 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (kat-eng) config: kat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 72.77064981488574 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (jpn-eng) config: jpn-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 93.14 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (csb-eng) config: csb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 29.976786498525627 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (xho-eng) config: xho-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 67.6525821596244 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (orv-eng) config: orv-eng split: test revision: 
9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 33.12964812964813 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (ind-eng) config: ind-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 92.30666666666666 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (tuk-eng) config: tuk-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 34.36077879427633 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (max-eng) config: max-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 52.571845212690285 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (swh-eng) config: swh-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 58.13107263107262 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (hin-eng) config: hin-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 93.33333333333333 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (dsb-eng) config: dsb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 42.87370133925458 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (ber-eng) config: ber-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 20.394327616827614 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (tam-eng) config: tam-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 84.29967426710098 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (slk-eng) config: slk-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 88.80666666666667 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (tgl-eng) config: tgl-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 67.23062271062273 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (ast-eng) config: ast-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 78.08398950131233 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (mkd-eng) config: mkd-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 77.85166666666666 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (khm-eng) config: khm-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 67.63004001231148 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (ces-eng) config: ces-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 89.77000000000001 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (tzl-eng) config: tzl-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 40.2654503616042 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (urd-eng) config: urd-eng split: test revision: 
9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 83.90333333333334 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (ara-eng) config: ara-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 77.80666666666666 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (kor-eng) config: kor-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 84.08 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (yid-eng) config: yid-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 60.43098607367475 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (fin-eng) config: fin-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 88.19333333333333 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (tha-eng) config: tha-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 90.55352798053529 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (wuu-eng) config: wuu-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: f1 value: 88.44999999999999 - task: type: Clustering dataset: type: C-MTEB/ThuNewsClusteringP2P name: MTEB ThuNewsClusteringP2P config: default split: test revision: 5798586b105c0434e4f0fe5e767abe619442cf93 metrics: - type: v_measure value: 57.25416429643288 - task: type: Clustering dataset: type: C-MTEB/ThuNewsClusteringS2S name: MTEB ThuNewsClusteringS2S config: default split: test revision: 8a8b2caeda43f39e13c4bc5bea0f8a667896e10d metrics: - type: v_measure value: 56.616646560243524 - task: type: Retrieval dataset: type: mteb/touche2020 name: MTEB Touche2020 config: default split: test revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f metrics: - type: ndcg_at_10 value: 22.819 - task: type: Classification dataset: type: mteb/toxic_conversations_50k name: MTEB ToxicConversationsClassification config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 71.02579999999999 - task: type: Classification dataset: type: mteb/tweet_sentiment_extraction name: MTEB TweetSentimentExtractionClassification config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 57.60045274476514 - task: type: Clustering dataset: type: mteb/twentynewsgroups-clustering name: MTEB TwentyNewsgroupsClustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 50.346666699466205 - task: type: PairClassification dataset: type: mteb/twittersemeval2015-pairclassification name: MTEB TwitterSemEval2015 config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_ap value: 71.88199004440489 - task: type: PairClassification dataset: type: mteb/twitterurlcorpus-pairclassification name: MTEB TwitterURLCorpus config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_ap value: 85.41587779677383 - task: type: Retrieval dataset: type: C-MTEB/VideoRetrieval name: MTEB VideoRetrieval config: default split: dev revision: 58c2597a5943a2ba48f4668c3b90d796283c5639 metrics: - type: ndcg_at_10 value: 72.792 - task: type: Classification 
dataset: type: C-MTEB/waimai-classification name: MTEB Waimai config: default split: test revision: 339287def212450dcaa9df8c22bf93e9980c7023 metrics: - type: accuracy value: 82.58000000000001 - task: type: Retrieval dataset: type: jinaai/xpqa name: MTEB XPQARetrieval (fr) config: fr split: test revision: c99d599f0a6ab9b85b065da6f9d94f9cf731679f metrics: - type: ndcg_at_10 value: 67.327 ---

## gte-multilingual-base

The **gte-multilingual-base** model is the latest in the [GTE](https://huggingface.co/collections/Alibaba-NLP/gte-models-6680f0b13f885cb431e6d469) (General Text Embedding) family of models, featuring several key attributes:

- **High Performance**: Achieves state-of-the-art (SOTA) results in multilingual retrieval tasks and multi-task representation model evaluations when compared to models of similar size.
- **Training Architecture**: Trained using an encoder-only transformers architecture, resulting in a smaller model size. Unlike previous models based on decoder-only LLM architectures (e.g., gte-qwen2-1.5b-instruct), this model has lower hardware requirements for inference, offering a 10x increase in inference speed.
- **Long Context**: Supports text lengths up to **8192** tokens.
- **Multilingual Capability**: Supports over **70** languages.
- **Elastic Dense Embedding**: Supports elastic output dense representations while maintaining the effectiveness of downstream tasks, which significantly reduces storage costs and improves execution efficiency.
- **Sparse Vectors**: In addition to dense representations, it can also generate sparse vectors.

**Paper**: [mGTE: Generalized Long-Context Text Representation and Reranking Models for Multilingual Text Retrieval](https://arxiv.org/pdf/2407.19669)

## Model Information
- Model Size: 305M
- Embedding Dimension: 768
- Max Input Tokens: 8192

## Usage

- **It is recommended to install xformers and enable unpadding for acceleration, refer to [enable-unpadding-and-xformers](https://huggingface.co/Alibaba-NLP/new-impl#recommendation-enable-unpadding-and-acceleration-with-xformers).**
- **How to use it offline: [new-impl/discussions/2](https://huggingface.co/Alibaba-NLP/new-impl/discussions/2#662b08d04d8c3d0a09c88fa3)**
- **How to use with [TEI](https://github.com/huggingface/text-embeddings-inference): [refs/pr/7](https://huggingface.co/Alibaba-NLP/gte-multilingual-base/discussions/7#66bfb82ea03b764ca92a2221)**

### Get Dense Embeddings with Transformers
```
# Requires transformers>=4.36.0
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

input_texts = [
    "what is the capital of China?",
    "how to implement quick sort in python?",
    "北京",
    "快排算法介绍"
]

model_name_or_path = 'Alibaba-NLP/gte-multilingual-base'
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModel.from_pretrained(model_name_or_path, trust_remote_code=True)

# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=8192, padding=True, truncation=True, return_tensors='pt')

outputs = model(**batch_dict)

dimension = 768  # The output dimension of the embedding, should be in [128, 768]
# Take the CLS token and truncate to the elastic dimension
# (slice the hidden dimension, not the batch dimension)
embeddings = outputs.last_hidden_state[:, 0, :dimension]

embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:1] @ embeddings[1:].T) * 100
print(scores.tolist())
# [[0.3016996383666992, 0.7503870129585266, 0.3203084468841553]]
```

### Use with sentence-transformers
```
# Requires sentence-transformers>=3.0.0
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
import numpy as np
input_texts = [
    "what is the capital of China?",
    "how to implement quick sort in python?",
    "北京",
    "快排算法介绍"
]

model_name_or_path = "Alibaba-NLP/gte-multilingual-base"
model = SentenceTransformer(model_name_or_path, trust_remote_code=True)
embeddings = model.encode(input_texts)
# embeddings.shape (4, 768)

# normalized embeddings
norms = np.linalg.norm(embeddings, ord=2, axis=1, keepdims=True)
norms[norms == 0] = 1
embeddings = embeddings / norms

# sim scores
scores = (embeddings[:1] @ embeddings[1:].T)
print(scores.tolist())
# [[0.301699697971344, 0.7503870129585266, 0.32030850648880005]]
```

### Use with custom code to get dense embeddings and sparse token weights
```
# You can find the script gte_embedding.py in https://huggingface.co/Alibaba-NLP/gte-multilingual-base/blob/main/scripts/gte_embedding.py
from gte_embedding import GTEEmbeddidng

model_name_or_path = 'Alibaba-NLP/gte-multilingual-base'
model = GTEEmbeddidng(model_name_or_path)
query = "中国的首都在哪儿"

docs = [
    "what is the capital of China?",
    "how to implement quick sort in python?",
    "北京",
    "快排算法介绍"
]

embs = model.encode(docs, return_dense=True, return_sparse=True)
print('dense_embeddings vecs', embs['dense_embeddings'])
print('token_weights', embs['token_weights'])
pairs = [(query, doc) for doc in docs]
dense_scores = model.compute_scores(pairs, dense_weight=1.0, sparse_weight=0.0)
sparse_scores = model.compute_scores(pairs, dense_weight=0.0, sparse_weight=1.0)
hybrid_scores = model.compute_scores(pairs, dense_weight=1.0, sparse_weight=0.3)

print('dense_scores', dense_scores)
print('sparse_scores', sparse_scores)
print('hybrid_scores', hybrid_scores)

# dense_scores [0.85302734375, 0.257568359375, 0.76953125, 0.325439453125]
# sparse_scores [0.0, 0.0, 4.600879669189453, 1.570279598236084]
# hybrid_scores [0.85302734375, 0.257568359375, 2.1497951507568356, 0.7965233325958252]
```

Each hybrid score is the weighted sum `dense_weight * dense + sparse_weight * sparse`; for the third document above, 0.7695 + 0.3 × 4.6009 ≈ 2.1498.

## Evaluation

We validated the performance of the **gte-multilingual-base** model on multiple downstream tasks, including multilingual retrieval, cross-lingual retrieval, long text retrieval, and general text representation evaluation on the [MTEB Leaderboard](https://huggingface.co/spaces/mteb/leaderboard), among others.

### Retrieval Task

Retrieval results on [MIRACL](https://arxiv.org/abs/2210.09984) and [MLDR](https://arxiv.org/abs/2402.03216) (multilingual), [MKQA](https://arxiv.org/abs/2007.15207) (cross-lingual), [BEIR](https://arxiv.org/abs/2104.08663) and [LoCo](https://arxiv.org/abs/2402.07440) (English).

![image](./images/mgte-retrieval.png)

- Detail results on [MLDR](https://arxiv.org/abs/2402.03216)

![image](./images/mgte-retrieval.png)

- Detail results on [LoCo](https://arxiv.org/abs/2402.07440)

### MTEB

Results on MTEB English, Chinese, French, Polish.

![image](./images/mgte-mteb.png)

**More detailed experimental results can be found in the [paper](https://arxiv.org/pdf/2407.19669)**.

## Cloud API Services

In addition to the open-source [GTE](https://huggingface.co/collections/Alibaba-NLP/gte-models-6680f0b13f885cb431e6d469) series models, GTE series models are also available as commercial API services on Alibaba Cloud.

- [Embedding Models](https://help.aliyun.com/zh/model-studio/developer-reference/general-text-embedding/): Three versions of the text embedding models are available: text-embedding-v1/v2/v3, with v3 being the latest API service.
- [ReRank Models](https://help.aliyun.com/zh/model-studio/developer-reference/general-text-sorting-model/): The gte-rerank model service is available.
Note that the models behind the commercial APIs are not entirely identical to the open-source models.

## Citation

If you find our paper or models helpful, please consider citing:

```
@misc{zhang2024mgte,
      title={mGTE: Generalized Long-Context Text Representation and Reranking Models for Multilingual Text Retrieval},
      author={Xin Zhang and Yanzhao Zhang and Dingkun Long and Wen Xie and Ziqi Dai and Jialong Tang and Huan Lin and Baosong Yang and Pengjun Xie and Fei Huang and Meishan Zhang and Wenjie Li and Min Zhang},
      year={2024},
      eprint={2407.19669},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2407.19669},
}
```
Efficient-Large-Model/VILA1.5-3b
Efficient-Large-Model
"2024-07-18T20:55:02Z"
59,841
13
transformers
[ "transformers", "safetensors", "llava_llama", "VILA", "VLM", "text-generation", "arxiv:2312.07533", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
text-generation
"2024-04-27T21:25:30Z"
---
license: cc-by-nc-4.0
library_name: transformers
pipeline_tag: text-generation
tags:
- VILA
- VLM
---

# VILA Model Card

## Model details

**Model type:**
VILA is a visual language model (VLM) pretrained with interleaved image-text data at scale, enabling multi-image VLM. VILA is deployable on the edge, including Jetson Orin and laptops, via AWQ 4-bit quantization through the TinyChat framework. We find: (1) image-text pairs are not enough, interleaved image-text is essential; (2) unfreezing the LLM during interleaved image-text pre-training enables in-context learning; (3) re-blending text-only instruction data is crucial to boost both VLM and text-only performance. VILA unveils appealing capabilities, including: multi-image reasoning, in-context learning, visual chain-of-thought, and better world knowledge.

**Model date:**
VILA1.5-3B was trained in May 2024.

**Paper or resources for more information:**
https://github.com/NVLabs/VILA

```
@misc{lin2023vila,
      title={VILA: On Pre-training for Visual Language Models},
      author={Ji Lin and Hongxu Yin and Wei Ping and Yao Lu and Pavlo Molchanov and Andrew Tao and Huizi Mao and Jan Kautz and Mohammad Shoeybi and Song Han},
      year={2023},
      eprint={2312.07533},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```

## License
- The code is released under the Apache 2.0 license as found in the [LICENSE](./LICENSE) file.
- The pretrained weights are released under the [CC-BY-NC-SA-4.0 license](https://creativecommons.org/licenses/by-nc-sa/4.0/deed.en).
- The service is a research preview intended for non-commercial use only, and is subject to the following licenses and terms:
  - [Model License](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md) of LLaMA
  - [Terms of Use](https://openai.com/policies/terms-of-use) of the data generated by OpenAI
  - [Dataset Licenses](https://github.com/Efficient-Large-Model/VILA/blob/main/data_prepare/LICENSE) for each one used during training.

**Where to send questions or comments about the model:**
https://github.com/NVLabs/VILA/issues

## Intended use
**Primary intended uses:**
The primary use of VILA is research on large multimodal models and chatbots.

**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.

## Model Architecture:
**Architecture Type:** Transformer
**Network Architecture:** siglip, shearedllama

## Input:
**Input Type:** Image, Video, Text
**Input Format:** Red, Green, Blue; MP4; String
**Input Parameters:** 2D, 3D

## Output:
**Output Type:** Text
**Output Format:** String

**Supported Hardware Microarchitecture Compatibility:**
* Ampere
* Jetson
* Hopper
* Lovelace

**[Preferred/Supported] Operating System(s):** <br>
Linux

## Model Version(s):
* VILA1.5-3B
* VILA1.5-3B-s2
* Llama-3-VILA1.5-8B
* VILA1.5-13B
* VILA1.5-40B
* VILA1.5-3B-AWQ
* VILA1.5-3B-s2-AWQ
* Llama-3-VILA1.5-8B-AWQ
* VILA1.5-13B-AWQ
* VILA1.5-40B-AWQ

## Training dataset
See [Dataset Preparation](https://github.com/NVLabs/VILA/blob/main/data_prepare/README.md) for more details.

**Data Collection Method by dataset:**
* [Hybrid: Automated, Human]

**Labeling Method by dataset:**
* [Hybrid: Automated, Human]

**Properties (Quantity, Dataset Descriptions, Sensor(s)):**
53 million image-text pairs or interleaved image-text content.

## Evaluation dataset
A collection of 12 benchmarks, including 5 academic VQA benchmarks and 7 recent benchmarks specifically proposed for instruction-following LMMs.
## Inference:
**Engine:**
* PyTorch
* TensorRT-LLM
* TinyChat

**Test Hardware:**
* A100
* Jetson Orin
* RTX 4090

## Ethical Considerations
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
depth-anything/Depth-Anything-V2-Large-hf
depth-anything
"2024-07-05T11:30:29Z"
59,638
5
transformers
[ "transformers", "safetensors", "depth_anything", "depth-estimation", "depth", "relative depth", "arxiv:2406.09414", "arxiv:2401.10891", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
depth-estimation
"2024-06-20T15:31:25Z"
---
library_name: transformers
license: cc-by-nc-4.0
tags:
- depth
- relative depth
pipeline_tag: depth-estimation
widget:
- inference: false
---

# Depth Anything V2 Large – Transformers Version

Depth Anything V2 is trained from 595K synthetic labeled images and 62M+ real unlabeled images, providing the most capable monocular depth estimation (MDE) model with the following features:
- more fine-grained details than Depth Anything V1
- more robust than Depth Anything V1 and SD-based models (e.g., Marigold, Geowizard)
- more efficient (10x faster) and more lightweight than SD-based models
- impressive fine-tuned performance with our pre-trained models

This model checkpoint is compatible with the transformers library.

Depth Anything V2 was introduced in [the paper of the same name](https://arxiv.org/abs/2406.09414) by Lihe Yang et al. It uses the same architecture as the original Depth Anything release, but uses synthetic data and a larger-capacity teacher model to achieve much finer and more robust depth predictions. The original Depth Anything model was introduced in the paper [Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data](https://arxiv.org/abs/2401.10891) by Lihe Yang et al., and was first released in [this repository](https://github.com/LiheYoung/Depth-Anything).

[Online demo](https://huggingface.co/spaces/depth-anything/Depth-Anything-V2).

## Model description

Depth Anything V2 leverages the [DPT](https://huggingface.co/docs/transformers/model_doc/dpt) architecture with a [DINOv2](https://huggingface.co/docs/transformers/model_doc/dinov2) backbone.

The model is trained on ~600K synthetic labeled images and ~62 million real unlabeled images, obtaining state-of-the-art results for both relative and absolute depth estimation.

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/depth_anything_overview.jpg" alt="drawing" width="600"/>

<small> Depth Anything overview. Taken from the <a href="https://arxiv.org/abs/2401.10891">original paper</a>.</small>

## Intended uses & limitations

You can use the raw model for tasks like zero-shot depth estimation. See the [model hub](https://huggingface.co/models?search=depth-anything) to look for other versions on a task that interests you.
### How to use

Here is how to use this model to perform zero-shot depth estimation:

```python
from transformers import pipeline
from PIL import Image
import requests

# load pipe
pipe = pipeline(task="depth-estimation", model="depth-anything/Depth-Anything-V2-Large-hf")

# load image
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

# inference
depth = pipe(image)["depth"]
```

Alternatively, you can use the model and processor classes:

```python
from transformers import AutoImageProcessor, AutoModelForDepthEstimation
import torch
import numpy as np
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

image_processor = AutoImageProcessor.from_pretrained("depth-anything/Depth-Anything-V2-Large-hf")
model = AutoModelForDepthEstimation.from_pretrained("depth-anything/Depth-Anything-V2-Large-hf")

# prepare image for the model
inputs = image_processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)
    predicted_depth = outputs.predicted_depth

# interpolate to original size
prediction = torch.nn.functional.interpolate(
    predicted_depth.unsqueeze(1),
    size=image.size[::-1],
    mode="bicubic",
    align_corners=False,
)
```

For more code examples, please refer to the [documentation](https://huggingface.co/transformers/main/model_doc/depth_anything.html#).

### Citation

```bibtex
@misc{yang2024depth,
      title={Depth Anything V2},
      author={Lihe Yang and Bingyi Kang and Zilong Huang and Zhen Zhao and Xiaogang Xu and Jiashi Feng and Hengshuang Zhao},
      year={2024},
      eprint={2406.09414},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```
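As a follow-up to the processor/model example above, the interpolated prediction can be turned into a viewable grayscale image. This is a minimal sketch, not part of the original card: it assumes the `prediction` tensor from the previous block, and uses simple max-normalization, one common choice for visualizing relative depth:

```python
# Minimal sketch (assumes `prediction` from the example above):
# convert the interpolated depth tensor to an 8-bit grayscale image.
import numpy as np
from PIL import Image

output = prediction.squeeze().cpu().numpy()            # (H, W) float depth map
formatted = (output * 255 / np.max(output)).astype("uint8")
depth_image = Image.fromarray(formatted)               # PIL grayscale image
depth_image.save("depth.png")
```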
gsdf/Counterfeit-V2.5
gsdf
"2023-03-14T17:41:46Z"
59,602
1,554
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2023-02-02T14:02:11Z"
---
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---

# Update

V2.5 has been updated for ease of use as an anime-style model.
I use this embedding for negative prompts:
https://huggingface.co/datasets/gsdf/EasyNegative

Shared by-products:
- V2.1 … similar feel in use to V2.0
- V2.2 … NSFW model

# Counterfeit-V2.5
e.g.

![sample1](https://huggingface.co/gsdf/Counterfeit-V2.5/resolve/main/V2.5_sample/sample01.png)
```
((masterpiece,best quality)),1girl, solo, animal ears, rabbit, barefoot, knees up, dress, sitting, rabbit ears, short sleeves, looking at viewer, grass, short hair, smile, white hair, puffy sleeves, outdoors, puffy short sleeves, bangs, on ground, full body, animal, white dress, sunlight, brown eyes, dappled sunlight, day, depth of field
Negative prompt: EasyNegative, extra fingers,fewer fingers,
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 10, Size: 448x768, Denoising strength: 0.6, Hires upscale: 1.8, Hires upscaler: Latent
```

![sample2](https://huggingface.co/gsdf/Counterfeit-V2.5/resolve/main/V2.5_sample/sample02.png)
```
((masterpiece,best quality)),1girl, from below, solo, school uniform, serafuku, sky, cloud, black hair, skirt, sailor collar, looking at viewer, short hair, building, bangs, neckerchief, long sleeves, cloudy sky, power lines, shirt, cityscape, pleated skirt, scenery, blunt bangs, city, night, black sailor collar, closed mouth, black skirt, medium hair, school bag , holding bag
Negative prompt: EasyNegative, extra fingers,fewer fingers,
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 10, Size: 832x512, Denoising strength: 0.6, Hires upscale: 1.8, Hires upscaler: Latent
```

![sample3](https://huggingface.co/gsdf/Counterfeit-V2.5/resolve/main/V2.5_sample/sample03.png)
```
((masterpiece,best quality)),2girls, black kimono, black legwear, black ribbon, black hair, cherry blossoms, day, flower, hair bun, hair ribbon, japanese clothes, kimono, long hair, looking at viewer, looking back, multiple girls, obi, outdoors, red eyes, red hair, ribbon, sandals, single hair bun, stairs, standing, statue, torii, tree, white kimono, yellow eyes
Negative prompt: EasyNegative, extra fingers,fewer fingers,
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 10, Size: 640x960, Denoising strength: 0.58, Hires upscale: 1.8, Hires upscaler: Latent
```

![sample4](https://huggingface.co/gsdf/Counterfeit-V2.5/resolve/main/V2.5_sample/sample04.png)
```
((masterpiece,best quality)),1girl, bangs, blue eyes, blurry background, branch, brown hair, dappled sunlight, flower, from side, hair flower, hair ornament, japanese clothes, kimono, leaf, (maple leaf:1.9), obi, outdoors, sash, solo, sunlight, upper body
Negative prompt: EasyNegative, extra fingers,fewer fingers,
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 10, Size: 864x512, Denoising strength: 0.58, Hires upscale: 1.8, Hires upscaler: Latent
```

![sample5](https://huggingface.co/gsdf/Counterfeit-V2.5/resolve/main/V2.5_sample/sample05.png)
```
((masterpiece,best quality))1girl, solo, black skirt, blue eyes, electric guitar, guitar, headphones, holding, holding plectrum, instrument, long hair, , music, one side up, pink hair, playing guiter, pleated skirt, black shirt, indoors
Negative prompt: EasyNegative, extra fingers,fewer fingers,
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 10, Size: 864x512, Denoising strength: 0.58, Hires upscale: 1.8, Hires upscaler: Latent
```
![sample6](https://huggingface.co/gsdf/Counterfeit-V2.5/resolve/main/V2.5_sample/sample06.png)
```
((masterpiece,best quality)), 1girl, food, fruit, solo, skirt, shop, indoors, jacket, shopping, basket, jewelry, shirt, shelf, short hair, black hair, plaid skirt, black jacket, dutch angle, yellow eyes, looking at viewer
Negative prompt: EasyNegative, extra fingers,fewer fingers,
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 10, Size: 864x512, Denoising strength: 0.58, Hires upscale: 1.8, Hires upscaler: Latent
```
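Since this checkpoint ships diffusers weights (`StableDiffusionPipeline`), the samples above can be approximated without a WebUI. The following is a minimal sketch using only the standard diffusers API, not an official recipe from this card: the `EasyNegative` textual-inversion embedding is omitted (it would need to be loaded separately), and the WebUI-specific Hires-fix settings are skipped:

```python
# Minimal sketch with the standard diffusers API (not an official example).
# EasyNegative is a textual-inversion embedding and is omitted here; the
# Hires upscale / Denoising strength settings in the samples are WebUI features.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "gsdf/Counterfeit-V2.5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "((masterpiece,best quality)),1girl, solo, animal ears, rabbit, dress, outdoors, day",
    negative_prompt="extra fingers, fewer fingers",
    num_inference_steps=20,   # Steps: 20
    guidance_scale=10,        # CFG scale: 10
    width=448, height=768,    # Size: 448x768
).images[0]
image.save("counterfeit_sample.png")
```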
openvla/openvla-7b
openvla
"2024-06-14T01:30:27Z"
59,511
59
transformers
[ "transformers", "safetensors", "openvla", "feature-extraction", "robotics", "vla", "image-text-to-text", "multimodal", "pretraining", "custom_code", "en", "arxiv:2406.09246", "license:mit", "region:us" ]
image-text-to-text
"2024-06-10T16:35:59Z"
---
library_name: transformers
tags:
- robotics
- vla
- image-text-to-text
- multimodal
- pretraining
license: mit
language:
- en
pipeline_tag: image-text-to-text
---

# OpenVLA 7B

OpenVLA 7B (`openvla-7b`) is an open vision-language-action model trained on 970K robot manipulation episodes from the [Open X-Embodiment](https://robotics-transformer-x.github.io/) dataset. The model takes language instructions and camera images as input and generates robot actions. It supports controlling multiple robots out-of-the-box, and can be quickly adapted for new robot domains via (parameter-efficient) fine-tuning.

All OpenVLA checkpoints, as well as our [training codebase](https://github.com/openvla/openvla), are released under an MIT License.

For full details, please read [our paper](https://arxiv.org/abs/2406.09246) and see [our project page](https://openvla.github.io/).

## Model Summary

- **Developed by:** The OpenVLA team consisting of researchers from Stanford, UC Berkeley, Google DeepMind, and the Toyota Research Institute.
- **Model type:** Vision-language-action (language, image => robot actions)
- **Language(s) (NLP):** en
- **License:** MIT
- **Finetuned from:** [`prism-dinosiglip-224px`](https://github.com/TRI-ML/prismatic-vlms), a VLM trained from:
    + **Vision Backbone**: DINOv2 ViT-L/14 and SigLIP ViT-So400M/14
    + **Language Model**: Llama-2
- **Pretraining Dataset:** [Open X-Embodiment](https://robotics-transformer-x.github.io/) -- specific component datasets can be found [here](https://github.com/openvla/openvla).
- **Repository:** [https://github.com/openvla/openvla](https://github.com/openvla/openvla)
- **Paper:** [OpenVLA: An Open-Source Vision-Language-Action Model](https://arxiv.org/abs/2406.09246)
- **Project Page & Videos:** [https://openvla.github.io/](https://openvla.github.io/)

## Uses

OpenVLA models take a language instruction and a camera image of a robot workspace as input, and predict (normalized) robot actions consisting of 7-DoF end-effector deltas of the form (x, y, z, roll, pitch, yaw, gripper). To execute on an actual robot platform, actions need to be *un-normalized* subject to statistics computed on a per-robot, per-dataset basis. See [our repository](https://github.com/openvla/openvla) for more information.

OpenVLA models can be used zero-shot to control robots for specific combinations of embodiments and domains seen in the Open-X pretraining mixture (e.g., for [BridgeV2 environments with a Widow-X robot](https://rail-berkeley.github.io/bridgedata/)). They can also be efficiently *fine-tuned* for new tasks and robot setups given minimal demonstration data; [see here](https://github.com/openvla/openvla/blob/main/scripts/finetune.py).

**Out-of-Scope:** OpenVLA models do not zero-shot generalize to new (unseen) robot embodiments, or setups that are not represented in the pretraining mix; in these cases, we suggest collecting a dataset of demonstrations on the desired setup, and fine-tuning OpenVLA models instead.

## Getting Started

OpenVLA 7B can be used out-of-the-box to control multiple robots for domains represented in the pretraining mixture. For example, here is how to load `openvla-7b` for zero-shot instruction following in the [BridgeV2 environments](https://rail-berkeley.github.io/bridgedata/) with a Widow-X robot:

```python
# Install minimal dependencies (`torch`, `transformers`, `timm`, `tokenizers`, ...)
# > pip install -r https://raw.githubusercontent.com/openvla/openvla/main/requirements-min.txt
from transformers import AutoModelForVision2Seq, AutoProcessor
from PIL import Image

import torch

# Load Processor & VLA
processor = AutoProcessor.from_pretrained("openvla/openvla-7b", trust_remote_code=True)
vla = AutoModelForVision2Seq.from_pretrained(
    "openvla/openvla-7b",
    attn_implementation="flash_attention_2",  # [Optional] Requires `flash_attn`
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True
).to("cuda:0")

# Grab image input & format prompt
image: Image.Image = get_from_camera(...)
prompt = "In: What action should the robot take to {<INSTRUCTION>}?\nOut:"

# Predict Action (7-DoF; un-normalize for BridgeV2)
inputs = processor(prompt, image).to("cuda:0", dtype=torch.bfloat16)
action = vla.predict_action(**inputs, unnorm_key="bridge_orig", do_sample=False)

# Execute...
robot.act(action, ...)
```

For more examples, including scripts for fine-tuning OpenVLA models on your own robot demonstration datasets, see [our training repository](https://github.com/openvla/openvla).

## Citation

**BibTeX:**

```bibtex
@article{kim24openvla,
    title={OpenVLA: An Open-Source Vision-Language-Action Model},
    author={{Moo Jin} Kim and Karl Pertsch and Siddharth Karamcheti and Ted Xiao and Ashwin Balakrishna and Suraj Nair and Rafael Rafailov and Ethan Foster and Grace Lam and Pannag Sanketi and Quan Vuong and Thomas Kollar and Benjamin Burchfiel and Russ Tedrake and Dorsa Sadigh and Sergey Levine and Percy Liang and Chelsea Finn},
    journal = {arXiv preprint arXiv:2406.09246},
    year={2024}
}
```
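For intuition on the un-normalization step described under Uses (which `predict_action` performs internally when given `unnorm_key`), here is a hedged sketch. It assumes actions were min-max scaled to [-1, 1] using per-dataset low/high statistics; the statistics below are hypothetical placeholders, not real BridgeV2 values:

```python
# Hedged sketch of un-normalizing a predicted action. Assumes min-max scaling
# to [-1, 1]; `low`/`high` are hypothetical placeholders, not real statistics.
import numpy as np

def unnormalize(action: np.ndarray, low: np.ndarray, high: np.ndarray) -> np.ndarray:
    # Invert a_norm = 2 * (a - low) / (high - low) - 1
    return 0.5 * (action + 1.0) * (high - low) + low

norm_action = np.array([0.1, -0.2, 0.05, 0.0, 0.0, 0.3, 1.0])  # (x, y, z, roll, pitch, yaw, gripper)
low = np.array([-0.05] * 6 + [0.0])    # hypothetical per-dimension minima
high = np.array([0.05] * 6 + [1.0])    # hypothetical per-dimension maxima
print(unnormalize(norm_action, low, high))
```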
mixedbread-ai/mxbai-rerank-xsmall-v1
mixedbread-ai
"2024-07-22T14:31:51Z"
59,455
27
transformers
[ "transformers", "onnx", "safetensors", "deberta-v2", "text-classification", "reranker", "transformers.js", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-02-29T10:31:57Z"
---
library_name: transformers
tags:
- reranker
- transformers.js
license: apache-2.0
language:
- en
---

<br><br>

<p align="center">
<!-- mixedbread ai logo: inline SVG path data omitted -->
fill="#ea580e" d="M876.001 376.171c5.874 1.347 11.748 2.694 17.812 4.789-.81 5.265-2.687 9.791-2.639 14.296.124 11.469-4.458 20.383-12.73 27.863-2.075 1.877-3.659 4.286-5.668 6.248l-22.808 21.967c-.442.422-1.212.488-1.813.757l-23.113 10.389-9.875 4.514c-2.305-6.09-4.609-12.181-6.614-18.676 7.64-4.837 15.567-8.54 22.18-13.873 9.697-7.821 18.931-16.361 27.443-25.455 5.613-5.998 12.679-11.331 14.201-20.475.699-4.2 2.384-8.235 3.623-12.345z"></path><path fill="#e95514" d="M815.103 467.384c3.356-1.894 6.641-3.415 9.94-4.903l23.113-10.389c.6-.269 1.371-.335 1.813-.757l22.808-21.967c2.008-1.962 3.593-4.371 5.668-6.248 8.272-7.48 12.854-16.394 12.73-27.863-.049-4.505 1.828-9.031 2.847-13.956 5.427.559 10.836 1.526 16.609 2.68-1.863 17.245-10.272 32.119-16.91 47.544-2.387 5.546-7.239 10.223-11.588 14.69-7.753 7.962-15.978 15.464-24.011 23.152-.478.458-.944 1.002-1.525 1.269l-36.069 16.355c-2.076-6.402-3.783-12.81-5.425-19.607z"></path><path fill="#eb620b" d="M783.944 404.402c9.499-8.388 19.556-15.905 28.437-24.621 6.631-6.508 11.744-14.564 17.575-22.273 9.271 4.016 18.501 8.375 27.893 13.43-4.134 7.07-8.017 13.778-12.833 19.731-5.785 7.15-12.109 13.917-18.666 20.376-7.99 7.869-16.466 15.244-24.731 22.832l-17.674-29.475z"></path><path fill="#ea544c" d="M1197.986 854.686c-9.756-3.309-16.79-10.044-22.88-18.059l-28.001-36.417c8.601-5.939 17.348-11.563 26.758-17.075 1.615 1.026 2.639 1.876 3.505 2.865l26.664 30.44c3.723 4.139 7.995 7.785 12.017 11.656l-18.064 26.591z"></path><path fill="#ec6333" d="M1351.41 332.903c-5.667-9.409-10.361-19.149-16.445-27.926-2.833-4.087-8.504-6.128-12.62-9.432-3.184-2.555-5.849-5.745-8.926-8.447-3.454-3.033-6.756-6.52-10.753-8.598-9.391-4.88-19.157-9.039-29.138-13.499 1.18-5.441 2.727-10.873 4.81-16.607 11.918 4.674 24.209 8.261 34.464 14.962 14.239 9.304 29.011 18.453 39.595 32.464 2.386 3.159 5.121 6.077 7.884 8.923 6.564 6.764 10.148 14.927 11.723 24.093l-20.594 4.067z"></path><path fill="#eb5e5b" d="M1117 536.549c-6.113-4.702-9.965-11.44-11.917-18.955-2.292-8.819-4.066-17.74-9.467-25.337-4.327-6.085-3.122-13.382-4.6-20.088l-4.55-21.241c-1.59-8.054-3.172-16.118-4.422-24.23l-5.037-36.129c6.382-1.43 12.777-2.462 19.582-3.443 1.906 11.646 3.426 23.24 4.878 34.842.307 2.453.717 4.973.477 7.402-1.86 18.84 2.834 36.934 5.347 55.352 1.474 10.806 4.885 20.848 7.101 31.302 1.394 6.579 1.774 13.374 2.609 20.523z"></path><path fill="#ec644b" d="M1263.638 290.071c4.697 2.713 9.183 5.458 13.45 8.509 17.199 12.295 33.158 25.836 43.873 44.907-8.026 4.725-16.095 9.106-24.83 13.372-11.633-15.937-25.648-28.515-41.888-38.689-1.609-1.008-3.555-1.48-5.344-2.2 2.329-3.852 4.766-7.645 6.959-11.573l7.78-14.326z"></path><path fill="#eb5f2d" d="M1372.453 328.903c-2.025-9.233-5.608-17.396-12.172-24.16-2.762-2.846-5.498-5.764-7.884-8.923-10.584-14.01-25.356-23.16-39.595-32.464-10.256-6.701-22.546-10.289-34.284-15.312.325-5.246 1.005-10.444 2.027-15.863l47.529 22.394c.89.428 1.83.901 2.516 1.584l45.564 45.193c7.69 7.233 9.352 16.472 11.849 26.084-5.032.773-10.066 1.154-15.55 1.466z"></path><path fill="#e95a0f" d="M801.776 434.171c8.108-7.882 16.584-15.257 24.573-23.126 6.558-6.459 12.881-13.226 18.666-20.376 4.817-5.953 8.7-12.661 13.011-19.409 5.739 1.338 11.463 3.051 17.581 4.838-.845 4.183-2.53 8.219-3.229 12.418-1.522 9.144-8.588 14.477-14.201 20.475-8.512 9.094-17.745 17.635-27.443 25.455-6.613 5.333-14.54 9.036-22.223 13.51-2.422-4.469-4.499-8.98-6.735-13.786z"></path><path fill="#eb5e5b" d="M1248.533 316.002c2.155.688 4.101 1.159 5.71 2.168 16.24 10.174 30.255 22.752 41.532 38.727-7.166 
"></path></svg>
</p>

<p align="center">
<b>The crispy rerank family from <a href="https://mixedbread.ai"><b>Mixedbread</b></a>.</b>
</p>

# mxbai-rerank-xsmall-v1

This is the smallest model in our family of powerful reranker models. You can learn more about the models in our [blog post](https://www.mixedbread.ai/blog/mxbai-rerank-v1). We have three models:

- [mxbai-rerank-xsmall-v1](https://huggingface.co/mixedbread-ai/mxbai-rerank-xsmall-v1) (🍞)
- [mxbai-rerank-base-v1](https://huggingface.co/mixedbread-ai/mxbai-rerank-base-v1)
- [mxbai-rerank-large-v1](https://huggingface.co/mixedbread-ai/mxbai-rerank-large-v1)

## Quickstart

Currently, the best way to use our models is with the most recent version of sentence-transformers.

`pip install -U sentence-transformers`

Let's say you have a query, and you want to rerank a set of documents. You can do that with only one line of code:

```python
from sentence_transformers import CrossEncoder

# Load the model; here we use our xsmall model
model = CrossEncoder("mixedbread-ai/mxbai-rerank-xsmall-v1")

# Example query and documents
query = "Who wrote 'To Kill a Mockingbird'?"

documents = [
    "'To Kill a Mockingbird' is a novel by Harper Lee published in 1960. It was immediately successful, winning the Pulitzer Prize, and has become a classic of modern American literature.",
    "The novel 'Moby-Dick' was written by Herman Melville and first published in 1851. It is considered a masterpiece of American literature and deals with complex themes of obsession, revenge, and the conflict between good and evil.",
    "Harper Lee, an American novelist widely known for her novel 'To Kill a Mockingbird', was born in 1926 in Monroeville, Alabama. She received the Pulitzer Prize for Fiction in 1961.",
    "Jane Austen was an English novelist known primarily for her six major novels, which interpret, critique and comment upon the British landed gentry at the end of the 18th century.",
    "The 'Harry Potter' series, which consists of seven fantasy novels written by British author J.K. Rowling, is among the most popular and critically acclaimed books of the modern era.",
    "'The Great Gatsby', a novel written by American author F. Scott Fitzgerald, was published in 1925. The story is set in the Jazz Age and follows the life of millionaire Jay Gatsby and his pursuit of Daisy Buchanan.",
]

# Let's get the scores
results = model.rank(query, documents, return_documents=True, top_k=3)
```
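To inspect the output, here is a minimal sketch that continues the example above. It assumes, as in recent sentence-transformers versions, that `rank` returns a list of dicts with `corpus_id`, `score`, and (because `return_documents=True`) `text`:

```python
# Minimal sketch, continuing from the quickstart above.
# Assumption: each entry in `results` is a dict with "corpus_id", "score", and "text".
for hit in results:
    print(f"doc {hit['corpus_id']} | score {hit['score']:.4f} | {hit['text'][:80]}...")
```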
<details>
<summary>JavaScript Example</summary>

Install [transformers.js](https://github.com/xenova/transformers.js)

`npm i @xenova/transformers`

Let's say you have a query, and you want to rerank a set of documents. In JavaScript, you first need to define a ranking function:

```javascript
import { AutoTokenizer, AutoModelForSequenceClassification } from '@xenova/transformers';

const model_id = 'mixedbread-ai/mxbai-rerank-xsmall-v1';
const model = await AutoModelForSequenceClassification.from_pretrained(model_id);
const tokenizer = await AutoTokenizer.from_pretrained(model_id);

/**
 * Performs ranking with the CrossEncoder on the given query and documents. Returns a sorted list with the document indices and scores.
 * @param {string} query A single query
 * @param {string[]} documents A list of documents
 * @param {Object} options Options for ranking
 * @param {number} [options.top_k=undefined] Return the top-k documents. If undefined, all documents are returned.
 * @param {boolean} [options.return_documents=false] If true, also returns the documents. If false, only returns the indices and scores.
 */
async function rank(query, documents, {
    top_k = undefined,
    return_documents = false,
} = {}) {
    const inputs = tokenizer(
        new Array(documents.length).fill(query),
        {
            text_pair: documents,
            padding: true,
            truncation: true,
        }
    )
    const { logits } = await model(inputs);
    return logits
        .sigmoid()
        .tolist()
        .map(([score], i) => ({
            corpus_id: i,
            score,
            ...(return_documents ? { text: documents[i] } : {})
        }))
        .sort((a, b) => b.score - a.score)
        .slice(0, top_k);
}

// Example usage:
const query = "Who wrote 'To Kill a Mockingbird'?"

const documents = [
    "'To Kill a Mockingbird' is a novel by Harper Lee published in 1960. It was immediately successful, winning the Pulitzer Prize, and has become a classic of modern American literature.",
    "The novel 'Moby-Dick' was written by Herman Melville and first published in 1851. It is considered a masterpiece of American literature and deals with complex themes of obsession, revenge, and the conflict between good and evil.",
    "Harper Lee, an American novelist widely known for her novel 'To Kill a Mockingbird', was born in 1926 in Monroeville, Alabama. She received the Pulitzer Prize for Fiction in 1961.",
    "Jane Austen was an English novelist known primarily for her six major novels, which interpret, critique and comment upon the British landed gentry at the end of the 18th century.",
    "The 'Harry Potter' series, which consists of seven fantasy novels written by British author J.K. Rowling, is among the most popular and critically acclaimed books of the modern era.",
    "'The Great Gatsby', a novel written by American author F. Scott Fitzgerald, was published in 1925. The story is set in the Jazz Age and follows the life of millionaire Jay Gatsby and his pursuit of Daisy Buchanan."
]

const results = await rank(query, documents, { return_documents: true, top_k: 3 });
console.log(results);
```

</details>

## Using the API

You can use the large model via our API as follows:

```python
from mixedbread_ai.client import MixedbreadAI

mxbai = MixedbreadAI(api_key="{MIXEDBREAD_API_KEY}")

res = mxbai.reranking(
    model="mixedbread-ai/mxbai-rerank-large-v1",
    query="Who is the author of To Kill a Mockingbird?",
    input=[
        "To Kill a Mockingbird is a novel by Harper Lee published in 1960. It was immediately successful, winning the Pulitzer Prize, and has become a classic of modern American literature.",
        "The novel Moby-Dick was written by Herman Melville and first published in 1851. It is considered a masterpiece of American literature and deals with complex themes of obsession, revenge, and the conflict between good and evil.",
        "Harper Lee, an American novelist widely known for her novel To Kill a Mockingbird, was born in 1926 in Monroeville, Alabama. She received the Pulitzer Prize for Fiction in 1961.",
        "Jane Austen was an English novelist known primarily for her six major novels, which interpret, critique and comment upon the British landed gentry at the end of the 18th century.",
        "The Harry Potter series, which consists of seven fantasy novels written by British author J.K. Rowling, is among the most popular and critically acclaimed books of the modern era.",
        "The Great Gatsby, a novel written by American author F. Scott Fitzgerald, was published in 1925. The story is set in the Jazz Age and follows the life of millionaire Jay Gatsby and his pursuit of Daisy Buchanan.",
    ],
    top_k=3,
    return_input=False,
)

print(res.data)
```

The API comes with additional features, such as a continuously trained reranker! Check out the [docs](https://www.mixedbread.ai/docs) for more information.

## Evaluation

Our reranker models are designed to elevate your search. They work extremely well in combination with keyword search and can even outperform semantic search systems in many cases.

| Model | NDCG@10 | Accuracy@3 |
|:---|---:|---:|
| Lexical Search (Lucene) | 38.0 | 66.4 |
| [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | 41.6 | 66.9 |
| [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | 45.2 | 70.6 |
| cohere-embed-v3 (semantic search) | 47.5 | 70.9 |
| [mxbai-rerank-xsmall-v1](https://huggingface.co/mixedbread-ai/mxbai-rerank-xsmall-v1) | **43.9** | **70.0** |
| [mxbai-rerank-base-v1](https://huggingface.co/mixedbread-ai/mxbai-rerank-base-v1) | **46.9** | **72.3** |
| [mxbai-rerank-large-v1](https://huggingface.co/mixedbread-ai/mxbai-rerank-large-v1) | **48.8** | **74.9** |

The reported results are aggregated from 11 datasets of BEIR. We used [Pyserini](https://github.com/castorini/pyserini/) to evaluate the models. Find more in our [blog post](https://www.mixedbread.ai/blog/mxbai-rerank-v1) and on this [spreadsheet](https://docs.google.com/spreadsheets/d/15ELkSMFv-oHa5TRiIjDvhIstH9dlc3pnZeO-iGz4Ld4/edit?usp=sharing).

## Community

Please join our [Discord Community](https://discord.gg/jDfMHzAVfU) and share your feedback and thoughts! We are here to help and also always happy to chat.

## License

Apache 2.0
MoritzLaurer/deberta-v3-base-zeroshot-v1.1-all-33
MoritzLaurer
"2024-06-03T10:27:59Z"
59,447
21
transformers
[ "transformers", "pytorch", "onnx", "safetensors", "deberta-v2", "text-classification", "zero-shot-classification", "en", "arxiv:2312.17543", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
zero-shot-classification
"2023-11-23T22:22:02Z"
---
language:
- en
tags:
- text-classification
- zero-shot-classification
pipeline_tag: zero-shot-classification
library_name: transformers
license: mit
---

# Model description: deberta-v3-base-zeroshot-v1.1-all-33

The model is designed for zero-shot classification with the Hugging Face pipeline. The model can do one universal classification task: determine whether a hypothesis is "true" or "not true" given a text (`entailment` vs. `not_entailment`). This task format is based on the Natural Language Inference task (NLI). The task is so universal that any classification task can be reformulated into it. A detailed description of how the model was trained and how it can be used is available in this [paper](https://arxiv.org/pdf/2312.17543.pdf).

## Training data

The model was trained on a mixture of __33 datasets and 387 classes__ that have been reformatted into this universal format.

1. Five NLI datasets with ~885k texts: "mnli", "anli", "fever", "wanli", "ling"
2. 28 classification tasks reformatted into the universal NLI format. ~51k cleaned texts were used to avoid overfitting: 'amazonpolarity', 'imdb', 'appreviews', 'yelpreviews', 'rottentomatoes', 'emotiondair', 'emocontext', 'empathetic', 'financialphrasebank', 'banking77', 'massive', 'wikitoxic_toxicaggregated', 'wikitoxic_obscene', 'wikitoxic_threat', 'wikitoxic_insult', 'wikitoxic_identityhate', 'hateoffensive', 'hatexplain', 'biasframes_offensive', 'biasframes_sex', 'biasframes_intent', 'agnews', 'yahootopics', 'trueteacher', 'spam', 'wellformedquery', 'manifesto', 'capsotu'. See details on each dataset here: https://github.com/MoritzLaurer/zeroshot-classifier/blob/main/datasets_overview.csv

Note that, compared to other NLI models, this model predicts two classes (`entailment` vs. `not_entailment`) as opposed to three classes (entailment/neutral/contradiction).

The model was only trained on English data. For __multilingual use-cases__, I recommend machine translating texts to English with libraries like [EasyNMT](https://github.com/UKPLab/EasyNMT). English-only models tend to perform better than multilingual models, and validation with English data can be easier if you don't speak all languages in your corpus.

### How to use the model

#### Simple zero-shot classification pipeline

```python
#!pip install transformers[sentencepiece]
from transformers import pipeline

text = "Angela Merkel is a politician in Germany and leader of the CDU"
hypothesis_template = "This example is about {}"
classes_verbalized = ["politics", "economy", "entertainment", "environment"]

zeroshot_classifier = pipeline("zero-shot-classification", model="MoritzLaurer/deberta-v3-base-zeroshot-v1.1-all-33")
output = zeroshot_classifier(text, classes_verbalized, hypothesis_template=hypothesis_template, multi_label=False)
print(output)
```
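For illustration, here is a minimal sketch of what the pipeline does under the hood: each candidate class is verbalized into a hypothesis, and the model scores whether the text entails it. The label name `entailment` is an assumption here; check `model.config.label2id` for the exact mapping.

```python
# Minimal sketch of the underlying universal NLI task (not the official API).
# Assumption: one of the model's two output classes is named "entailment".
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "MoritzLaurer/deberta-v3-base-zeroshot-v1.1-all-33"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

text = "Angela Merkel is a politician in Germany and leader of the CDU"
hypothesis = "This example is about politics"

# Encode the (text, hypothesis) pair and score entailment
inputs = tokenizer(text, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]

entailment_id = model.config.label2id["entailment"]  # assumed label name
print(f"P(entailment) = {probs[entailment_id]:.3f}")
```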
### Details on data and training

The code for preparing the data and training & evaluating the model is fully open-source here: https://github.com/MoritzLaurer/zeroshot-classifier/tree/main

Hyperparameters and other details are available in this Weights & Biases repo: https://wandb.ai/moritzlaurer/deberta-v3-base-zeroshot-v1-1-all-33/table?workspace=user-

## Metrics

Balanced accuracy (the unweighted mean of per-class recall) is reported for all datasets. `deberta-v3-base-zeroshot-v1.1-all-33` was trained on all datasets, with a maximum of only 500 texts per class to avoid overfitting. The metrics on these datasets are therefore not strictly zeroshot, as the model has seen some data for each task during training. `deberta-v3-base-zeroshot-v1.1-heldout` indicates zeroshot performance on the respective dataset. To calculate these zeroshot metrics, the pipeline was run 28 times, each time with one dataset held out from training to simulate a zeroshot setup.

![figure_base_v1.1](https://raw.githubusercontent.com/MoritzLaurer/zeroshot-classifier/main/results/fig_base_v1.1.png)

| | deberta-v3-base-mnli-fever-anli-ling-wanli-binary | deberta-v3-base-zeroshot-v1.1-heldout | deberta-v3-base-zeroshot-v1.1-all-33 |
|:---|---:|---:|---:|
| datasets mean (w/o nli) | 62 | 70.7 | 84 |
| amazonpolarity (2) | 91.7 | 95.7 | 96 |
| imdb (2) | 87.3 | 93.6 | 94.5 |
| appreviews (2) | 91.3 | 92.2 | 94.4 |
| yelpreviews (2) | 95.1 | 97.4 | 98.3 |
| rottentomatoes (2) | 83 | 88.7 | 90.8 |
| emotiondair (6) | 46.5 | 42.6 | 74.5 |
| emocontext (4) | 58.5 | 57.4 | 81.2 |
| empathetic (32) | 31.3 | 37.3 | 52.7 |
| financialphrasebank (3) | 78.3 | 68.9 | 91.2 |
| banking77 (72) | 18.9 | 46 | 73.7 |
| massive (59) | 44 | 56.6 | 78.9 |
| wikitoxic_toxicaggreg (2) | 73.7 | 82.5 | 90.5 |
| wikitoxic_obscene (2) | 77.3 | 91.6 | 92.6 |
| wikitoxic_threat (2) | 83.5 | 95.2 | 96.7 |
| wikitoxic_insult (2) | 79.6 | 91 | 91.6 |
| wikitoxic_identityhate (2) | 83.9 | 88 | 94.4 |
| hateoffensive (3) | 55.2 | 66.1 | 86 |
| hatexplain (3) | 44.1 | 57.6 | 76.9 |
| biasframes_offensive (2) | 56.8 | 85.4 | 87 |
| biasframes_sex (2) | 85.4 | 87 | 91.8 |
| biasframes_intent (2) | 56.3 | 85.2 | 87.8 |
| agnews (4) | 77.3 | 80 | 90.5 |
| yahootopics (10) | 53.6 | 57.7 | 72.8 |
| trueteacher (2) | 51.4 | 49.5 | 82.4 |
| spam (2) | 51.8 | 50 | 97.2 |
| wellformedquery (2) | 49.9 | 52.5 | 77.2 |
| manifesto (56) | 5.8 | 18.9 | 39.1 |
| capsotu (21) | 25.2 | 64 | 72.5 |
| mnli_m (2) | 92.4 | nan | 92.7 |
| mnli_mm (2) | 92.4 | nan | 92.5 |
| fevernli (2) | 89 | nan | 89.1 |
| anli_r1 (2) | 79.4 | nan | 80 |
| anli_r2 (2) | 68.4 | nan | 68.4 |
| anli_r3 (2) | 66.2 | nan | 68 |
| wanli (2) | 81.6 | nan | 81.8 |
| lingnli (2) | 88.4 | nan | 88.4 |

## Limitations and bias

The model can only do text classification tasks. Please consult the original DeBERTa paper and the papers for the different datasets for potential biases.

## License

The base model (DeBERTa-v3) is published under the MIT license. The datasets the model was fine-tuned on are published under a diverse set of licenses. The following table provides an overview of the non-NLI datasets used for fine-tuning, with information on licenses, the underlying papers etc.: https://github.com/MoritzLaurer/zeroshot-classifier/blob/main/datasets_overview.csv

## Citation

If you use this model academically, please cite:
```
@misc{laurer_building_2023,
    title = {Building {Efficient} {Universal} {Classifiers} with {Natural} {Language} {Inference}},
    url = {http://arxiv.org/abs/2312.17543},
    doi = {10.48550/arXiv.2312.17543},
    abstract = {Generative Large Language Models (LLMs) have become the mainstream choice for fewshot and zeroshot learning thanks to the universality of text generation. Many users, however, do not need the broad capabilities of generative LLMs when they only want to automate a classification task. Smaller BERT-like models can also learn universal tasks, which allow them to do any text classification task without requiring fine-tuning (zeroshot classification) or to learn new tasks with only a few examples (fewshot), while being significantly more efficient than generative LLMs. This paper (1) explains how Natural Language Inference (NLI) can be used as a universal classification task that follows similar principles as instruction fine-tuning of generative LLMs, (2) provides a step-by-step guide with reusable Jupyter notebooks for building a universal classifier, and (3) shares the resulting universal classifier that is trained on 33 datasets with 389 diverse classes. Parts of the code we share has been used to train our older zeroshot classifiers that have been downloaded more than 55 million times via the Hugging Face Hub as of December 2023. Our new classifier improves zeroshot performance by 9.4\%.},
    urldate = {2024-01-05},
    publisher = {arXiv},
    author = {Laurer, Moritz and van Atteveldt, Wouter and Casas, Andreu and Welbers, Kasper},
    month = dec,
    year = {2023},
    note = {arXiv:2312.17543 [cs]},
    keywords = {Computer Science - Artificial Intelligence, Computer Science - Computation and Language},
}
```

### Ideas for cooperation or questions?

If you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or [LinkedIn](https://www.linkedin.com/in/moritz-laurer/)

### Debugging and issues

Note that DeBERTa-v3 was released on 06.12.21 and older versions of HF Transformers can have issues running the model (e.g. resulting in an issue with the tokenizer). Using Transformers>=4.13 might solve some issues. Also make sure to install sentencepiece to avoid tokenizer errors. Run: `pip install transformers[sentencepiece]` or `pip install sentencepiece`

### Hypotheses used for classification

The hypotheses in the tables below were used to fine-tune the model. Inspecting them can help users get a feeling for which type of hypotheses and tasks the model was trained on. You can formulate your own hypotheses by changing the `hypothesis_template` of the zeroshot pipeline. For example:

```python
from transformers import pipeline

text = "Angela Merkel is a politician in Germany and leader of the CDU"
hypothesis_template = "Merkel is the leader of the party: {}"
classes_verbalized = ["CDU", "SPD", "Greens"]

zeroshot_classifier = pipeline("zero-shot-classification", model="MoritzLaurer/deberta-v3-base-zeroshot-v1.1-all-33")
output = zeroshot_classifier(text, classes_verbalized, hypothesis_template=hypothesis_template, multi_label=False)
print(output)
```

Note that a few rows in the `massive` and `banking77` datasets contain `nan` because some classes were so ambiguous/unclear that I excluded them from the data.
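A further, unofficial trick (an assumption on my part, not part of the card's recipe): since the pipeline builds each hypothesis by inserting the class name into `hypothesis_template`, an identity template of `"{}"` lets you pass complete hypotheses like the ones in the tables below directly as candidate labels:

```python
from transformers import pipeline

# Sketch: reuse full hypotheses from the tables below as candidate labels.
# The "{}" template means each label string is used verbatim as the hypothesis.
zeroshot_classifier = pipeline("zero-shot-classification", model="MoritzLaurer/deberta-v3-base-zeroshot-v1.1-all-33")

text = "how to fix a flat tire"
hypotheses = [
    "This example is a well formed Google query.",
    "This example is not a well formed Google query.",
]
output = zeroshot_classifier(text, hypotheses, hypothesis_template="{}", multi_label=False)
print(output)
```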
#### wellformedquery

| label | hypothesis |
|:---|:---|
| not_well_formed | This example is not a well formed Google query. |
| well_formed | This example is a well formed Google query. |

#### biasframes_sex

| label | hypothesis |
|:---|:---|
| not_sex | This example does not contain allusions to sexual content. |
| sex | This example contains allusions to sexual content. |

#### biasframes_intent

| label | hypothesis |
|:---|:---|
| intent | The intent of this example is to be offensive/disrespectful. |
| not_intent | The intent of this example is not to be offensive/disrespectful. |

#### biasframes_offensive

| label | hypothesis |
|:---|:---|
| not_offensive | This example could not be considered offensive, disrespectful, or toxic. |
| offensive | This example could be considered offensive, disrespectful, or toxic. |

#### financialphrasebank

| label | hypothesis |
|:---|:---|
| negative | The sentiment in this example is negative from an investor's perspective. |
| neutral | The sentiment in this example is neutral from an investor's perspective. |
| positive | The sentiment in this example is positive from an investor's perspective. |

#### rottentomatoes

| label | hypothesis |
|:---|:---|
| negative | The sentiment in this example rotten tomatoes movie review is negative. |
| positive | The sentiment in this example rotten tomatoes movie review is positive. |

#### amazonpolarity

| label | hypothesis |
|:---|:---|
| negative | The sentiment in this example amazon product review is negative. |
| positive | The sentiment in this example amazon product review is positive. |

#### imdb

| label | hypothesis |
|:---|:---|
| negative | The sentiment in this example imdb movie review is negative. |
| positive | The sentiment in this example imdb movie review is positive. |

#### appreviews

| label | hypothesis |
|:---|:---|
| negative | The sentiment in this example app review is negative. |
| positive | The sentiment in this example app review is positive. |

#### yelpreviews

| label | hypothesis |
|:---|:---|
| negative | The sentiment in this example yelp review is negative. |
| positive | The sentiment in this example yelp review is positive. |

#### wikitoxic_toxicaggregated

| label | hypothesis |
|:---|:---|
| not_toxicaggregated | This example wikipedia comment does not contain toxic language. |
| toxicaggregated | This example wikipedia comment contains toxic language. |

#### wikitoxic_obscene

| label | hypothesis |
|:---|:---|
| not_obscene | This example wikipedia comment does not contain obscene language. |
| obscene | This example wikipedia comment contains obscene language. |

#### wikitoxic_threat

| label | hypothesis |
|:---|:---|
| not_threat | This example wikipedia comment does not contain a threat. |
| threat | This example wikipedia comment contains a threat. |

#### wikitoxic_insult

| label | hypothesis |
|:---|:---|
| insult | This example wikipedia comment contains an insult. |
| not_insult | This example wikipedia comment does not contain an insult. |

#### wikitoxic_identityhate

| label | hypothesis |
|:---|:---|
| identityhate | This example wikipedia comment contains identity hate. |
| not_identityhate | This example wikipedia comment does not contain identity hate. |

#### hateoffensive

| label | hypothesis |
|:---|:---|
| hate_speech | This example tweet contains hate speech. |
| neither | This example tweet contains neither offensive language nor hate speech. |
| offensive | This example tweet contains offensive language without hate speech. |

#### hatexplain

| label | hypothesis |
|:---|:---|
| hate_speech | This example text from twitter or gab contains hate speech. |
| neither | This example text from twitter or gab contains neither offensive language nor hate speech. |
| offensive | This example text from twitter or gab contains offensive language without hate speech. |

#### spam

| label | hypothesis |
|:---|:---|
| not_spam | This example sms is not spam. |
| spam | This example sms is spam. |

#### emotiondair

| label | hypothesis |
|:---|:---|
| anger | This example tweet expresses the emotion: anger |
| fear | This example tweet expresses the emotion: fear |
| joy | This example tweet expresses the emotion: joy |
| love | This example tweet expresses the emotion: love |
| sadness | This example tweet expresses the emotion: sadness |
| surprise | This example tweet expresses the emotion: surprise |

#### emocontext

| label | hypothesis |
|:---|:---|
| angry | This example tweet expresses the emotion: anger |
| happy | This example tweet expresses the emotion: happiness |
| others | This example tweet does not express any of the emotions: anger, sadness, or happiness |
| sad | This example tweet expresses the emotion: sadness |

#### empathetic

| label | hypothesis |
|:---|:---|
| afraid | The main emotion of this example dialogue is: afraid |
| angry | The main emotion of this example dialogue is: angry |
| annoyed | The main emotion of this example dialogue is: annoyed |
| anticipating | The main emotion of this example dialogue is: anticipating |
| anxious | The main emotion of this example dialogue is: anxious |
| apprehensive | The main emotion of this example dialogue is: apprehensive |
| ashamed | The main emotion of this example dialogue is: ashamed |
| caring | The main emotion of this example dialogue is: caring |
| confident | The main emotion of this example dialogue is: confident |
| content | The main emotion of this example dialogue is: content |
| devastated | The main emotion of this example dialogue is: devastated |
| disappointed | The main emotion of this example dialogue is: disappointed |
| disgusted | The main emotion of this example dialogue is: disgusted |
| embarrassed | The main emotion of this example dialogue is: embarrassed |
| excited | The main emotion of this example dialogue is: excited |
| faithful | The main emotion of this example dialogue is: faithful |
| furious | The main emotion of this example dialogue is: furious |
| grateful | The main emotion of this example dialogue is: grateful |
| guilty | The main emotion of this example dialogue is: guilty |
| hopeful | The main emotion of this example dialogue is: hopeful |
| impressed | The main emotion of this example dialogue is: impressed |
| jealous | The main emotion of this example dialogue is: jealous |
| joyful | The main emotion of this example dialogue is: joyful |
| lonely | The main emotion of this example dialogue is: lonely |
| nostalgic | The main emotion of this example dialogue is: nostalgic |
| prepared | The main emotion of this example dialogue is: prepared |
| proud | The main emotion of this example dialogue is: proud |
| sad | The main emotion of this example dialogue is: sad |
| sentimental | The main emotion of this example dialogue is: sentimental |
| surprised | The main emotion of this example dialogue is: surprised |
| terrified | The main emotion of this example dialogue is: terrified |
| trusting | The main emotion of this example dialogue is: trusting |

#### agnews

| label | hypothesis |
|:---|:---|
| Business | This example news text is about business news |
| Sci/Tech | This example news text is about science and technology |
| Sports | This example news text is about sports |
| World | This example news text is about world news |

#### yahootopics

| label | hypothesis |
|:---|:---|
| Business & Finance | This example question from the Yahoo Q&A forum is categorized in the topic: Business & Finance |
| Computers & Internet | This example question from the Yahoo Q&A forum is categorized in the topic: Computers & Internet |
| Education & Reference | This example question from the Yahoo Q&A forum is categorized in the topic: Education & Reference |
| Entertainment & Music | This example question from the Yahoo Q&A forum is categorized in the topic: Entertainment & Music |
| Family & Relationships | This example question from the Yahoo Q&A forum is categorized in the topic: Family & Relationships |
| Health | This example question from the Yahoo Q&A forum is categorized in the topic: Health |
| Politics & Government | This example question from the Yahoo Q&A forum is categorized in the topic: Politics & Government |
| Science & Mathematics | This example question from the Yahoo Q&A forum is categorized in the topic: Science & Mathematics |
| Society & Culture | This example question from the Yahoo Q&A forum is categorized in the topic: Society & Culture |
| Sports | This example question from the Yahoo Q&A forum is categorized in the topic: Sports |

#### massive

| label | hypothesis |
|:---|:---|
| alarm_query | The example utterance is a query about alarms. |
| alarm_remove | The intent of this example utterance is to remove an alarm. |
| alarm_set | The intent of the example utterance is to set an alarm. |
| audio_volume_down | The intent of the example utterance is to lower the volume. |
| audio_volume_mute | The intent of this example utterance is to mute the volume. |
| audio_volume_other | The example utterance is related to audio volume. |
| audio_volume_up | The intent of this example utterance is turning the audio volume up. |
| calendar_query | The example utterance is a query about a calendar. |
| calendar_remove | The intent of the example utterance is to remove something from a calendar. |
| calendar_set | The intent of this example utterance is to set something in a calendar. |
| cooking_query | The example utterance is a query about cooking. |
| cooking_recipe | This example utterance is about cooking recipes. |
| datetime_convert | The example utterance is related to date time changes or conversion. |
| datetime_query | The intent of this example utterance is a datetime query. |
| email_addcontact | The intent of this example utterance is adding an email address to contacts. |
| email_query | The example utterance is a query about emails. |
| email_querycontact | The intent of this example utterance is to query contact details. |
| email_sendemail | The intent of the example utterance is to send an email. |
| general_greet | This example utterance is a general greet. |
| general_joke | The intent of the example utterance is to hear a joke. |
| general_quirky | nan |
| iot_cleaning | The intent of the example utterance is for an IoT device to start cleaning. |
| iot_coffee | The intent of this example utterance is for an IoT device to make coffee. |
| iot_hue_lightchange | The intent of this example utterance is changing the light. |
| iot_hue_lightdim | The intent of the example utterance is to dim the lights. |
| iot_hue_lightoff | The example utterance is related to turning the lights off. |
| iot_hue_lighton | The example utterance is related to turning the lights on. |
| iot_hue_lightup | The intent of this example utterance is to brighten lights. |
| iot_wemo_off | The intent of this example utterance is turning an IoT device off. |
| iot_wemo_on | The intent of the example utterance is to turn an IoT device on. |
| lists_createoradd | The example utterance is related to creating or adding to lists. |
| lists_query | The example utterance is a query about a list. |
| lists_remove | The intent of this example utterance is to remove a list or remove something from a list. |
| music_dislikeness | The intent of this example utterance is signalling music dislike. |
| music_likeness | The example utterance is related to liking music. |
| music_query | The example utterance is a query about music. |
| music_settings | The intent of the example utterance is to change music settings. |
| news_query | The example utterance is a query about the news. |
| play_audiobook | The example utterance is related to playing audiobooks. |
| play_game | The intent of this example utterance is to start playing a game. |
| play_music | The intent of this example utterance is for an IoT device to play music. |
| play_podcasts | The example utterance is related to playing podcasts. |
| play_radio | The intent of the example utterance is to play something on the radio. |
| qa_currency | This example utterance is about currencies. |
| qa_definition | The example utterance is a query about a definition. |
| qa_factoid | The example utterance is a factoid question. |
| qa_maths | The example utterance is a question about maths. |
| qa_stock | This example utterance is about stocks. |
| recommendation_events | This example utterance is about event recommendations. |
| recommendation_locations | The intent of this example utterance is receiving recommendations for good locations. |
| recommendation_movies | This example utterance is about movie recommendations. |
| social_post | The example utterance is about social media posts. |
| social_query | The example utterance is a query about a social network. |
| takeaway_order | The intent of this example utterance is to order takeaway food. |
| takeaway_query | This example utterance is about takeaway food. |
| transport_query | The example utterance is a query about transport or travels. |
| transport_taxi | The intent of this example utterance is to get a taxi. |
| transport_ticket | This example utterance is about transport tickets. |
| transport_traffic | This example utterance is about transport or traffic. |
| weather_query | This example utterance is a query about the weather. |
#### banking77

| label | hypothesis |
|:---|:---|
| Refund_not_showing_up | This customer example message is about a refund not showing up. |
| activate_my_card | This banking customer example message is about activating a card. |
| age_limit | This banking customer example message is related to age limits. |
| apple_pay_or_google_pay | This banking customer example message is about apple pay or google pay. |
| atm_support | This banking customer example message requests ATM support. |
| automatic_top_up | This banking customer example message is about automatic top up. |
| balance_not_updated_after_bank_transfer | This banking customer example message is about a balance not updated after a transfer. |
| balance_not_updated_after_cheque_or_cash_deposit | This banking customer example message is about a balance not updated after a cheque or cash deposit. |
| beneficiary_not_allowed | This banking customer example message is related to a beneficiary not being allowed or a failed transfer. |
| cancel_transfer | This banking customer example message is related to the cancellation of a transfer. |
| card_about_to_expire | This banking customer example message is related to the expiration of a card. |
| card_acceptance | This banking customer example message is related to the scope of acceptance of a card. |
| card_arrival | This banking customer example message is about the arrival of a card. |
| card_delivery_estimate | This banking customer example message is about a card delivery estimate or timing. |
| card_linking | nan |
| card_not_working | This banking customer example message is about a card not working. |
| card_payment_fee_charged | This banking customer example message is about a card payment fee. |
| card_payment_not_recognised | This banking customer example message is about a payment the customer does not recognise. |
| card_payment_wrong_exchange_rate | This banking customer example message is about a wrong exchange rate. |
| card_swallowed | This banking customer example message is about a card swallowed by a machine. |
| cash_withdrawal_charge | This banking customer example message is about a cash withdrawal charge. |
| cash_withdrawal_not_recognised | This banking customer example message is about an unrecognised cash withdrawal. |
| change_pin | This banking customer example message is about changing a pin code. |
| compromised_card | This banking customer example message is about a compromised card. |
| contactless_not_working | This banking customer example message is about contactless not working. |
| country_support | This banking customer example message is about country-specific support. |
| declined_card_payment | This banking customer example message is about a declined card payment. |
| declined_cash_withdrawal | This banking customer example message is about a declined cash withdrawal. |
| declined_transfer | This banking customer example message is about a declined transfer. |
| direct_debit_payment_not_recognised | This banking customer example message is about an unrecognised direct debit payment. |
| disposable_card_limits | This banking customer example message is about the limits of disposable cards. |
| edit_personal_details | This banking customer example message is about editing personal details. |
| exchange_charge | This banking customer example message is about exchange rate charges. |
| exchange_rate | This banking customer example message is about exchange rates. |
| exchange_via_app | nan |
| extra_charge_on_statement | This banking customer example message is about an extra charge. |
| failed_transfer | This banking customer example message is about a failed transfer. |
| fiat_currency_support | This banking customer example message is about fiat currency support. |
| get_disposable_virtual_card | This banking customer example message is about getting a disposable virtual card. |
| get_physical_card | nan |
| getting_spare_card | This banking customer example message is about getting a spare card. |
| getting_virtual_card | This banking customer example message is about getting a virtual card. |
| lost_or_stolen_card | This banking customer example message is about a lost or stolen card. |
| lost_or_stolen_phone | This banking customer example message is about a lost or stolen phone. |
| order_physical_card | This banking customer example message is about ordering a card. |
| passcode_forgotten | This banking customer example message is about a forgotten passcode. |
| pending_card_payment | This banking customer example message is about a pending card payment. |
| pending_cash_withdrawal | This banking customer example message is about a pending cash withdrawal. |
| pending_top_up | This banking customer example message is about a pending top up. |
| pending_transfer | This banking customer example message is about a pending transfer. |
| pin_blocked | This banking customer example message is about a blocked pin. |
| receiving_money | This banking customer example message is about receiving money. |
| request_refund | This banking customer example message is about a refund request. |
| reverted_card_payment? | This banking customer example message is about reverting a card payment. |
| supported_cards_and_currencies | nan |
| terminate_account | This banking customer example message is about terminating an account. |
| top_up_by_bank_transfer_charge | nan |
| top_up_by_card_charge | This banking customer example message is about the charge for topping up by card. |
| top_up_by_cash_or_cheque | This banking customer example message is about topping up by cash or cheque. |
| top_up_failed | This banking customer example message is about top up issues or failures. |
| top_up_limits | This banking customer example message is about top up limitations. |
| top_up_reverted | This banking customer example message is about issues with topping up. |
| topping_up_by_card | This banking customer example message is about topping up by card. |
| transaction_charged_twice | This banking customer example message is about a transaction charged twice. |
| transfer_fee_charged | This banking customer example message is about an issue with a transfer fee charge. |
| transfer_into_account | This banking customer example message is about transfers into the customer's own account. |
| transfer_not_received_by_recipient | This banking customer example message is about a transfer that has not arrived yet. |
| transfer_timing | This banking customer example message is about transfer timing. |
| unable_to_verify_identity | This banking customer example message is about an issue with identity verification. |
| verify_my_identity | This banking customer example message is about identity verification. |
| verify_source_of_funds | This banking customer example message is about the source of funds. |
| verify_top_up | This banking customer example message is about verification and top ups. |
| virtual_card_not_working | This banking customer example message is about a virtual card not working. |
| visa_or_mastercard | This banking customer example message is about types of bank cards. |
| why_verify_identity | This banking customer example message questions why identity verification is necessary. |
| wrong_amount_of_cash_received | This banking customer example message is about a wrong amount of cash received. |
| wrong_exchange_rate_for_cash_withdrawal | This banking customer example message is about a wrong exchange rate for a cash withdrawal. |

#### trueteacher

| label | hypothesis |
|:---|:---|
| factually_consistent | The example summary is factually consistent with the full article. |
| factually_inconsistent | The example summary is factually inconsistent with the full article. |

#### capsotu

| label | hypothesis |
|:---|:---|
| Agriculture | This example text from a US presidential speech is about agriculture |
| Civil Rights | This example text from a US presidential speech is about civil rights or minorities or civil liberties |
| Culture | This example text from a US presidential speech is about cultural policy |
| Defense | This example text from a US presidential speech is about defense or military |
| Domestic Commerce | This example text from a US presidential speech is about banking or finance or commerce |
| Education | This example text from a US presidential speech is about education |
| Energy | This example text from a US presidential speech is about energy or electricity or fossil fuels |
| Environment | This example text from a US presidential speech is about the environment or water or waste or pollution |
| Foreign Trade | This example text from a US presidential speech is about foreign trade |
| Government Operations | This example text from a US presidential speech is about government operations or administration |
| Health | This example text from a US presidential speech is about health |
| Housing | This example text from a US presidential speech is about community development or housing issues |
| Immigration | This example text from a US presidential speech is about migration |
| International Affairs | This example text from a US presidential speech is about international affairs or foreign aid |
| Labor | This example text from a US presidential speech is about employment or labour |
| Law and Crime | This example text from a US presidential speech is about law, crime or family issues |
| Macroeconomics | This example text from a US presidential speech is about macroeconomics |
| Public Lands | This example text from a US presidential speech is about public lands or water management |
| Social Welfare | This example text from a US presidential speech is about social welfare |
| Technology | This example text from a US presidential speech is about space or science or technology or communications |
| Transportation | This example text from a US presidential speech is about transportation |

#### manifesto

| label | hypothesis |
|:---|:---|
| Agriculture and Farmers: Positive | This example text from a political party manifesto is positive towards policies for agriculture and farmers |
| Anti-Growth Economy: Positive | This example text from a political party manifesto is in favour of anti-growth politics |
| Anti-Imperialism | This example text from a political party manifesto is anti-imperialistic, for example against controlling other countries and for greater self-government of colonies |
| Centralisation | This example text from a political party manifesto is in favour of political centralisation |
| Civic Mindedness: Positive | This example text from a political party manifesto is positive towards national solidarity, civil society or appeals for public spiritedness or against anti-social attitudes |
| Constitutionalism: Negative | This example text from a political party manifesto is negative towards constitutionalism |
| Constitutionalism: Positive | This example text from a political party manifesto is positive towards constitutionalism and the status quo of the constitution |
| Controlled Economy | This example text from a political party manifesto is supportive of direct government control of the economy, e.g. price control or minimum wages |
| Corporatism/Mixed Economy | This example text from a political party manifesto is positive towards cooperation of government, employers, and trade unions simultaneously |
| Culture: Positive | This example text from a political party manifesto is in favour of cultural policies or leisure facilities, for example museums, libraries or public sport clubs |
| Decentralization | This example text from a political party manifesto is for decentralisation or federalism |
| Democracy | This example text from a political party manifesto favourably mentions democracy or democratic procedures or institutions |
| Economic Goals | This example text from a political party manifesto is a broad/general statement on economic goals without specifics |
| Economic Growth: Positive | This example text from a political party manifesto is supportive of economic growth, for example facilitation of more production or government aid for growth |
| Economic Orthodoxy | This example text from a political party manifesto is for economic orthodoxy, for example reduction of budget deficits, thrift or a strong currency |
| Economic Planning | This example text from a political party manifesto is positive towards government economic planning, e.g. policy plans or strategies |
| Education Expansion | This example text from a political party manifesto is about the need to expand/improve policy on education |
| Education Limitation | This example text from a political party manifesto is sceptical towards state expenditure on education, for example in favour of study fees or private schools |
| Environmental Protection | This example text from a political party manifesto is in favour of environmental protection, e.g. fighting climate change or 'green' policies or preservation of natural resources or animal rights |
protection of underprivileged groups or fair distribution of resources | | European Community/Union: Negative | This example text from a political party manifesto negatively mentions the EU or European Community | | European Community/Union: Positive | This example text from a political party manifesto is positive towards the EU or European Community, for example EU expansion and integration | | Foreign Special Relationships: Negative | This example text from a political party manifesto is negative towards particular countries | | Foreign Special Relationships: Positive | This example text from a political party manifesto is positive towards particular countries | | Free Market Economy | This example text from a political party manifesto is in favour of a free market economy and capitalism | | Freedom and Human Rights | This example text from a political party manifesto is in favour of freedom and human rights, for example freedom of speech, assembly or against state coercion or for individualism | | Governmental and Administrative Efficiency | This example text from a political party manifesto is in favour of efficiency in government/administration, for example by restructuring civil service or improving bureaucracy | | Incentives: Positive | This example text from a political party manifesto is favourable towards supply side economic policies supporting businesses, for example for incentives like subsidies or tax breaks | | Internationalism: Negative | This example text from a political party manifesto is sceptical of internationalism, for example negative towards international cooperation, in favour of national sovereignty and unilaterialism | | Internationalism: Positive | This example text from a political party manifesto is in favour of international cooperation with other countries, for example mentions the need for aid to developing countries, or global governance | | Keynesian Demand Management | This example text from a political party manifesto is for keynesian demand management and demand side economic policies | | Labour Groups: Negative | This example text from a political party manifesto is negative towards labour groups and unions | | Labour Groups: Positive | This example text from a political party manifesto is positive towards labour groups, for example for good working conditions, fair wages or unions | | Law and Order: Positive | This example text from a political party manifesto is positive towards law and order and strict law enforcement | | Market Regulation | This example text from a political party manifesto is supports market regulation for a fair and open market, for example for consumer protection or for increased competition or for social market economy | | Marxist Analysis | This example text from a political party manifesto is positive towards Marxist-Leninist ideas or uses specific Marxist terminology | | Middle Class and Professional Groups | This example text from a political party manifesto favourably references the middle class, e.g. 
white colar groups or the service sector | | Military: Negative | This example text from a political party manifesto is negative towards the military, for example for decreasing military spending or disarmament | | Military: Positive | This example text from a political party manifesto is positive towards the military, for example for military spending or rearmament or military treaty obligations | | Multiculturalism: Negative | This example text from a political party manifesto is sceptical towards multiculturalism, or for cultural integration or appeals to cultural homogeneity in society | | Multiculturalism: Positive | This example text from a political party manifesto favourably mentions cultural diversity, for example for freedom of religion or linguistic heritages | | National Way of Life: Negative | This example text from a political party manifesto unfavourably mentions a country's nation and history, for example sceptical towards patriotism or national pride | | National Way of Life: Positive | This example text from a political party manifesto is positive towards the national way of life and history, for example pride of citizenship or appeals to patriotism | | Nationalisation | This example text from a political party manifesto is positive towards government ownership of industries or land or for economic nationalisation | | Non-economic Demographic Groups | This example text from a political party manifesto favourably mentions non-economic demographic groups like women, students or specific age groups | | Peace | This example text from a political party manifesto is positive towards peace and peaceful means of solving crises, for example in favour of negotiations and ending wars | | Political Authority | This example text from a political party manifesto mentions the speaker's competence to govern or other party's lack of such competence, or favourably mentions a strong/stable government | | Political Corruption | This example text from a political party manifesto is negative towards political corruption or abuse of political/bureaucratic power | | Protectionism: Negative | This example text from a political party manifesto is negative towards protectionism, in favour of free trade | | Protectionism: Positive | This example text from a political party manifesto is in favour of protectionism, for example tariffs, export subsidies | | Technology and Infrastructure: Positive | This example text from a political party manifesto is about technology and infrastructure, e.g. the importance of modernisation of industry, or supportive of public spending on infrastructure/tech | | Traditional Morality: Negative | This example text from a political party manifesto is negative towards traditional morality, for example against religious moral values, for divorce or abortion, for modern families or separation of church and state | | Traditional Morality: Positive | This example text from a political party manifesto is favourable towards traditional or religious values, for example for censorship of immoral behavour, for traditional family values or religious institutions | | Underprivileged Minority Groups | This example text from a political party manifesto favourably mentions underprivileged minorities, for example handicapped, homosexuals or immigrants | | Welfare State Expansion | This example text from a political party manifesto is positive towards the welfare state, e.g. 
health care, pensions or social housing | | Welfare State Limitation | This example text from a political party manifesto is for limiting the welfare state, for example public funding for social services or social security, e.g. private care before state care |
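These label-hypothesis pairs are consumed by NLI-based zero-shot classifiers. As a minimal, hedged sketch (assumption: `facebook/bart-large-mnli` is a stand-in NLI model, not necessarily the model this card describes), the banking77 template above can be used with the `transformers` zero-shot pipeline:

```python
from transformers import pipeline

# Any NLI-based zero-shot model works here; bart-large-mnli is a common stand-in.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

text = "I topped up my account but the balance has not been verified yet."

# Candidate labels are the hypothesis fragments from the banking77 table above.
candidate_labels = ["verification and top ups", "a virtual card not working", "types of bank cards"]

# The hypothesis_template mirrors the table: each candidate label is inserted into {}.
result = classifier(
    text,
    candidate_labels,
    hypothesis_template="This banking customer example message is about {}.",
)
print(result["labels"][0])  # highest-scoring label
```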
TheMistoAI/MistoLine
TheMistoAI
"2024-05-17T12:17:27Z"
59,414
381
diffusers
[ "diffusers", "art", "stable diffusion", "ControlNet", "SDXL", "Diffusion-XL", "text-to-image", "arxiv:2302.05543", "license:openrail++", "region:us" ]
text-to-image
"2024-05-07T10:15:16Z"
---
license: openrail++
tags:
- art
- stable diffusion
- ControlNet
- SDXL
- Diffusion-XL
pipeline_tag: text-to-image
---

# MistoLine

## Control Every Line!

![Intro Image](assets/intro.png)

[GitHub Repo](https://github.com/TheMistoAI/MistoLine)

## NEWS!!!!! The Anyline preprocessor is released!

[Anyline Repo](https://github.com/TheMistoAI/ComfyUI-Anyline)

**MistoLine: A Versatile and Robust SDXL-ControlNet Model for Adaptable Line Art Conditioning.**

MistoLine is an SDXL-ControlNet model that can adapt to any type of line art input, demonstrating high accuracy and excellent stability. It can generate high-quality images (with a short side greater than 1024px) based on user-provided line art of various types, including hand-drawn sketches, different ControlNet line preprocessors, and model-generated outlines. MistoLine eliminates the need to select different ControlNet models for different line preprocessors, as it exhibits strong generalization capabilities across diverse line art conditions.

We developed MistoLine by employing a novel line preprocessing algorithm, **[Anyline](https://github.com/TheMistoAI/ComfyUI-Anyline)**, and retraining the ControlNet model based on the UNet of stabilityai/stable-diffusion-xl-base-1.0, along with innovations in large model training engineering. MistoLine shows superior performance across different types of line art inputs, surpassing existing ControlNet models in detail restoration, prompt alignment, and stability, particularly in more complex scenarios.

MistoLine maintains consistency with the ControlNet architecture released by @lllyasviel, as illustrated in the following schematic diagrams:

![ControlNet architecture](assets/controlnet_1.png)
![ControlNet architecture](assets/controlnet_2.png)

*Reference: https://github.com/lllyasviel/ControlNet*

More information about ControlNet can be found in the following references:
https://github.com/lllyasviel/ControlNet
https://huggingface.co/docs/diffusers/main/en/api/pipelines/controlnet_sdxl

The model is compatible with most SDXL models, except for PlaygroundV2.5, CosXL, and possibly SDXL-Lightning. It can be used in conjunction with LCM and other ControlNet models.

The following usage of this model is not allowed:
* Violating laws and regulations
* Harming or exploiting minors
* Creating and spreading false information
* Infringing on others' privacy
* Defaming or harassing others
* Automated decision-making that harms others' legal rights
* Discrimination based on social behavior or personal characteristics
* Exploiting the vulnerabilities of specific groups to mislead their behavior
* Discrimination based on legally protected characteristics
* Providing medical advice and diagnostic results
* Improperly generating and using information for purposes such as law enforcement and immigration

If you use or distribute this model for commercial purposes, you must comply with the following conditions:
1. Clearly acknowledge the contribution of TheMisto.ai to this model in the documentation, website, or other prominent and visible locations of your product. Example: "This product uses the MistoLine-SDXL-ControlNet developed by TheMisto.ai."
2. If your product includes about screens, readme files, or other similar display areas, you must include the above attribution information in those areas.
3. If your product does not have the aforementioned areas, you must include the attribution information in other reasonable locations within the product to ensure that end-users can notice it.
4. You must not imply in any way that TheMisto.ai endorses or promotes your product. The use of the attribution information is solely to indicate the origin of this model.

If you have any questions about how to provide attribution in specific cases, please contact info@themisto.ai.

The model output is not censored and the authors do not endorse the opinions in the generated content. Use at your own risk.

## Apply with Different Line Preprocessors

![preprocessors](assets/preprocessors.png)

## Compare with Other ControlNets

![comparison](assets/comparison.png)

## Application Examples

### Sketch Rendering

*The following case only utilized MistoLine as the ControlNet:*

![Sketch Rendering](assets/sketch_rendering.png)

### Model Rendering

*The following case only utilized Anyline as the preprocessor and MistoLine as the ControlNet.*

![Model Rendering](assets/model_rendering.png)

## ComfyUI Recommended Parameters

```
sampler steps: 30
CFG: 7.0
sampler_name: dpmpp_2m_sde
scheduler: karras
denoise: 0.93
controlnet_strength: 1.0
start_percent: 0.0
end_percent: 0.9
```

## Diffusers pipeline

Make sure to first install the libraries:

```
pip install accelerate transformers safetensors opencv-python diffusers
```

And then we're ready to go:

```
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline, AutoencoderKL
from diffusers.utils import load_image
from PIL import Image
import torch
import numpy as np
import cv2

prompt = "aerial view, a futuristic research complex in a bright foggy jungle, hard lighting"
negative_prompt = 'low quality, bad quality, sketches'

image = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/hf-logo.png")

controlnet_conditioning_scale = 0.5

controlnet = ControlNetModel.from_pretrained(
    "TheMistoAI/MistoLine",
    torch_dtype=torch.float16,
    variant="fp16",
)
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    vae=vae,
    torch_dtype=torch.float16,
)
pipe.enable_model_cpu_offload()

# extract a Canny line map to use as the conditioning image
image = np.array(image)
image = cv2.Canny(image, 100, 200)
image = image[:, :, None]
image = np.concatenate([image, image, image], axis=2)
image = Image.fromarray(image)

images = pipe(
    prompt,
    negative_prompt=negative_prompt,
    image=image,
    controlnet_conditioning_scale=controlnet_conditioning_scale,
).images

images[0].save("hug_lab.png")
```

## Checkpoints

* mistoLine_rank256.safetensors: general-usage version, for ComfyUI and AUTOMATIC1111-WebUI.
* mistoLine_fp16.safetensors: FP16 weights, for ComfyUI and AUTOMATIC1111-WebUI.

## !!! mistoLine_rank256.safetensors performs better than mistoLine_fp16.safetensors !!!

## ComfyUI Usage

![ComfyUI](assets/comfyui.png)

## Convenient download link for Mainland China:

Link: https://pan.baidu.com/s/1DbZWmGJ40Uzr3Iz9RNBG_w?pwd=8mzs
Extraction code: 8mzs

## Citation

```
@misc{zhang2023adding,
  title={Adding Conditional Control to Text-to-Image Diffusion Models},
  author={Lvmin Zhang and Anyi Rao and Maneesh Agrawala},
  year={2023},
  eprint={2302.05543},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```
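To illustrate the preprocessor-agnostic claim above, here is a minimal, hedged sketch that swaps the Canny step in the Diffusers example for a lineart detector from the community `controlnet_aux` package (an assumption on our part: this package is not among the card's stated requirements, and any line extractor producing a clean line map should work similarly):

```python
# pip install controlnet_aux
from controlnet_aux import LineartDetector
from diffusers.utils import load_image

# Pretrained lineart annotator weights published under lllyasviel/Annotators
lineart = LineartDetector.from_pretrained("lllyasviel/Annotators")

source = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/hf-logo.png")

# The resulting line map can be passed as `image=` to the MistoLine pipeline above
# in place of the Canny map.
control_image = lineart(source)
```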
defog/sqlcoder-7b-2
defog
"2024-02-12T14:06:11Z"
59,370
276
transformers
[ "transformers", "safetensors", "gguf", "llama", "text-generation", "license:cc-by-sa-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-02-05T14:36:51Z"
---
license: cc-by-sa-4.0
library_name: transformers
pipeline_tag: text-generation
---

# Update notice

The model weights were updated at 7 AM UTC on Feb 7, 2024. The new model weights lead to a much more performant model – particularly for joins. If you downloaded the model before that, please redownload the weights for best performance.

# Model Card for SQLCoder-7B-2

A capable large language model for natural language to SQL generation.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/603bbad3fd770a9997b57cb6/AYUE2y14vy2XkD9MZpScu.png)

## Model Details

### Model Description

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

- **Developed by:** [Defog, Inc](https://defog.ai)
- **Model type:** Text to SQL
- **License:** CC-BY-SA-4.0
- **Finetuned from model:** CodeLlama-7B

### Model Sources

- [**HuggingFace**](https://huggingface.co/defog/sqlcoder-70b-alpha)
- [**GitHub**](https://github.com/defog-ai/sqlcoder)
- [**Demo**](https://defog.ai/sqlcoder-demo/)

## Uses

This model is intended to be used by non-technical users to understand data inside their SQL databases. It is meant as an analytics tool, and not as a database admin tool.

This model has not been trained to reject malicious requests from users with write access to databases, and should only be used by users with read-only access.

## How to Get Started with the Model

Use the code [here](https://github.com/defog-ai/sqlcoder/blob/main/inference.py) to get started with the model.

## Prompt

Please use the following prompt for optimal results. Remember to use `do_sample=False` and `num_beams=4`.

```
### Task
Generate a SQL query to answer [QUESTION]{user_question}[/QUESTION]

### Database Schema
The query will run on a database with the following schema:
{table_metadata_string_DDL_statements}

### Answer
Given the database schema, here is the SQL query that [QUESTION]{user_question}[/QUESTION]
[SQL]
```

## Evaluation

This model was evaluated on [SQL-Eval](https://github.com/defog-ai/sql-eval), a PostgreSQL-based evaluation framework developed by Defog for testing and alignment of model capabilities.

You can read more about the methodology behind SQLEval [here](https://defog.ai/blog/open-sourcing-sqleval/).

### Results

We classified each generated question into one of 6 categories. The table displays the percentage of questions answered correctly by each model, broken down by category.

| | date | group_by | order_by | ratio | join | where |
| -------------- | ---- | -------- | -------- | ----- | ---- | ----- |
| sqlcoder-70b | 96 | 91.4 | 97.1 | 85.7 | 97.1 | 91.4 |
| sqlcoder-7b-2 | 96 | 91.4 | 94.3 | 91.4 | 94.3 | 77.1 |
| sqlcoder-34b | 80 | 94.3 | 85.7 | 77.1 | 85.7 | 80 |
| gpt-4 | 72 | 94.3 | 97.1 | 80 | 91.4 | 80 |
| gpt-4-turbo | 76 | 91.4 | 91.4 | 62.8 | 88.6 | 77.1 |
| natural-sql-7b | 56 | 88.6 | 85.7 | 60 | 88.6 | 80 |
| sqlcoder-7b | 64 | 82.9 | 74.3 | 54.3 | 74.3 | 74.3 |
| gpt-3.5 | 72 | 77.1 | 82.8 | 34.3 | 65.7 | 71.4 |
| claude-2 | 52 | 71.4 | 74.3 | 57.1 | 65.7 | 62.9 |

## Model Card Contact

Contact us on X at [@defogdata](https://twitter.com/defogdata), or on email at [founders@defog.ai](mailto:founders@defog.ai)
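For reference, a minimal sketch of running the recommended prompt with plain `transformers` (hedged: the schema string and question below are illustrative placeholders, and the official inference script linked above remains the canonical entry point):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "defog/sqlcoder-7b-2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map="auto")

user_question = "How many customers placed an order in 2023?"  # illustrative
table_metadata = "CREATE TABLE orders (id INT, customer_id INT, created_at DATE);"  # illustrative DDL

prompt = f"""### Task
Generate a SQL query to answer [QUESTION]{user_question}[/QUESTION]

### Database Schema
The query will run on a database with the following schema:
{table_metadata}

### Answer
Given the database schema, here is the SQL query that [QUESTION]{user_question}[/QUESTION]
[SQL]"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# Beam search without sampling, as the prompt section recommends
outputs = model.generate(**inputs, do_sample=False, num_beams=4, max_new_tokens=200)
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```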
THUDM/CogVideoX-2b
THUDM
"2024-09-05T06:55:25Z"
59,325
242
diffusers
[ "diffusers", "safetensors", "cogvideox", "video-generation", "thudm", "text-to-video", "en", "arxiv:2408.06072", "license:apache-2.0", "diffusers:CogVideoXPipeline", "region:us" ]
text-to-video
"2024-08-05T14:13:31Z"
---
license: apache-2.0
language:
- en
tags:
- cogvideox
- video-generation
- thudm
- text-to-video
inference: false
---

# CogVideoX-2B

<p style="text-align: center;">
<div align="center">
<img src=https://github.com/THUDM/CogVideo/raw/main/resources/logo.svg width="50%"/>
</div>
<p align="center">
<a href="https://huggingface.co/THUDM/CogVideoX-2b/blob/main/README_zh.md">📄 Read in Chinese</a> |
<a href="https://huggingface.co/spaces/THUDM/CogVideoX-2B-Space">🤗 Huggingface Space</a> |
<a href="https://github.com/THUDM/CogVideo">🌐 Github</a> |
<a href="https://arxiv.org/pdf/2408.06072">📜 arxiv</a>
</p>
<p align="center">
📍 Visit <a href="https://chatglm.cn/video?lang=en?fr=osm_cogvideo">QingYing</a> and the <a href="https://open.bigmodel.cn/?utm_campaign=open&_channel_track_key=OWTVNma9">API Platform</a> to experience commercial video generation models.
</p>

## Demo Show

<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Video Gallery with Captions</title>
<style>
.video-container { display: flex; flex-wrap: wrap; justify-content: space-around; }
.video-item { width: 45%; margin-bottom: 20px; transition: transform 0.3s; }
.video-item:hover { transform: scale(1.1); }
.caption { text-align: center; margin-top: 10px; font-size: 11px; }
</style>
</head>
<body>
<div class="video-container">
<div class="video-item">
<video width="100%" controls>
<source src="https://github.com/THUDM/CogVideo/raw/main/resources/videos/1.mp4" type="video/mp4">
</video>
<div class="caption">A detailed wooden toy ship with intricately carved masts and sails is seen gliding smoothly over a plush, blue carpet that mimics the waves of the sea. The ship's hull is painted a rich brown, with tiny windows. The carpet, soft and textured, provides a perfect backdrop, resembling an oceanic expanse. Surrounding the ship are various other toys and children's items, hinting at a playful environment. The scene captures the innocence and imagination of childhood, with the toy ship's journey symbolizing endless adventures in a whimsical, indoor setting.</div>
</div>
<div class="video-item">
<video width="100%" controls>
<source src="https://github.com/THUDM/CogVideo/raw/main/resources/videos/2.mp4" type="video/mp4">
</video>
<div class="caption">The camera follows behind a white vintage SUV with a black roof rack as it speeds up a steep dirt road surrounded by pine trees on a steep mountain slope, dust kicks up from its tires, the sunlight shines on the SUV as it speeds along the dirt road, casting a warm glow over the scene. The dirt road curves gently into the distance, with no other cars or vehicles in sight. The trees on either side of the road are redwoods, with patches of greenery scattered throughout. The car is seen from the rear following the curve with ease, making it seem as if it is on a rugged drive through the rugged terrain. The dirt road itself is surrounded by steep hills and mountains, with a clear blue sky above with wispy clouds.</div>
</div>
<div class="video-item">
<video width="100%" controls>
<source src="https://github.com/THUDM/CogVideo/raw/main/resources/videos/3.mp4" type="video/mp4">
</video>
<div class="caption">A street artist, clad in a worn-out denim jacket and a colorful bandana, stands before a vast concrete wall in the heart of the city, holding a can of spray paint, spray-painting a colorful bird on a mottled wall.</div>
</div>
<div class="video-item">
<video width="100%" controls>
<source src="https://github.com/THUDM/CogVideo/raw/main/resources/videos/4.mp4" type="video/mp4">
</video>
<div class="caption">In the haunting backdrop of a war-torn city, where ruins and crumbled walls tell a story of devastation, a poignant close-up frames a young girl. Her face is smudged with ash, a silent testament to the chaos around her. Her eyes glisten with a mix of sorrow and resilience, capturing the raw emotion of a world that has lost its innocence to the ravages of conflict.</div>
</div>
</div>
</body>
</html>

## Model Introduction

CogVideoX is an open-source version of the video generation model originating from [QingYing](https://chatglm.cn/video?lang=en?fr=osm_cogvideo). The table below lists the video generation models we currently offer, along with their foundational information.

<table style="border-collapse: collapse; width: 100%;">
  <tr>
    <th style="text-align: center;">Model Name</th>
    <th style="text-align: center;">CogVideoX-2B (This Repository)</th>
    <th style="text-align: center;">CogVideoX-5B</th>
  </tr>
  <tr>
    <td style="text-align: center;">Model Description</td>
    <td style="text-align: center;">Entry-level model, balancing compatibility. Low cost for running and secondary development.</td>
    <td style="text-align: center;">Larger model with higher video generation quality and better visual effects.</td>
  </tr>
  <tr>
    <td style="text-align: center;">Inference Precision</td>
    <td style="text-align: center;"><b>FP16* (Recommended)</b>, BF16, FP32, FP8*, INT8, no support for INT4</td>
    <td style="text-align: center;"><b>BF16 (Recommended)</b>, FP16, FP32, FP8*, INT8, no support for INT4</td>
  </tr>
  <tr>
    <td style="text-align: center;">Single GPU VRAM Consumption</td>
    <td style="text-align: center;"><a href="https://github.com/THUDM/SwissArmyTransformer">SAT</a> FP16: 18GB<br><b>diffusers FP16: starting from 4GB*</b><br><b>diffusers INT8 (torchao): starting from 3.6GB*</b></td>
    <td style="text-align: center;"><a href="https://github.com/THUDM/SwissArmyTransformer">SAT</a> BF16: 26GB<br><b>diffusers BF16: starting from 5GB*</b><br><b>diffusers INT8 (torchao): starting from 4.4GB*</b></td>
  </tr>
  <tr>
    <td style="text-align: center;">Multi-GPU Inference VRAM Consumption</td>
    <td style="text-align: center;"><b>FP16: 10GB* using diffusers</b></td>
    <td style="text-align: center;"><b>BF16: 15GB* using diffusers</b></td>
  </tr>
  <tr>
    <td style="text-align: center;">Inference Speed<br>(Step = 50, FP/BF16)</td>
    <td style="text-align: center;">Single A100: ~90 seconds<br>Single H100: ~45 seconds</td>
    <td style="text-align: center;">Single A100: ~180 seconds<br>Single H100: ~90 seconds</td>
  </tr>
  <tr>
    <td style="text-align: center;">Fine-tuning Precision</td>
    <td style="text-align: center;"><b>FP16</b></td>
    <td style="text-align: center;"><b>BF16</b></td>
  </tr>
  <tr>
    <td style="text-align: center;">Fine-tuning VRAM Consumption (per GPU)</td>
    <td style="text-align: center;">47 GB (bs=1, LORA)<br>61 GB (bs=2, LORA)<br>62 GB (bs=1, SFT)</td>
    <td style="text-align: center;">63 GB (bs=1, LORA)<br>80 GB (bs=2, LORA)<br>75 GB (bs=1, SFT)</td>
  </tr>
  <tr>
    <td style="text-align: center;">Prompt Language</td>
    <td colspan="2" style="text-align: center;">English*</td>
  </tr>
  <tr>
    <td style="text-align: center;">Prompt Length Limit</td>
    <td colspan="2" style="text-align: center;">226 Tokens</td>
  </tr>
  <tr>
    <td style="text-align: center;">Video Length</td>
    <td colspan="2" style="text-align: center;">6 Seconds</td>
  </tr>
  <tr>
    <td style="text-align: center;">Frame Rate</td>
    <td colspan="2" style="text-align: center;">8 Frames per Second</td>
  </tr>
  <tr>
    <td style="text-align: center;">Video Resolution</td>
    <td colspan="2" style="text-align: center;">720 x 480, no support for other resolutions (including fine-tuning)</td>
  </tr>
  <tr>
    <td style="text-align: center;">Positional Encoding</td>
    <td style="text-align: center;">3d_sincos_pos_embed</td>
    <td style="text-align: center;">3d_rope_pos_embed</td>
  </tr>
</table>

**Data Explanation**

+ When testing using the `diffusers` library, all optimizations provided by the `diffusers` library were enabled. This solution has not been tested for actual VRAM/memory usage on devices other than **NVIDIA A100 / H100**. Generally, this solution can be adapted to all devices with **NVIDIA Ampere architecture** and above. If the optimizations are disabled, VRAM usage will increase significantly, with peak VRAM usage about 3 times higher than the table shows, but speed will increase by 3-4 times. You can selectively disable some optimizations, including:

```
pipe.enable_model_cpu_offload()
pipe.enable_sequential_cpu_offload()
pipe.vae.enable_slicing()
pipe.vae.enable_tiling()
```

+ When performing multi-GPU inference, the `enable_model_cpu_offload()` optimization needs to be disabled.
+ Using INT8 models will reduce inference speed. This is to ensure that GPUs with lower VRAM can perform inference normally while maintaining minimal video quality loss, though inference speed will decrease significantly.
+ The 2B model is trained with `FP16` precision, and the 5B model is trained with `BF16` precision. We recommend using the precision the model was trained with for inference.
+ [PytorchAO](https://github.com/pytorch/ao) and [Optimum-quanto](https://github.com/huggingface/optimum-quanto/) can be used to quantize the text encoder, Transformer, and VAE modules to reduce CogVideoX's memory requirements. This makes it possible to run the model on a free T4 Colab or GPUs with smaller VRAM! It is also worth noting that TorchAO quantization is fully compatible with `torch.compile`, which can significantly improve inference speed. `FP8` precision must be used on devices with `NVIDIA H100` or above, which requires installing the `torch`, `torchao`, `diffusers`, and `accelerate` Python packages from source. `CUDA 12.4` is recommended.
+ The inference speed test also used the above VRAM optimization scheme. Without VRAM optimization, inference speed increases by about 10%. Only the `diffusers` version of the model supports quantization.
+ The model only supports English input; other languages can be translated into English during refinement by a large model.

**Note**

+ Use [SAT](https://github.com/THUDM/SwissArmyTransformer) for inference and fine-tuning of SAT-version models. Feel free to visit our GitHub for more information.

## Quick Start 🤗

This model supports deployment using the huggingface diffusers library. You can deploy it by following these steps.

**We recommend that you visit our [GitHub](https://github.com/THUDM/CogVideo) and check out the relevant prompt optimizations and conversions to get a better experience.**

1. Install the required dependencies

```shell
# diffusers>=0.30.1
# transformers>=4.44.0
# accelerate>=0.33.0 (suggest install from source)
# imageio-ffmpeg>=0.5.1
pip install --upgrade transformers accelerate diffusers imageio-ffmpeg
```

2. Run the code

```python
import torch
from diffusers import CogVideoXPipeline
from diffusers.utils import export_to_video

prompt = "A panda, dressed in a small, red jacket and a tiny hat, sits on a wooden stool in a serene bamboo forest. The panda's fluffy paws strum a miniature acoustic guitar, producing soft, melodic tunes. Nearby, a few other pandas gather, watching curiously and some clapping in rhythm. Sunlight filters through the tall bamboo, casting a gentle glow on the scene. The panda's face is expressive, showing concentration and joy as it plays. The background includes a small, flowing stream and vibrant green foliage, enhancing the peaceful and magical atmosphere of this unique musical performance."

pipe = CogVideoXPipeline.from_pretrained(
    "THUDM/CogVideoX-2b",
    torch_dtype=torch.float16
)

pipe.enable_model_cpu_offload()
pipe.enable_sequential_cpu_offload()
pipe.vae.enable_slicing()
pipe.vae.enable_tiling()

video = pipe(
    prompt=prompt,
    num_videos_per_prompt=1,
    num_inference_steps=50,
    num_frames=49,
    guidance_scale=6,
    generator=torch.Generator(device="cuda").manual_seed(42),
).frames[0]

export_to_video(video, "output.mp4", fps=8)
```

## Quantized Inference

[PytorchAO](https://github.com/pytorch/ao) and [Optimum-quanto](https://github.com/huggingface/optimum-quanto/) can be used to quantize the text encoder, Transformer and VAE modules to lower the memory requirement of CogVideoX. This makes it possible to run the model on a free-tier T4 Colab or smaller-VRAM GPUs as well! It is also worth noting that TorchAO quantization is fully compatible with `torch.compile`, which allows for much faster inference speed.

```diff
# To get started, PytorchAO needs to be installed from the GitHub source and PyTorch Nightly.
# Source and nightly installation is only required until the next release.

import torch
from diffusers import AutoencoderKLCogVideoX, CogVideoXTransformer3DModel, CogVideoXPipeline
from diffusers.utils import export_to_video
+ from transformers import T5EncoderModel
+ from torchao.quantization import quantize_, int8_weight_only, int8_dynamic_activation_int8_weight

+ quantization = int8_weight_only

+ text_encoder = T5EncoderModel.from_pretrained("THUDM/CogVideoX-2b", subfolder="text_encoder", torch_dtype=torch.bfloat16)
+ quantize_(text_encoder, quantization())

+ transformer = CogVideoXTransformer3DModel.from_pretrained("THUDM/CogVideoX-2b", subfolder="transformer", torch_dtype=torch.bfloat16)
+ quantize_(transformer, quantization())

+ vae = AutoencoderKLCogVideoX.from_pretrained("THUDM/CogVideoX-2b", subfolder="vae", torch_dtype=torch.bfloat16)
+ quantize_(vae, quantization())

# Create pipeline and run inference
pipe = CogVideoXPipeline.from_pretrained(
    "THUDM/CogVideoX-2b",
+ text_encoder=text_encoder,
+ transformer=transformer,
+ vae=vae,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()
pipe.vae.enable_tiling()

prompt = "A panda, dressed in a small, red jacket and a tiny hat, sits on a wooden stool in a serene bamboo forest. The panda's fluffy paws strum a miniature acoustic guitar, producing soft, melodic tunes. Nearby, a few other pandas gather, watching curiously and some clapping in rhythm. Sunlight filters through the tall bamboo, casting a gentle glow on the scene. The panda's face is expressive, showing concentration and joy as it plays. The background includes a small, flowing stream and vibrant green foliage, enhancing the peaceful and magical atmosphere of this unique musical performance."

video = pipe(
    prompt=prompt,
    num_videos_per_prompt=1,
    num_inference_steps=50,
    num_frames=49,
    guidance_scale=6,
    generator=torch.Generator(device="cuda").manual_seed(42),
).frames[0]

export_to_video(video, "output.mp4", fps=8)
```

Additionally, the models can be serialized and stored in a quantized datatype to save disk space when using PytorchAO. Find examples and benchmarks at these links:
- [torchao](https://gist.github.com/a-r-r-o-w/4d9732d17412888c885480c6521a9897)
- [quanto](https://gist.github.com/a-r-r-o-w/31be62828b00a9292821b85c1017effa)

## Explore the Model

Welcome to our [github](https://github.com/THUDM/CogVideo), where you will find:

1. More detailed technical details and code explanations.
2. Optimization and conversion of prompts.
3. Inference and fine-tuning of SAT-version models, and even pre-release versions.
4. Project update logs and more interactive opportunities.
5. The CogVideoX toolchain to help you better use the model.
6. INT8 model inference code support.

## Model License

The CogVideoX-2B model (including its corresponding Transformers module and VAE module) is released under the [Apache 2.0 License](LICENSE).

The CogVideoX-5B model (Transformers module) is released under the [CogVideoX LICENSE](https://huggingface.co/THUDM/CogVideoX-5b/blob/main/LICENSE).

## Citation

```
@article{yang2024cogvideox,
  title={CogVideoX: Text-to-Video Diffusion Models with An Expert Transformer},
  author={Yang, Zhuoyi and Teng, Jiayan and Zheng, Wendi and Ding, Ming and Huang, Shiyu and Xu, Jiazheng and Yang, Yuanming and Hong, Wenyi and Zhang, Xiaohan and Feng, Guanyu and others},
  journal={arXiv preprint arXiv:2408.06072},
  year={2024}
}
```
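As a hedged sketch of the `torch.compile` speed-up mentioned above (the compile mode below is an assumption; tune it for your setup, and expect a one-time compilation cost on the first call):

```python
import torch
from diffusers import CogVideoXPipeline

pipe = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-2b", torch_dtype=torch.float16).to("cuda")

# Compile the transformer, the main denoising workhorse; subsequent pipeline
# calls reuse the compiled graph and run noticeably faster.
pipe.transformer = torch.compile(pipe.transformer, mode="max-autotune", fullgraph=True)
```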
aubmindlab/bert-base-arabert
aubmindlab
"2023-08-03T12:32:51Z"
59,242
24
transformers
[ "transformers", "pytorch", "tf", "jax", "safetensors", "bert", "fill-mask", "ar", "arxiv:2003.00104", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2022-03-02T23:29:05Z"
---
language: ar
datasets:
- wikipedia
- Osian
- 1.5B-Arabic-Corpus
- oscar-arabic-unshuffled
- Assafir(private)
widget:
- text: " عاصم +ة لبنان هي [MASK] ."
---

# !!! A newer version of this model is available !!! [AraBERTv2](https://huggingface.co/aubmindlab/bert-base-arabertv2)

# AraBERT v1 & v2: Pre-training BERT for Arabic Language Understanding

<img src="https://raw.githubusercontent.com/aub-mind/arabert/master/arabert_logo.png" width="100" align="left"/>

**AraBERT** is an Arabic pretrained language model based on [Google's BERT architecture](https://github.com/google-research/bert). AraBERT uses the same BERT-Base config. More details are available in the [AraBERT Paper](https://arxiv.org/abs/2003.00104) and in the [AraBERT Meetup](https://github.com/WissamAntoun/pydata_khobar_meetup)

There are two versions of the model, AraBERTv0.1 and AraBERTv1, the difference being that AraBERTv1 uses pre-segmented text where prefixes and suffixes were split using the [Farasa Segmenter](http://alt.qcri.org/farasa/segmenter.html).

We evaluate AraBERT models on different downstream tasks and compare them to [mBERT](https://github.com/google-research/bert/blob/master/multilingual.md) and other state-of-the-art models (*to the extent of our knowledge*). The tasks were Sentiment Analysis on 6 different datasets ([HARD](https://github.com/elnagara/HARD-Arabic-Dataset), [ASTD-Balanced](https://www.aclweb.org/anthology/D15-1299), [ArsenTD-Lev](https://staff.aub.edu.lb/~we07/Publications/ArSentD-LEV_Sentiment_Corpus.pdf), [LABR](https://github.com/mohamedadaly/LABR)), Named Entity Recognition with the [ANERcorp](http://curtis.ml.cmu.edu/w/courses/index.php/ANERcorp), and Arabic Question Answering on [Arabic-SQuAD and ARCD](https://github.com/husseinmozannar/SOQAL)

# AraBERTv2

## What's New!

AraBERT now comes in 4 new variants to replace the old v1 versions. More details are available in the AraBERT folder, in the [README](https://github.com/aub-mind/arabert/blob/master/AraBERT/README.md), and in the [AraBERT Paper](https://arxiv.org/abs/2003.00104v2)

Model | HuggingFace Model Name | Size (MB/Params) | Pre-Segmentation | DataSet (Sentences/Size/nWords) |
---|:---:|:---:|:---:|:---:
AraBERTv0.2-base | [bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) | 543MB / 136M | No | 200M / 77GB / 8.6B |
AraBERTv0.2-large | [bert-large-arabertv02](https://huggingface.co/aubmindlab/bert-large-arabertv02) | 1.38G / 371M | No | 200M / 77GB / 8.6B |
AraBERTv2-base | [bert-base-arabertv2](https://huggingface.co/aubmindlab/bert-base-arabertv2) | 543MB / 136M | Yes | 200M / 77GB / 8.6B |
AraBERTv2-large | [bert-large-arabertv2](https://huggingface.co/aubmindlab/bert-large-arabertv2) | 1.38G / 371M | Yes | 200M / 77GB / 8.6B |
AraBERTv0.1-base | [bert-base-arabertv01](https://huggingface.co/aubmindlab/bert-base-arabertv01) | 543MB / 136M | No | 77M / 23GB / 2.7B |
AraBERTv1-base | [bert-base-arabert](https://huggingface.co/aubmindlab/bert-base-arabert) | 543MB / 136M | Yes | 77M / 23GB / 2.7B |

All models are available on the `HuggingFace` model page under the [aubmindlab](https://huggingface.co/aubmindlab/) name. Checkpoints are available in PyTorch, TF2 and TF1 formats.

## Better Pre-Processing and New Vocab

We identified an issue with AraBERTv1's wordpiece vocabulary. The issue came from punctuations and numbers that were still attached to words when the wordpiece vocab was learned. We now insert a space between numbers and characters and around punctuation characters.

The new vocabulary was learnt using the `BertWordpieceTokenizer` from the `tokenizers` library, and should now support the Fast tokenizer implementation from the `transformers` library.

**P.S.**: All the old BERT code should work with the new BERT; just change the model name and check the new preprocessing function

**Please read the section on how to use the [preprocessing function](#Preprocessing)**

## Bigger Dataset and More Compute

We used ~3.5 times more data, and trained for longer. For dataset sources see the [Dataset Section](#Dataset)

Model | Hardware | num of examples with seq len (128 / 512) | 128 (Batch Size/ Num of Steps) | 512 (Batch Size/ Num of Steps) | Total Steps | Total Time (in Days) |
---|:---:|:---:|:---:|:---:|:---:|:---:
AraBERTv0.2-base | TPUv3-8 | 420M / 207M | 2560 / 1M | 384 / 2M | 3M | -
AraBERTv0.2-large | TPUv3-128 | 420M / 207M | 13440 / 250K | 2056 / 300K | 550K | -
AraBERTv2-base | TPUv3-8 | 520M / 245M | 13440 / 250K | 2056 / 300K | 550K | -
AraBERTv2-large | TPUv3-128 | 520M / 245M | 13440 / 250K | 2056 / 300K | 550K | -
AraBERT-base (v1/v0.1) | TPUv2-8 | - | 512 / 900K | 128 / 300K | 1.2M | 4 days

# Dataset

The pretraining data used for the new AraBERT model is also used for the Arabic **GPT2 and ELECTRA** models.

The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation)

For the new dataset we added the unshuffled OSCAR corpus, after thoroughly filtering it, to the dataset used in AraBERTv1, but without the websites that we previously crawled:
- OSCAR unshuffled and filtered.
- [Arabic Wikipedia dump](https://archive.org/details/arwiki-20190201) from 2020/09/01
- [The 1.5B words Arabic Corpus](https://www.semanticscholar.org/paper/1.5-billion-words-Arabic-Corpus-El-Khair/f3eeef4afb81223df96575adadf808fe7fe440b4)
- [The OSIAN Corpus](https://www.aclweb.org/anthology/W19-4619)
- Assafir news articles. A huge thank you to Assafir for giving us the data

# Preprocessing

It is recommended to apply our preprocessing function before training/testing on any dataset.

**Install farasapy to segment text for AraBERT v1 & v2: `pip install farasapy`**

```python
from arabert.preprocess import ArabertPreprocessor

model_name = "bert-base-arabert"
arabert_prep = ArabertPreprocessor(model_name=model_name)

text = "ولن نبالغ إذا قلنا إن هاتف أو كمبيوتر المكتب في زمننا هذا ضروري"
arabert_prep.preprocess(text)
>>> "و+ لن نبالغ إذا قل +نا إن هاتف أو كمبيوتر ال+ مكتب في زمن +نا هذا ضروري"
```

## Accepted_models

```
bert-base-arabertv01
bert-base-arabert
bert-base-arabertv02
bert-base-arabertv2
bert-large-arabertv02
bert-large-arabertv2
araelectra-base
aragpt2-base
aragpt2-medium
aragpt2-large
aragpt2-mega
```

# TensorFlow 1.x models

The TF1.x models are available in the HuggingFace models repo. You can download them as follows:

- via git-lfs: clone all the models in a repo

```bash
curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash
sudo apt-get install git-lfs
git lfs install
git clone https://huggingface.co/aubmindlab/MODEL_NAME
tar -C ./MODEL_NAME -zxvf /content/MODEL_NAME/tf1_model.tar.gz
```

where `MODEL_NAME` is any model under the `aubmindlab` name

- via `wget`:
  - Go to the tf1_model.tar.gz file on huggingface.co/models/aubmindlab/MODEL_NAME.
  - Copy the `oid sha256`.
  - Then run `wget https://cdn-lfs.huggingface.co/aubmindlab/aragpt2-base/INSERT_THE_SHA_HERE` (e.g. for `aragpt2-base`: `wget https://cdn-lfs.huggingface.co/aubmindlab/aragpt2-base/3766fc03d7c2593ff2fb991d275e96b81b0ecb2098b71ff315611d052ce65248`)

# If you used this model please cite us as:

Google Scholar has our Bibtex wrong (missing name), use this instead

```
@inproceedings{antoun2020arabert,
  title={AraBERT: Transformer-based Model for Arabic Language Understanding},
  author={Antoun, Wissam and Baly, Fady and Hajj, Hazem},
  booktitle={LREC 2020 Workshop Language Resources and Evaluation Conference 11--16 May 2020},
  pages={9}
}
```

# Acknowledgments

Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs, couldn't have done it without this program, and to the [AUB MIND Lab](https://sites.aub.edu.lb/mindlab/) members for the continuous support. Also thanks to [Yakshof](https://www.yakshof.com/#/) and Assafir for data and storage access. Thanks also to Habib Rahal (https://www.behance.net/rahalhabib) for putting a face to AraBERT.

## Contacts

**Wissam Antoun**: [Linkedin](https://www.linkedin.com/in/wissam-antoun-622142b4/) | [Twitter](https://twitter.com/wissam_antoun) | [Github](https://github.com/WissamAntoun) | <wfa07@mail.aub.edu> | <wissam.antoun@gmail.com>

**Fady Baly**: [Linkedin](https://www.linkedin.com/in/fadybaly/) | [Twitter](https://twitter.com/fadybaly) | [Github](https://github.com/fadybaly) | <fgb06@mail.aub.edu> | <baly.fady@gmail.com>
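For quick experimentation, a minimal fill-mask sketch using the widget text from this card (a sketch, assuming the standard `transformers` pipeline; remember that v1 models like this one expect Farasa-pre-segmented input, as produced by the preprocessor above):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="aubmindlab/bert-base-arabert")

# "The capital of Lebanon is [MASK]" in pre-segmented form, as in the card's widget
for prediction in fill_mask(" عاصم +ة لبنان هي [MASK] ."):
    print(prediction["token_str"], prediction["score"])
```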
anton-l/wav2vec2-large-xlsr-53-romanian
anton-l
"2021-07-05T20:20:21Z"
59,195
2
transformers
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "ro", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2022-03-02T23:29:05Z"
---
language: ro
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Romanian XLSR Wav2Vec2 Large 53 by Anton Lozhkov
  results:
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice ro
      type: common_voice
      args: ro
    metrics:
    - name: Test WER
      type: wer
      value: 24.84
---

# Wav2Vec2-Large-XLSR-53-Romanian

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Romanian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.

When using this model, make sure that your speech input is sampled at 16kHz.

## Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "ro", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-romanian")
model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-romanian")

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```

## Evaluation

The model can be evaluated as follows on the Romanian test data of Common Voice.

```python
import torch
import torchaudio
import urllib.request
import tarfile
import pandas as pd
from tqdm.auto import tqdm
from datasets import load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Download the raw data instead of using HF datasets to save disk space
data_url = "https://voice-prod-bundler-ee1969a6ce8178826482b88e843c335139bd3fb4.s3.amazonaws.com/cv-corpus-6.1-2020-12-11/ro.tar.gz"
filestream = urllib.request.urlopen(data_url)
data_file = tarfile.open(fileobj=filestream, mode="r|gz")
data_file.extractall()

wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-romanian")
model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-romanian")
model.to("cuda")

cv_test = pd.read_csv("cv-corpus-6.1-2020-12-11/ro/test.tsv", sep='\t')
clips_path = "cv-corpus-6.1-2020-12-11/ro/clips/"

def clean_sentence(sent):
    sent = sent.lower()
    # replace non-alpha characters with space
    sent = "".join(ch if ch.isalpha() else " " for ch in sent)
    # remove repeated spaces
    sent = " ".join(sent.split())
    return sent

targets = []
preds = []

for i, row in tqdm(cv_test.iterrows(), total=cv_test.shape[0]):
    row["sentence"] = clean_sentence(row["sentence"])
    speech_array, sampling_rate = torchaudio.load(clips_path + row["path"])
    resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
    row["speech"] = resampler(speech_array).squeeze().numpy()

    inputs = processor(row["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)

    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits

    pred_ids = torch.argmax(logits, dim=-1)

    targets.append(row["sentence"])
    preds.append(processor.batch_decode(pred_ids)[0])

print("WER: {:2f}".format(100 * wer.compute(predictions=preds, references=targets)))
```

**Test Result**: 24.84 %

## Training

The Common Voice `train` and `validation` datasets were used for training.
asapp/sew-d-tiny-100k-ft-ls100h
asapp
"2023-06-15T19:07:05Z"
59,154
2
transformers
[ "transformers", "pytorch", "safetensors", "sew-d", "automatic-speech-recognition", "audio", "speech", "hf-asr-leaderboard", "en", "dataset:librispeech_asr", "arxiv:2109.06870", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2022-03-02T23:29:05Z"
---
language: en
datasets:
- librispeech_asr
tags:
- audio
- speech
- automatic-speech-recognition
- hf-asr-leaderboard
license: apache-2.0
widget:
- example_title: Librispeech sample 1
  src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
  src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
model-index:
- name: sew-d-tiny-100k-ft-ls100h
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: LibriSpeech (clean)
      type: librispeech_asr
      config: clean
      split: test
      args:
        language: en
    metrics:
    - name: Test WER
      type: wer
      value: 10.47
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: LibriSpeech (other)
      type: librispeech_asr
      config: other
      split: test
      args:
        language: en
    metrics:
    - name: Test WER
      type: wer
      value: 22.73
---

# SEW-D-tiny

[SEW-D by ASAPP Research](https://github.com/asappresearch/sew)

The base model pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc.

Paper: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870)

Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi

**Abstract**

This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.

The original model can be found under https://github.com/asappresearch/sew#model-checkpoints .

# Usage

To transcribe audio files the model can be used as a standalone acoustic model as follows:

```python
from transformers import Wav2Vec2Processor, SEWDForCTC
from datasets import load_dataset
import soundfile as sf
import torch

# load the model and preprocessor
processor = Wav2Vec2Processor.from_pretrained("asapp/sew-d-tiny-100k-ft-ls100h")
model = SEWDForCTC.from_pretrained("asapp/sew-d-tiny-100k-ft-ls100h")

# load the dummy dataset with speech samples
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")

# preprocess
input_values = processor(ds[0]["audio"]["array"], return_tensors="pt").input_values  # Batch size 1

# retrieve logits
logits = model(input_values).logits

# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```

## Evaluation

This code snippet shows how to evaluate **asapp/sew-d-tiny-100k-ft-ls100h** on LibriSpeech's "clean" and "other" test data.

```python
from datasets import load_dataset
from transformers import SEWDForCTC, Wav2Vec2Processor
import torch
from jiwer import wer

librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")

model = SEWDForCTC.from_pretrained("asapp/sew-d-tiny-100k-ft-ls100h").to("cuda")
processor = Wav2Vec2Processor.from_pretrained("asapp/sew-d-tiny-100k-ft-ls100h")

def map_to_pred(batch):
    input_values = processor(batch["audio"][0]["array"], sampling_rate=16000, return_tensors="pt", padding="longest").input_values
    with torch.no_grad():
        logits = model(input_values.to("cuda")).logits

    predicted_ids = torch.argmax(logits, dim=-1)
    transcription = processor.batch_decode(predicted_ids)
    batch["transcription"] = transcription
    return batch

result = librispeech_eval.map(map_to_pred, batched=True, batch_size=1, remove_columns=["audio"])

print("WER:", wer(result["text"], result["transcription"]))
```

*Result (WER)*:

| "clean" | "other" |
| --- | --- |
| 10.47 | 22.73 |
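Since the model expects 16kHz input, here is a minimal, hedged sketch (assuming `torchaudio` is available; the file name is a placeholder) for resampling audio recorded at another rate before feeding it to the processor above:

```python
import torchaudio

# Load an arbitrary audio file; torchaudio reports its native sample rate
waveform, sample_rate = torchaudio.load("my_recording.wav")  # hypothetical file

# Resample to the 16kHz rate the model was trained on, if needed
if sample_rate != 16_000:
    waveform = torchaudio.transforms.Resample(sample_rate, 16_000)(waveform)

speech = waveform.squeeze().numpy()  # pass this array to the processor
```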
augmxnt/shisa-gamma-7b-v1
augmxnt
"2024-05-19T06:07:36Z"
58,892
15
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "ja", "en", "dataset:augmxnt/ultra-orca-boros-en-ja-v1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2023-12-23T20:21:44Z"
---
license: apache-2.0
datasets:
- augmxnt/ultra-orca-boros-en-ja-v1
language:
- ja
- en
---

# shisa-gamma-7b-v1

For more information, see our main [Shisa 7B](https://huggingface.co/augmxnt/shisa-7b-v1) model.

We applied a version of our fine-tune data set onto [Japanese Stable LM Base Gamma 7B](https://huggingface.co/stabilityai/japanese-stablelm-base-gamma-7b) and it performed pretty well; just sharing since it might be of interest.

Check out our [JA MT-Bench results](https://github.com/AUGMXNT/shisa/wiki/Evals-%3A-JA-MT%E2%80%90Bench).

![Comparison vs shisa-7b-v1](https://huggingface.co/augmxnt/shisa-gamma-7b-v1/resolve/main/shisa-comparison.png)

![Comparison vs other recently released JA models](https://huggingface.co/augmxnt/shisa-gamma-7b-v1/resolve/main/ja-comparison.png)
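Since the card ships no usage snippet, here is a minimal, hedged generation sketch (assumptions: standard `transformers` causal-LM loading works for this Mistral-based model, and the plain prompt below is illustrative only; check the main Shisa repo for the intended prompt/chat format):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "augmxnt/shisa-gamma-7b-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

prompt = "日本の首都はどこですか?"  # "What is the capital of Japan?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```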
openlm-research/open_llama_3b
openlm-research
"2023-06-16T00:44:10Z"
58,847
147
transformers
[ "transformers", "pytorch", "llama", "text-generation", "dataset:togethercomputer/RedPajama-Data-1T", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-06-07T09:06:48Z"
---
license: apache-2.0
datasets:
- togethercomputer/RedPajama-Data-1T
---

# OpenLLaMA: An Open Reproduction of LLaMA

In this repo, we present a permissively licensed open source reproduction of Meta AI's [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/) large language model. We are releasing a 7B and 3B model trained on 1T tokens, as well as a preview of a 13B model trained on 600B tokens. We provide PyTorch and JAX weights of pre-trained OpenLLaMA models, as well as evaluation results and comparison against the original LLaMA models. Please see the [project homepage of OpenLLaMA](https://github.com/openlm-research/open_llama) for more details.

## Weights Release, License and Usage

We release the weights in two formats: an EasyLM format to be used with our [EasyLM framework](https://github.com/young-geng/EasyLM), and a PyTorch format to be used with the [Hugging Face transformers](https://huggingface.co/docs/transformers/index) library. Both our training framework EasyLM and the checkpoint weights are licensed permissively under the Apache 2.0 license.

### Loading the Weights with Hugging Face Transformers

Preview checkpoints can be directly loaded from Hugging Face Hub. **Please note that it is advised to avoid using the Hugging Face fast tokenizer for now, as we've observed that the auto-converted fast tokenizer sometimes gives incorrect tokenizations.** This can be achieved by directly using the `LlamaTokenizer` class, or passing in the `use_fast=False` option to the `AutoTokenizer` class. See the following example for usage.

```python
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM

model_path = 'openlm-research/open_llama_3b'
# model_path = 'openlm-research/open_llama_7b'

tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.float16, device_map='auto',
)

prompt = 'Q: What is the largest animal?\nA:'
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

generation_output = model.generate(
    input_ids=input_ids, max_new_tokens=32
)
print(tokenizer.decode(generation_output[0]))
```

For more advanced usage, please follow the [transformers LLaMA documentation](https://huggingface.co/docs/transformers/main/model_doc/llama).

### Evaluating with LM-Eval-Harness

The model can be evaluated with [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness). However, due to the aforementioned tokenizer issue, we need to avoid using the fast tokenizer to obtain the correct results. This can be achieved by passing in `use_fast=False` to [this part of lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness/blob/4b701e228768052cfae9043dca13e82052ca5eea/lm_eval/models/huggingface.py#LL313C9-L316C10), as shown in the example below:

```python
tokenizer = self.AUTO_TOKENIZER_CLASS.from_pretrained(
    pretrained if tokenizer is None else tokenizer,
    revision=revision + ("/" + subfolder if subfolder is not None else ""),
    use_fast=False
)
```

### Loading the Weights with EasyLM

For using the weights in our EasyLM framework, please refer to the [LLaMA documentation of EasyLM](https://github.com/young-geng/EasyLM/blob/main/docs/llama.md). Note that unlike the original LLaMA model, our OpenLLaMA tokenizer and weights are trained completely from scratch, so it is no longer needed to obtain the original LLaMA tokenizer and weights.
Note that we use the BOS (beginning of sentence) token (id=1) during training, so it is best to prepend this token for best performance during few-shot evaluation.

## Dataset and Training

We train our models on the [RedPajama](https://www.together.xyz/blog/redpajama) dataset released by [Together](https://www.together.xyz/), which is a reproduction of the LLaMA training dataset containing over 1.2 trillion tokens. We follow the exact same preprocessing steps and training hyperparameters as the original LLaMA paper, including model architecture, context length, training steps, learning rate schedule, and optimizer. The only difference between our setting and the original one is the dataset used: OpenLLaMA employs the RedPajama dataset rather than the one utilized by the original LLaMA.

We train the models on cloud TPU-v4s using [EasyLM](https://github.com/young-geng/EasyLM), a JAX based training pipeline we developed for training and fine-tuning large language models. We employ a combination of normal data parallelism and [fully sharded data parallelism (also known as ZeRO stage 3)](https://engineering.fb.com/2021/07/15/open-source/fsdp/) to balance the training throughput and memory usage. Overall we reach a throughput of over 2200 tokens / second / TPU-v4 chip for our 7B model.

## Evaluation

We evaluated OpenLLaMA on a wide range of tasks using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). The LLaMA results are generated by running the original LLaMA model on the same evaluation metrics. We note that our results for the LLaMA model differ slightly from the original LLaMA paper, which we believe is a result of different evaluation protocols. Similar differences have been reported in [this issue of lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/issues/443).

Additionally, we present the results of GPT-J, a 6B parameter model trained on the [Pile](https://pile.eleuther.ai/) dataset by [EleutherAI](https://www.eleuther.ai/).

The original LLaMA model was trained for 1 trillion tokens and GPT-J was trained for 500 billion tokens. We present the results in the table below. OpenLLaMA exhibits comparable performance to the original LLaMA and GPT-J across a majority of tasks, and outperforms them in some tasks.
| **Task/Metric**        | GPT-J 6B | LLaMA 7B | OpenLLaMA 7B | OpenLLaMA 3B | OpenLLaMA 13B 600BT |
| ---------------------- | -------- | -------- | ------------ | ------------ | ------------------- |
| anli_r1/acc            | 0.32     | 0.35     | 0.33         | 0.33         | 0.33                |
| anli_r2/acc            | 0.34     | 0.34     | 0.36         | 0.32         | 0.35                |
| anli_r3/acc            | 0.35     | 0.37     | 0.38         | 0.35         | 0.38                |
| arc_challenge/acc      | 0.34     | 0.39     | 0.37         | 0.34         | 0.39                |
| arc_challenge/acc_norm | 0.37     | 0.41     | 0.38         | 0.37         | 0.42                |
| arc_easy/acc           | 0.67     | 0.68     | 0.72         | 0.69         | 0.74                |
| arc_easy/acc_norm      | 0.62     | 0.52     | 0.68         | 0.65         | 0.70                |
| boolq/acc              | 0.50     | 0.56     | 0.53         | 0.49         | 0.71                |
| hellaswag/acc          | 0.36     | 0.36     | 0.63         | 0.43         | 0.54                |
| hellaswag/acc_norm     | 0.66     | 0.73     | 0.72         | 0.67         | 0.73                |
| openbookqa/acc         | 0.29     | 0.29     | 0.30         | 0.27         | 0.30                |
| openbookqa/acc_norm    | 0.38     | 0.41     | 0.40         | 0.40         | 0.41                |
| piqa/acc               | 0.75     | 0.78     | 0.76         | 0.75         | 0.77                |
| piqa/acc_norm          | 0.76     | 0.78     | 0.77         | 0.76         | 0.78                |
| record/em              | 0.88     | 0.91     | 0.89         | 0.88         | 0.90                |
| record/f1              | 0.89     | 0.91     | 0.90         | 0.89         | 0.90                |
| rte/acc                | 0.54     | 0.56     | 0.60         | 0.58         | 0.65                |
| truthfulqa_mc/mc1      | 0.20     | 0.21     | 0.23         | 0.22         | 0.22                |
| truthfulqa_mc/mc2      | 0.36     | 0.34     | 0.35         | 0.35         | 0.35                |
| wic/acc                | 0.50     | 0.50     | 0.51         | 0.48         | 0.49                |
| winogrande/acc         | 0.64     | 0.68     | 0.67         | 0.62         | 0.67                |
| Average                | 0.51     | 0.53     | 0.55         | 0.52         | 0.56                |

We removed the tasks CB and WSC from our benchmark, as our model performs suspiciously well on these two tasks. We hypothesize that there could be benchmark data contamination in the training set.

## Contact

We would love to get feedback from the community. If you have any questions, please open an issue or contact us.

OpenLLaMA is developed by: [Xinyang Geng](https://young-geng.xyz/)* and [Hao Liu](https://www.haoliu.site/)* from Berkeley AI Research. *Equal Contribution

## Acknowledgment

We thank the [Google TPU Research Cloud](https://sites.research.google/trc/about/) program for providing part of the computation resources. We'd like to especially thank Jonathan Caton from TPU Research Cloud for helping us organize compute resources, and Rafi Witten from the Google Cloud team and James Bradbury from the Google JAX team for helping us optimize our training throughput. We'd also like to thank Charlie Snell, Gautier Izacard, Eric Wallace, Lianmin Zheng and our user community for the discussions and feedback.

The OpenLLaMA 13B model is trained in collaboration with [Stability AI](https://stability.ai/), and we thank Stability AI for providing the computation resources. We'd like to especially thank David Ha and Shivanshu Purohit for coordinating the logistics and providing engineering support.
## Reference

If you find OpenLLaMA useful in your research or applications, please cite using the following BibTeX:

```
@software{openlm2023openllama,
  author = {Geng, Xinyang and Liu, Hao},
  title = {OpenLLaMA: An Open Reproduction of LLaMA},
  month = {May},
  year = {2023},
  url = {https://github.com/openlm-research/open_llama}
}
```
```
@software{together2023redpajama,
  author = {Together Computer},
  title = {RedPajama-Data: An Open Source Recipe to Reproduce LLaMA training dataset},
  month = {April},
  year = {2023},
  url = {https://github.com/togethercomputer/RedPajama-Data}
}
```
```
@article{touvron2023llama,
  title={LLaMA: Open and Efficient Foundation Language Models},
  author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and others},
  journal={arXiv preprint arXiv:2302.13971},
  year={2023}
}
```
rollerhafeezh-amikom/xlm-roberta-base-ner-silvanus
rollerhafeezh-amikom
"2024-04-12T07:23:14Z"
58,584
0
transformers
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "token-classification", "silvanus", "id", "en", "es", "it", "sk", "arxiv:1911.02116", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2023-11-09T01:46:51Z"
---
license: mit
base_model: xlm-roberta-base
tags:
- silvanus
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: xlm-roberta-base-ner-silvanus
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: id_nergrit_corpus
      type: id_nergrit_corpus
      config: ner
      split: validation
      args: ner
    metrics:
    - name: Precision
      type: precision
      value: 0.918918918918919
    - name: Recall
      type: recall
      value: 0.9272727272727272
    - name: F1
      type: f1
      value: 0.9230769230769231
    - name: Accuracy
      type: accuracy
      value: 0.9858518778229216
language:
- id
- en
- es
- it
- sk
pipeline_tag: token-classification
widget:
- text: >-
    Kebakaran hutan dan lahan terus terjadi dan semakin meluas di Kota
    Palangkaraya, Kalimantan Tengah (Kalteng) pada hari Rabu, 15 Nopember 2023
    20.00 WIB. Bahkan kobaran api mulai membakar pondok warga dan mendekati
    permukiman. BZK #RCTINews #SeputariNews #News #Karhutla #KebakaranHutan
    #HutanKalimantan #SILVANUS_Italian_Pilot_Testing
  example_title: Indonesia
- text: >-
    Wildfire rages for a second day in Evia destroying a Natura 2000 protected
    pine forest. - 5:51 PM Aug 14, 2019
  example_title: English
- text: >-
    3 nov 2023 21:57 - Incendio forestal obliga a la evacuación de hasta 850
    personas cerca del pueblo de Montichelvo en Valencia.
  example_title: Spanish
- text: >-
    Incendi boschivi nell'est del Paese: 2 morti e oltre 50 case distrutte
    nello stato del Queensland.
  example_title: Italian
- text: >-
    Lesné požiare na Sicílii si vyžiadali dva ľudské životy a evakuáciu hotela
    http://dlvr.it/SwW3sC - 23. septembra 2023 20:57
  example_title: Slovak
---

# xlm-roberta-base-ner-silvanus

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the Indonesian NER dataset. It achieves the following results on the evaluation set:
- Loss: 0.0567
- Precision: 0.9189
- Recall: 0.9273
- F1: 0.9231
- Accuracy: 0.9859

## Model description

The XLM-RoBERTa model was proposed in [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov. It is based on Facebook's RoBERTa model released in 2019, and is a large multilingual language model trained on 2.5 TB of filtered CommonCrawl data.

- **Developed by:** See [associated paper](https://arxiv.org/abs/1911.02116)
- **Model type:** Multilingual model
- **Language(s) (NLP):** XLM-RoBERTa is a multilingual model trained on 100 different languages; see the [GitHub Repo](https://github.com/facebookresearch/fairseq/tree/main/examples/xlmr) for the full list. This checkpoint is fine-tuned on an Indonesian NER dataset.
- **License:** MIT
- **Related Models:** [RoBERTa](https://huggingface.co/roberta-base), [XLM](https://huggingface.co/docs/transformers/model_doc/xlm)
- **Parent Model:** [XLM-RoBERTa](https://huggingface.co/xlm-roberta-base)
- **Resources for more information:** [GitHub Repo](https://github.com/facebookresearch/fairseq/tree/main/examples/xlmr)

## Intended uses & limitations

This model can be used to extract multilingual information such as locations, dates, and times from social media posts (Twitter, etc.).
This model is limited by its Indonesian-language training set. It has additionally been tested on four languages (English, Spanish, Italian, and Slovak) using zero-shot cross-lingual transfer to extract the same kinds of information.

## Training and evaluation data

This model was fine-tuned on Indonesian NER datasets.

Abbreviation|Description
-|-
O|Outside of a named entity
B-LOC|Beginning of a location right after another location
I-LOC|Location
B-DAT|Beginning of a date right after another date
I-DAT|Date
B-TIM|Beginning of a time right after another time
I-TIM|Time

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1394        | 1.0   | 827  | 0.0559          | 0.8808    | 0.9257 | 0.9027 | 0.9842   |
| 0.0468        | 2.0   | 1654 | 0.0575          | 0.9107    | 0.9190 | 0.9148 | 0.9849   |
| 0.0279        | 3.0   | 2481 | 0.0567          | 0.9189    | 0.9273 | 0.9231 | 0.9859   |

### Framework versions

- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
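As a usage illustration (not part of the original card), a minimal sketch with the `transformers` token-classification pipeline might look like the following; the checkpoint ID matches this repository, and the English input exercises the zero-shot cross-lingual transfer described above:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint for token classification;
# aggregation_strategy="simple" merges word pieces into whole entities.
ner = pipeline(
    "token-classification",
    model="rollerhafeezh-amikom/xlm-roberta-base-ner-silvanus",
    aggregation_strategy="simple",
)

text = ("Wildfire rages for a second day in Evia destroying a Natura 2000 "
        "protected pine forest. - 5:51 PM Aug 14, 2019")
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```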
laion/clap-htsat-unfused
laion
"2023-04-24T14:39:57Z"
58,443
40
transformers
[ "transformers", "pytorch", "clap", "feature-extraction", "arxiv:2211.06687", "license:apache-2.0", "endpoints_compatible", "region:us" ]
feature-extraction
"2023-02-16T20:47:08Z"
---
license: apache-2.0
---

# Model card for CLAP

Model card for CLAP: Contrastive Language-Audio Pretraining

![clap_image](https://s3.amazonaws.com/moonup/production/uploads/1678811100805-62441d1d9fdefb55a0b7d12c.png)

# Table of Contents

0. [TL;DR](#TL;DR)
1. [Model Details](#model-details)
2. [Usage](#usage)
3. [Uses](#uses)
4. [Citation](#citation)

# TL;DR

The abstract of the paper states that:

> Contrastive learning has shown remarkable success in the field of multimodal representation learning. In this paper, we propose a pipeline of contrastive language-audio pretraining to develop an audio representation by combining audio data with natural language descriptions. To accomplish this target, we first release LAION-Audio-630K, a large collection of 633,526 audio-text pairs from different data sources. Second, we construct a contrastive language-audio pretraining model by considering different audio encoders and text encoders. We incorporate the feature fusion mechanism and keyword-to-caption augmentation into the model design to further enable the model to process audio inputs of variable lengths and enhance the performance. Third, we perform comprehensive experiments to evaluate our model across three tasks: text-to-audio retrieval, zero-shot audio classification, and supervised audio classification. The results demonstrate that our model achieves superior performance in text-to-audio retrieval task. In audio classification tasks, the model achieves state-of-the-art performance in the zero-shot setting and is able to obtain performance comparable to models' results in the non-zero-shot setting. LAION-Audio-630K and the proposed model are both available to the public.

# Usage

You can use this model for zero-shot audio classification or for extracting audio and/or textual features.
# Uses

## Perform zero-shot audio classification

### Using `pipeline`

```python
from datasets import load_dataset
from transformers import pipeline

dataset = load_dataset("ashraq/esc50")
audio = dataset["train"]["audio"][-1]["array"]

audio_classifier = pipeline(task="zero-shot-audio-classification", model="laion/clap-htsat-unfused")
output = audio_classifier(audio, candidate_labels=["Sound of a dog", "Sound of vacuum cleaner"])
print(output)
>>> [{"score": 0.999, "label": "Sound of a dog"}, {"score": 0.001, "label": "Sound of vacuum cleaner"}]
```

## Run the model:

You can also get the audio and text embeddings using `ClapModel`. Audio embeddings are shown below; a text-embedding sketch follows the citation at the end of this card.

### Run the model on CPU:

```python
from datasets import load_dataset
from transformers import ClapModel, ClapProcessor

librispeech_dummy = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
audio_sample = librispeech_dummy[0]

model = ClapModel.from_pretrained("laion/clap-htsat-unfused")
processor = ClapProcessor.from_pretrained("laion/clap-htsat-unfused")

inputs = processor(audios=audio_sample["audio"]["array"], return_tensors="pt")
audio_embed = model.get_audio_features(**inputs)
```

### Run the model on GPU:

```python
from datasets import load_dataset
from transformers import ClapModel, ClapProcessor

librispeech_dummy = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
audio_sample = librispeech_dummy[0]

model = ClapModel.from_pretrained("laion/clap-htsat-unfused").to(0)
processor = ClapProcessor.from_pretrained("laion/clap-htsat-unfused")

inputs = processor(audios=audio_sample["audio"]["array"], return_tensors="pt").to(0)
audio_embed = model.get_audio_features(**inputs)
```

# Citation

If you are using this model for your work, please consider citing the original paper:

```
@misc{https://doi.org/10.48550/arxiv.2211.06687,
  doi = {10.48550/ARXIV.2211.06687},
  url = {https://arxiv.org/abs/2211.06687},
  author = {Wu, Yusong and Chen, Ke and Zhang, Tianyu and Hui, Yuchen and Berg-Kirkpatrick, Taylor and Dubnov, Shlomo},
  keywords = {Sound (cs.SD), Audio and Speech Processing (eess.AS), FOS: Computer and information sciences, FOS: Electrical engineering, electronic engineering, information engineering},
  title = {Large-scale Contrastive Language-Audio Pretraining with Feature Fusion and Keyword-to-Caption Augmentation},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}
```
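As referenced above, here is a minimal sketch (not from the original card) for extracting text embeddings with `ClapModel`, assuming the same checkpoint and `ClapProcessor`:

```python
from transformers import ClapModel, ClapProcessor

model = ClapModel.from_pretrained("laion/clap-htsat-unfused")
processor = ClapProcessor.from_pretrained("laion/clap-htsat-unfused")

# The processor tokenizes raw text; get_text_features returns one embedding per input.
inputs = processor(text=["Sound of a dog", "Sound of vacuum cleaner"],
                   return_tensors="pt", padding=True)
text_embed = model.get_text_features(**inputs)
print(text_embed.shape)  # (2, projection_dim)
```

These text embeddings live in the same space as the audio embeddings from `get_audio_features`, so cosine similarity between the two can be used for retrieval or zero-shot classification.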
jinaai/jina-embeddings-v2-base-de
jinaai
"2024-08-06T14:41:13Z"
58,383
63
sentence-transformers
[ "sentence-transformers", "pytorch", "onnx", "safetensors", "bert", "fill-mask", "feature-extraction", "sentence-similarity", "mteb", "transformers", "transformers.js", "custom_code", "de", "en", "arxiv:2108.12409", "arxiv:2402.17016", "license:apache-2.0", "model-index", "autotrain_compatible", "text-embeddings-inference", "region:eu" ]
feature-extraction
"2024-01-12T14:04:50Z"
--- tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb - transformers - transformers.js language: - de - en inference: false license: apache-2.0 model-index: - name: jina-embeddings-v2-base-de results: - task: type: Classification dataset: type: mteb/amazon_counterfactual name: MTEB AmazonCounterfactualClassification (en) config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 73.76119402985076 - type: ap value: 35.99577188521176 - type: f1 value: 67.50397431543269 - task: type: Classification dataset: type: mteb/amazon_counterfactual name: MTEB AmazonCounterfactualClassification (de) config: de split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 68.9186295503212 - type: ap value: 79.73307115840507 - type: f1 value: 66.66245744831339 - task: type: Classification dataset: type: mteb/amazon_polarity name: MTEB AmazonPolarityClassification config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 77.52215 - type: ap value: 71.85051037177416 - type: f1 value: 77.4171096157774 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (en) config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 38.498 - type: f1 value: 38.058193386555956 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (de) config: de split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 37.717999999999996 - type: f1 value: 37.22674371574757 - task: type: Retrieval dataset: type: arguana name: MTEB ArguAna config: default split: test revision: None metrics: - type: map_at_1 value: 25.319999999999997 - type: map_at_10 value: 40.351 - type: map_at_100 value: 41.435 - type: map_at_1000 value: 41.443000000000005 - type: map_at_3 value: 35.266 - type: map_at_5 value: 37.99 - type: mrr_at_1 value: 25.746999999999996 - type: mrr_at_10 value: 40.515 - type: mrr_at_100 value: 41.606 - type: mrr_at_1000 value: 41.614000000000004 - type: mrr_at_3 value: 35.42 - type: mrr_at_5 value: 38.112 - type: ndcg_at_1 value: 25.319999999999997 - type: ndcg_at_10 value: 49.332 - type: ndcg_at_100 value: 53.909 - type: ndcg_at_1000 value: 54.089 - type: ndcg_at_3 value: 38.705 - type: ndcg_at_5 value: 43.606 - type: precision_at_1 value: 25.319999999999997 - type: precision_at_10 value: 7.831 - type: precision_at_100 value: 0.9820000000000001 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 16.24 - type: precision_at_5 value: 12.119 - type: recall_at_1 value: 25.319999999999997 - type: recall_at_10 value: 78.307 - type: recall_at_100 value: 98.222 - type: recall_at_1000 value: 99.57300000000001 - type: recall_at_3 value: 48.72 - type: recall_at_5 value: 60.597 - task: type: Clustering dataset: type: mteb/arxiv-clustering-p2p name: MTEB ArxivClusteringP2P config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 41.43100588255654 - task: type: Clustering dataset: type: mteb/arxiv-clustering-s2s name: MTEB ArxivClusteringS2S config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 32.08988904593667 - task: type: Reranking dataset: type: mteb/askubuntudupquestions-reranking name: MTEB AskUbuntuDupQuestions config: default split: test revision: 
2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 60.55514765595906 - type: mrr value: 73.51393835465858 - task: type: STS dataset: type: mteb/biosses-sts name: MTEB BIOSSES config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 79.6723823121172 - type: cos_sim_spearman value: 76.90596922214986 - type: euclidean_pearson value: 77.87910737957918 - type: euclidean_spearman value: 76.66319260598262 - type: manhattan_pearson value: 77.37039493457965 - type: manhattan_spearman value: 76.09872191280964 - task: type: BitextMining dataset: type: mteb/bucc-bitext-mining name: MTEB BUCC (de-en) config: de-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 98.97703549060543 - type: f1 value: 98.86569241475296 - type: precision value: 98.81002087682673 - type: recall value: 98.97703549060543 - task: type: Classification dataset: type: mteb/banking77 name: MTEB Banking77Classification config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 83.93506493506493 - type: f1 value: 83.91014949949302 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-p2p name: MTEB BiorxivClusteringP2P config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 34.970675877585144 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-s2s name: MTEB BiorxivClusteringS2S config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 28.779230269190954 - task: type: Clustering dataset: type: slvnwhrl/blurbs-clustering-p2p name: MTEB BlurbsClusteringP2P config: default split: test revision: a2dd5b02a77de3466a3eaa98ae586b5610314496 metrics: - type: v_measure value: 35.490175601567216 - task: type: Clustering dataset: type: slvnwhrl/blurbs-clustering-s2s name: MTEB BlurbsClusteringS2S config: default split: test revision: 9bfff9a7f8f6dc6ffc9da71c48dd48b68696471d metrics: - type: v_measure value: 16.16638280560168 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackAndroidRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 30.830999999999996 - type: map_at_10 value: 41.355 - type: map_at_100 value: 42.791000000000004 - type: map_at_1000 value: 42.918 - type: map_at_3 value: 38.237 - type: map_at_5 value: 40.066 - type: mrr_at_1 value: 38.484 - type: mrr_at_10 value: 47.593 - type: mrr_at_100 value: 48.388 - type: mrr_at_1000 value: 48.439 - type: mrr_at_3 value: 45.279 - type: mrr_at_5 value: 46.724 - type: ndcg_at_1 value: 38.484 - type: ndcg_at_10 value: 47.27 - type: ndcg_at_100 value: 52.568000000000005 - type: ndcg_at_1000 value: 54.729000000000006 - type: ndcg_at_3 value: 43.061 - type: ndcg_at_5 value: 45.083 - type: precision_at_1 value: 38.484 - type: precision_at_10 value: 8.927 - type: precision_at_100 value: 1.425 - type: precision_at_1000 value: 0.19 - type: precision_at_3 value: 20.791999999999998 - type: precision_at_5 value: 14.85 - type: recall_at_1 value: 30.830999999999996 - type: recall_at_10 value: 57.87799999999999 - type: recall_at_100 value: 80.124 - type: recall_at_1000 value: 94.208 - type: recall_at_3 value: 45.083 - type: recall_at_5 value: 51.154999999999994 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackEnglishRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 25.782 - type: 
map_at_10 value: 34.492 - type: map_at_100 value: 35.521 - type: map_at_1000 value: 35.638 - type: map_at_3 value: 31.735999999999997 - type: map_at_5 value: 33.339 - type: mrr_at_1 value: 32.357 - type: mrr_at_10 value: 39.965 - type: mrr_at_100 value: 40.644000000000005 - type: mrr_at_1000 value: 40.695 - type: mrr_at_3 value: 37.739 - type: mrr_at_5 value: 39.061 - type: ndcg_at_1 value: 32.357 - type: ndcg_at_10 value: 39.644 - type: ndcg_at_100 value: 43.851 - type: ndcg_at_1000 value: 46.211999999999996 - type: ndcg_at_3 value: 35.675000000000004 - type: ndcg_at_5 value: 37.564 - type: precision_at_1 value: 32.357 - type: precision_at_10 value: 7.344 - type: precision_at_100 value: 1.201 - type: precision_at_1000 value: 0.168 - type: precision_at_3 value: 17.155 - type: precision_at_5 value: 12.166 - type: recall_at_1 value: 25.782 - type: recall_at_10 value: 49.132999999999996 - type: recall_at_100 value: 67.24 - type: recall_at_1000 value: 83.045 - type: recall_at_3 value: 37.021 - type: recall_at_5 value: 42.548 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGamingRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 35.778999999999996 - type: map_at_10 value: 47.038000000000004 - type: map_at_100 value: 48.064 - type: map_at_1000 value: 48.128 - type: map_at_3 value: 44.186 - type: map_at_5 value: 45.788000000000004 - type: mrr_at_1 value: 41.254000000000005 - type: mrr_at_10 value: 50.556999999999995 - type: mrr_at_100 value: 51.296 - type: mrr_at_1000 value: 51.331 - type: mrr_at_3 value: 48.318 - type: mrr_at_5 value: 49.619 - type: ndcg_at_1 value: 41.254000000000005 - type: ndcg_at_10 value: 52.454 - type: ndcg_at_100 value: 56.776 - type: ndcg_at_1000 value: 58.181000000000004 - type: ndcg_at_3 value: 47.713 - type: ndcg_at_5 value: 49.997 - type: precision_at_1 value: 41.254000000000005 - type: precision_at_10 value: 8.464 - type: precision_at_100 value: 1.157 - type: precision_at_1000 value: 0.133 - type: precision_at_3 value: 21.526 - type: precision_at_5 value: 14.696000000000002 - type: recall_at_1 value: 35.778999999999996 - type: recall_at_10 value: 64.85300000000001 - type: recall_at_100 value: 83.98400000000001 - type: recall_at_1000 value: 94.18299999999999 - type: recall_at_3 value: 51.929 - type: recall_at_5 value: 57.666 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGisRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 21.719 - type: map_at_10 value: 29.326999999999998 - type: map_at_100 value: 30.314000000000004 - type: map_at_1000 value: 30.397000000000002 - type: map_at_3 value: 27.101 - type: map_at_5 value: 28.141 - type: mrr_at_1 value: 23.503 - type: mrr_at_10 value: 31.225 - type: mrr_at_100 value: 32.096000000000004 - type: mrr_at_1000 value: 32.159 - type: mrr_at_3 value: 29.076999999999998 - type: mrr_at_5 value: 30.083 - type: ndcg_at_1 value: 23.503 - type: ndcg_at_10 value: 33.842 - type: ndcg_at_100 value: 39.038000000000004 - type: ndcg_at_1000 value: 41.214 - type: ndcg_at_3 value: 29.347 - type: ndcg_at_5 value: 31.121 - type: precision_at_1 value: 23.503 - type: precision_at_10 value: 5.266 - type: precision_at_100 value: 0.831 - type: precision_at_1000 value: 0.106 - type: precision_at_3 value: 12.504999999999999 - type: precision_at_5 value: 8.565000000000001 - type: recall_at_1 value: 21.719 - type: recall_at_10 value: 46.024 - type: recall_at_100 value: 70.78999999999999 - type: recall_at_1000 value: 87.022 - 
type: recall_at_3 value: 33.64 - type: recall_at_5 value: 37.992 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackMathematicaRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 15.601 - type: map_at_10 value: 22.054000000000002 - type: map_at_100 value: 23.177 - type: map_at_1000 value: 23.308 - type: map_at_3 value: 19.772000000000002 - type: map_at_5 value: 21.055 - type: mrr_at_1 value: 19.403000000000002 - type: mrr_at_10 value: 26.409 - type: mrr_at_100 value: 27.356 - type: mrr_at_1000 value: 27.441 - type: mrr_at_3 value: 24.108999999999998 - type: mrr_at_5 value: 25.427 - type: ndcg_at_1 value: 19.403000000000002 - type: ndcg_at_10 value: 26.474999999999998 - type: ndcg_at_100 value: 32.086 - type: ndcg_at_1000 value: 35.231 - type: ndcg_at_3 value: 22.289 - type: ndcg_at_5 value: 24.271 - type: precision_at_1 value: 19.403000000000002 - type: precision_at_10 value: 4.813 - type: precision_at_100 value: 0.8869999999999999 - type: precision_at_1000 value: 0.13 - type: precision_at_3 value: 10.531 - type: precision_at_5 value: 7.710999999999999 - type: recall_at_1 value: 15.601 - type: recall_at_10 value: 35.916 - type: recall_at_100 value: 60.8 - type: recall_at_1000 value: 83.245 - type: recall_at_3 value: 24.321 - type: recall_at_5 value: 29.372999999999998 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackPhysicsRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 25.522 - type: map_at_10 value: 34.854 - type: map_at_100 value: 36.269 - type: map_at_1000 value: 36.387 - type: map_at_3 value: 32.187 - type: map_at_5 value: 33.692 - type: mrr_at_1 value: 31.375999999999998 - type: mrr_at_10 value: 40.471000000000004 - type: mrr_at_100 value: 41.481 - type: mrr_at_1000 value: 41.533 - type: mrr_at_3 value: 38.274 - type: mrr_at_5 value: 39.612 - type: ndcg_at_1 value: 31.375999999999998 - type: ndcg_at_10 value: 40.298 - type: ndcg_at_100 value: 46.255 - type: ndcg_at_1000 value: 48.522 - type: ndcg_at_3 value: 36.049 - type: ndcg_at_5 value: 38.095 - type: precision_at_1 value: 31.375999999999998 - type: precision_at_10 value: 7.305000000000001 - type: precision_at_100 value: 1.201 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 17.132 - type: precision_at_5 value: 12.107999999999999 - type: recall_at_1 value: 25.522 - type: recall_at_10 value: 50.988 - type: recall_at_100 value: 76.005 - type: recall_at_1000 value: 91.11200000000001 - type: recall_at_3 value: 38.808 - type: recall_at_5 value: 44.279 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackProgrammersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 24.615000000000002 - type: map_at_10 value: 32.843 - type: map_at_100 value: 34.172999999999995 - type: map_at_1000 value: 34.286 - type: map_at_3 value: 30.125 - type: map_at_5 value: 31.495 - type: mrr_at_1 value: 30.023 - type: mrr_at_10 value: 38.106 - type: mrr_at_100 value: 39.01 - type: mrr_at_1000 value: 39.071 - type: mrr_at_3 value: 35.674 - type: mrr_at_5 value: 36.924 - type: ndcg_at_1 value: 30.023 - type: ndcg_at_10 value: 38.091 - type: ndcg_at_100 value: 43.771 - type: ndcg_at_1000 value: 46.315 - type: ndcg_at_3 value: 33.507 - type: ndcg_at_5 value: 35.304 - type: precision_at_1 value: 30.023 - type: precision_at_10 value: 6.837999999999999 - type: precision_at_100 value: 1.124 - type: precision_at_1000 value: 0.152 - type: precision_at_3 value: 
15.562999999999999 - type: precision_at_5 value: 10.936 - type: recall_at_1 value: 24.615000000000002 - type: recall_at_10 value: 48.691 - type: recall_at_100 value: 72.884 - type: recall_at_1000 value: 90.387 - type: recall_at_3 value: 35.659 - type: recall_at_5 value: 40.602 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 23.223666666666666 - type: map_at_10 value: 31.338166666666673 - type: map_at_100 value: 32.47358333333333 - type: map_at_1000 value: 32.5955 - type: map_at_3 value: 28.84133333333333 - type: map_at_5 value: 30.20808333333333 - type: mrr_at_1 value: 27.62483333333333 - type: mrr_at_10 value: 35.385916666666674 - type: mrr_at_100 value: 36.23325 - type: mrr_at_1000 value: 36.29966666666667 - type: mrr_at_3 value: 33.16583333333333 - type: mrr_at_5 value: 34.41983333333334 - type: ndcg_at_1 value: 27.62483333333333 - type: ndcg_at_10 value: 36.222 - type: ndcg_at_100 value: 41.29491666666666 - type: ndcg_at_1000 value: 43.85508333333333 - type: ndcg_at_3 value: 31.95116666666667 - type: ndcg_at_5 value: 33.88541666666667 - type: precision_at_1 value: 27.62483333333333 - type: precision_at_10 value: 6.339916666666667 - type: precision_at_100 value: 1.0483333333333333 - type: precision_at_1000 value: 0.14608333333333334 - type: precision_at_3 value: 14.726500000000003 - type: precision_at_5 value: 10.395 - type: recall_at_1 value: 23.223666666666666 - type: recall_at_10 value: 46.778999999999996 - type: recall_at_100 value: 69.27141666666667 - type: recall_at_1000 value: 87.27383333333334 - type: recall_at_3 value: 34.678749999999994 - type: recall_at_5 value: 39.79900000000001 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackStatsRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 21.677 - type: map_at_10 value: 27.828000000000003 - type: map_at_100 value: 28.538999999999998 - type: map_at_1000 value: 28.64 - type: map_at_3 value: 26.105 - type: map_at_5 value: 27.009 - type: mrr_at_1 value: 24.387 - type: mrr_at_10 value: 30.209999999999997 - type: mrr_at_100 value: 30.953000000000003 - type: mrr_at_1000 value: 31.029 - type: mrr_at_3 value: 28.707 - type: mrr_at_5 value: 29.610999999999997 - type: ndcg_at_1 value: 24.387 - type: ndcg_at_10 value: 31.378 - type: ndcg_at_100 value: 35.249 - type: ndcg_at_1000 value: 37.923 - type: ndcg_at_3 value: 28.213 - type: ndcg_at_5 value: 29.658 - type: precision_at_1 value: 24.387 - type: precision_at_10 value: 4.8309999999999995 - type: precision_at_100 value: 0.73 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 12.168 - type: precision_at_5 value: 8.251999999999999 - type: recall_at_1 value: 21.677 - type: recall_at_10 value: 40.069 - type: recall_at_100 value: 58.077 - type: recall_at_1000 value: 77.97 - type: recall_at_3 value: 31.03 - type: recall_at_5 value: 34.838 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackTexRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 14.484 - type: map_at_10 value: 20.355 - type: map_at_100 value: 21.382 - type: map_at_1000 value: 21.511 - type: map_at_3 value: 18.448 - type: map_at_5 value: 19.451999999999998 - type: mrr_at_1 value: 17.584 - type: mrr_at_10 value: 23.825 - type: mrr_at_100 value: 24.704 - type: mrr_at_1000 value: 24.793000000000003 - type: mrr_at_3 value: 21.92 - type: mrr_at_5 value: 22.97 - type: ndcg_at_1 value: 
17.584 - type: ndcg_at_10 value: 24.315 - type: ndcg_at_100 value: 29.354999999999997 - type: ndcg_at_1000 value: 32.641999999999996 - type: ndcg_at_3 value: 20.802 - type: ndcg_at_5 value: 22.335 - type: precision_at_1 value: 17.584 - type: precision_at_10 value: 4.443 - type: precision_at_100 value: 0.8160000000000001 - type: precision_at_1000 value: 0.128 - type: precision_at_3 value: 9.807 - type: precision_at_5 value: 7.0889999999999995 - type: recall_at_1 value: 14.484 - type: recall_at_10 value: 32.804 - type: recall_at_100 value: 55.679 - type: recall_at_1000 value: 79.63 - type: recall_at_3 value: 22.976 - type: recall_at_5 value: 26.939 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackUnixRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 22.983999999999998 - type: map_at_10 value: 30.812 - type: map_at_100 value: 31.938 - type: map_at_1000 value: 32.056000000000004 - type: map_at_3 value: 28.449999999999996 - type: map_at_5 value: 29.542 - type: mrr_at_1 value: 27.145999999999997 - type: mrr_at_10 value: 34.782999999999994 - type: mrr_at_100 value: 35.699 - type: mrr_at_1000 value: 35.768 - type: mrr_at_3 value: 32.572 - type: mrr_at_5 value: 33.607 - type: ndcg_at_1 value: 27.145999999999997 - type: ndcg_at_10 value: 35.722 - type: ndcg_at_100 value: 40.964 - type: ndcg_at_1000 value: 43.598 - type: ndcg_at_3 value: 31.379 - type: ndcg_at_5 value: 32.924 - type: precision_at_1 value: 27.145999999999997 - type: precision_at_10 value: 6.063000000000001 - type: precision_at_100 value: 0.9730000000000001 - type: precision_at_1000 value: 0.13 - type: precision_at_3 value: 14.366000000000001 - type: precision_at_5 value: 9.776 - type: recall_at_1 value: 22.983999999999998 - type: recall_at_10 value: 46.876 - type: recall_at_100 value: 69.646 - type: recall_at_1000 value: 88.305 - type: recall_at_3 value: 34.471000000000004 - type: recall_at_5 value: 38.76 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWebmastersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 23.017000000000003 - type: map_at_10 value: 31.049 - type: map_at_100 value: 32.582 - type: map_at_1000 value: 32.817 - type: map_at_3 value: 28.303 - type: map_at_5 value: 29.854000000000003 - type: mrr_at_1 value: 27.866000000000003 - type: mrr_at_10 value: 35.56 - type: mrr_at_100 value: 36.453 - type: mrr_at_1000 value: 36.519 - type: mrr_at_3 value: 32.938 - type: mrr_at_5 value: 34.391 - type: ndcg_at_1 value: 27.866000000000003 - type: ndcg_at_10 value: 36.506 - type: ndcg_at_100 value: 42.344 - type: ndcg_at_1000 value: 45.213 - type: ndcg_at_3 value: 31.805 - type: ndcg_at_5 value: 33.933 - type: precision_at_1 value: 27.866000000000003 - type: precision_at_10 value: 7.016 - type: precision_at_100 value: 1.468 - type: precision_at_1000 value: 0.23900000000000002 - type: precision_at_3 value: 14.822 - type: precision_at_5 value: 10.791 - type: recall_at_1 value: 23.017000000000003 - type: recall_at_10 value: 47.053 - type: recall_at_100 value: 73.177 - type: recall_at_1000 value: 91.47800000000001 - type: recall_at_3 value: 33.675 - type: recall_at_5 value: 39.36 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWordpressRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 16.673 - type: map_at_10 value: 24.051000000000002 - type: map_at_100 value: 24.933 - type: map_at_1000 value: 25.06 - type: map_at_3 value: 21.446 - type: 
map_at_5 value: 23.064 - type: mrr_at_1 value: 18.115000000000002 - type: mrr_at_10 value: 25.927 - type: mrr_at_100 value: 26.718999999999998 - type: mrr_at_1000 value: 26.817999999999998 - type: mrr_at_3 value: 23.383000000000003 - type: mrr_at_5 value: 25.008999999999997 - type: ndcg_at_1 value: 18.115000000000002 - type: ndcg_at_10 value: 28.669 - type: ndcg_at_100 value: 33.282000000000004 - type: ndcg_at_1000 value: 36.481 - type: ndcg_at_3 value: 23.574 - type: ndcg_at_5 value: 26.340000000000003 - type: precision_at_1 value: 18.115000000000002 - type: precision_at_10 value: 4.769 - type: precision_at_100 value: 0.767 - type: precision_at_1000 value: 0.116 - type: precision_at_3 value: 10.351 - type: precision_at_5 value: 7.8 - type: recall_at_1 value: 16.673 - type: recall_at_10 value: 41.063 - type: recall_at_100 value: 62.851 - type: recall_at_1000 value: 86.701 - type: recall_at_3 value: 27.532 - type: recall_at_5 value: 34.076 - task: type: Retrieval dataset: type: climate-fever name: MTEB ClimateFEVER config: default split: test revision: None metrics: - type: map_at_1 value: 8.752 - type: map_at_10 value: 15.120000000000001 - type: map_at_100 value: 16.678 - type: map_at_1000 value: 16.854 - type: map_at_3 value: 12.603 - type: map_at_5 value: 13.918 - type: mrr_at_1 value: 19.283 - type: mrr_at_10 value: 29.145 - type: mrr_at_100 value: 30.281000000000002 - type: mrr_at_1000 value: 30.339 - type: mrr_at_3 value: 26.069 - type: mrr_at_5 value: 27.864 - type: ndcg_at_1 value: 19.283 - type: ndcg_at_10 value: 21.804000000000002 - type: ndcg_at_100 value: 28.576 - type: ndcg_at_1000 value: 32.063 - type: ndcg_at_3 value: 17.511 - type: ndcg_at_5 value: 19.112000000000002 - type: precision_at_1 value: 19.283 - type: precision_at_10 value: 6.873 - type: precision_at_100 value: 1.405 - type: precision_at_1000 value: 0.20500000000000002 - type: precision_at_3 value: 13.16 - type: precision_at_5 value: 10.189 - type: recall_at_1 value: 8.752 - type: recall_at_10 value: 27.004 - type: recall_at_100 value: 50.648 - type: recall_at_1000 value: 70.458 - type: recall_at_3 value: 16.461000000000002 - type: recall_at_5 value: 20.973 - task: type: Retrieval dataset: type: dbpedia-entity name: MTEB DBPedia config: default split: test revision: None metrics: - type: map_at_1 value: 6.81 - type: map_at_10 value: 14.056 - type: map_at_100 value: 18.961 - type: map_at_1000 value: 20.169 - type: map_at_3 value: 10.496 - type: map_at_5 value: 11.952 - type: mrr_at_1 value: 53.5 - type: mrr_at_10 value: 63.479 - type: mrr_at_100 value: 63.971999999999994 - type: mrr_at_1000 value: 63.993 - type: mrr_at_3 value: 61.541999999999994 - type: mrr_at_5 value: 62.778999999999996 - type: ndcg_at_1 value: 42.25 - type: ndcg_at_10 value: 31.471 - type: ndcg_at_100 value: 35.115 - type: ndcg_at_1000 value: 42.408 - type: ndcg_at_3 value: 35.458 - type: ndcg_at_5 value: 32.973 - type: precision_at_1 value: 53.5 - type: precision_at_10 value: 24.85 - type: precision_at_100 value: 7.79 - type: precision_at_1000 value: 1.599 - type: precision_at_3 value: 38.667 - type: precision_at_5 value: 31.55 - type: recall_at_1 value: 6.81 - type: recall_at_10 value: 19.344 - type: recall_at_100 value: 40.837 - type: recall_at_1000 value: 64.661 - type: recall_at_3 value: 11.942 - type: recall_at_5 value: 14.646 - task: type: Classification dataset: type: mteb/emotion name: MTEB EmotionClassification config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 
44.64499999999999 - type: f1 value: 39.39106911352714 - task: type: Retrieval dataset: type: fever name: MTEB FEVER config: default split: test revision: None metrics: - type: map_at_1 value: 48.196 - type: map_at_10 value: 61.404 - type: map_at_100 value: 61.846000000000004 - type: map_at_1000 value: 61.866 - type: map_at_3 value: 58.975 - type: map_at_5 value: 60.525 - type: mrr_at_1 value: 52.025 - type: mrr_at_10 value: 65.43299999999999 - type: mrr_at_100 value: 65.80799999999999 - type: mrr_at_1000 value: 65.818 - type: mrr_at_3 value: 63.146 - type: mrr_at_5 value: 64.64 - type: ndcg_at_1 value: 52.025 - type: ndcg_at_10 value: 67.889 - type: ndcg_at_100 value: 69.864 - type: ndcg_at_1000 value: 70.337 - type: ndcg_at_3 value: 63.315 - type: ndcg_at_5 value: 65.91799999999999 - type: precision_at_1 value: 52.025 - type: precision_at_10 value: 9.182 - type: precision_at_100 value: 1.027 - type: precision_at_1000 value: 0.108 - type: precision_at_3 value: 25.968000000000004 - type: precision_at_5 value: 17.006 - type: recall_at_1 value: 48.196 - type: recall_at_10 value: 83.885 - type: recall_at_100 value: 92.671 - type: recall_at_1000 value: 96.018 - type: recall_at_3 value: 71.59 - type: recall_at_5 value: 77.946 - task: type: Retrieval dataset: type: fiqa name: MTEB FiQA2018 config: default split: test revision: None metrics: - type: map_at_1 value: 15.193000000000001 - type: map_at_10 value: 25.168000000000003 - type: map_at_100 value: 27.017000000000003 - type: map_at_1000 value: 27.205000000000002 - type: map_at_3 value: 21.746 - type: map_at_5 value: 23.579 - type: mrr_at_1 value: 31.635999999999996 - type: mrr_at_10 value: 40.077 - type: mrr_at_100 value: 41.112 - type: mrr_at_1000 value: 41.160999999999994 - type: mrr_at_3 value: 37.937 - type: mrr_at_5 value: 39.18 - type: ndcg_at_1 value: 31.635999999999996 - type: ndcg_at_10 value: 32.298 - type: ndcg_at_100 value: 39.546 - type: ndcg_at_1000 value: 42.88 - type: ndcg_at_3 value: 29.221999999999998 - type: ndcg_at_5 value: 30.069000000000003 - type: precision_at_1 value: 31.635999999999996 - type: precision_at_10 value: 9.367 - type: precision_at_100 value: 1.645 - type: precision_at_1000 value: 0.22399999999999998 - type: precision_at_3 value: 20.01 - type: precision_at_5 value: 14.753 - type: recall_at_1 value: 15.193000000000001 - type: recall_at_10 value: 38.214999999999996 - type: recall_at_100 value: 65.95 - type: recall_at_1000 value: 85.85300000000001 - type: recall_at_3 value: 26.357000000000003 - type: recall_at_5 value: 31.319999999999997 - task: type: Retrieval dataset: type: jinaai/ger_da_lir name: MTEB GerDaLIR config: default split: test revision: None metrics: - type: map_at_1 value: 10.363 - type: map_at_10 value: 16.222 - type: map_at_100 value: 17.28 - type: map_at_1000 value: 17.380000000000003 - type: map_at_3 value: 14.054 - type: map_at_5 value: 15.203 - type: mrr_at_1 value: 11.644 - type: mrr_at_10 value: 17.625 - type: mrr_at_100 value: 18.608 - type: mrr_at_1000 value: 18.695999999999998 - type: mrr_at_3 value: 15.481 - type: mrr_at_5 value: 16.659 - type: ndcg_at_1 value: 11.628 - type: ndcg_at_10 value: 20.028000000000002 - type: ndcg_at_100 value: 25.505 - type: ndcg_at_1000 value: 28.288000000000004 - type: ndcg_at_3 value: 15.603 - type: ndcg_at_5 value: 17.642 - type: precision_at_1 value: 11.628 - type: precision_at_10 value: 3.5589999999999997 - type: precision_at_100 value: 0.664 - type: precision_at_1000 value: 0.092 - type: precision_at_3 value: 7.109999999999999 - type: precision_at_5 
value: 5.401 - type: recall_at_1 value: 10.363 - type: recall_at_10 value: 30.586000000000002 - type: recall_at_100 value: 56.43 - type: recall_at_1000 value: 78.142 - type: recall_at_3 value: 18.651 - type: recall_at_5 value: 23.493 - task: type: Retrieval dataset: type: deepset/germandpr name: MTEB GermanDPR config: default split: test revision: 5129d02422a66be600ac89cd3e8531b4f97d347d metrics: - type: map_at_1 value: 60.78 - type: map_at_10 value: 73.91499999999999 - type: map_at_100 value: 74.089 - type: map_at_1000 value: 74.09400000000001 - type: map_at_3 value: 71.87 - type: map_at_5 value: 73.37700000000001 - type: mrr_at_1 value: 60.78 - type: mrr_at_10 value: 73.91499999999999 - type: mrr_at_100 value: 74.089 - type: mrr_at_1000 value: 74.09400000000001 - type: mrr_at_3 value: 71.87 - type: mrr_at_5 value: 73.37700000000001 - type: ndcg_at_1 value: 60.78 - type: ndcg_at_10 value: 79.35600000000001 - type: ndcg_at_100 value: 80.077 - type: ndcg_at_1000 value: 80.203 - type: ndcg_at_3 value: 75.393 - type: ndcg_at_5 value: 78.077 - type: precision_at_1 value: 60.78 - type: precision_at_10 value: 9.59 - type: precision_at_100 value: 0.9900000000000001 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 28.52 - type: precision_at_5 value: 18.4 - type: recall_at_1 value: 60.78 - type: recall_at_10 value: 95.902 - type: recall_at_100 value: 99.024 - type: recall_at_1000 value: 100.0 - type: recall_at_3 value: 85.56099999999999 - type: recall_at_5 value: 92.0 - task: type: STS dataset: type: jinaai/german-STSbenchmark name: MTEB GermanSTSBenchmark config: default split: test revision: 49d9b423b996fea62b483f9ee6dfb5ec233515ca metrics: - type: cos_sim_pearson value: 88.49524420894356 - type: cos_sim_spearman value: 88.32407839427714 - type: euclidean_pearson value: 87.25098779877104 - type: euclidean_spearman value: 88.22738098593608 - type: manhattan_pearson value: 87.23872691839607 - type: manhattan_spearman value: 88.2002968380165 - task: type: Retrieval dataset: type: hotpotqa name: MTEB HotpotQA config: default split: test revision: None metrics: - type: map_at_1 value: 31.81 - type: map_at_10 value: 46.238 - type: map_at_100 value: 47.141 - type: map_at_1000 value: 47.213 - type: map_at_3 value: 43.248999999999995 - type: map_at_5 value: 45.078 - type: mrr_at_1 value: 63.619 - type: mrr_at_10 value: 71.279 - type: mrr_at_100 value: 71.648 - type: mrr_at_1000 value: 71.665 - type: mrr_at_3 value: 69.76599999999999 - type: mrr_at_5 value: 70.743 - type: ndcg_at_1 value: 63.619 - type: ndcg_at_10 value: 55.38999999999999 - type: ndcg_at_100 value: 58.80800000000001 - type: ndcg_at_1000 value: 60.331999999999994 - type: ndcg_at_3 value: 50.727 - type: ndcg_at_5 value: 53.284 - type: precision_at_1 value: 63.619 - type: precision_at_10 value: 11.668000000000001 - type: precision_at_100 value: 1.434 - type: precision_at_1000 value: 0.164 - type: precision_at_3 value: 32.001000000000005 - type: precision_at_5 value: 21.223 - type: recall_at_1 value: 31.81 - type: recall_at_10 value: 58.339 - type: recall_at_100 value: 71.708 - type: recall_at_1000 value: 81.85 - type: recall_at_3 value: 48.001 - type: recall_at_5 value: 53.059 - task: type: Classification dataset: type: mteb/imdb name: MTEB ImdbClassification config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 68.60640000000001 - type: ap value: 62.84296904042086 - type: f1 value: 68.50643633327537 - task: type: Reranking dataset: type: jinaai/miracl name: MTEB 
MIRACL config: default split: test revision: 8741c3b61cd36ed9ca1b3d4203543a41793239e2 metrics: - type: map value: 64.29704335389768 - type: mrr value: 72.11962197159565 - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (en) config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 89.3844049247606 - type: f1 value: 89.2124328528015 - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (de) config: de split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 88.36855452240067 - type: f1 value: 87.35458822097442 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (en) config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 66.48654810761514 - type: f1 value: 50.07229882504409 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (de) config: de split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 63.832065370526905 - type: f1 value: 46.283579383385806 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (de) config: de split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 63.89038332212509 - type: f1 value: 61.86279849685129 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (en) config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.11230665770006 - type: f1 value: 67.44780095350535 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (de) config: de split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 71.25084061869536 - type: f1 value: 71.43965023016408 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (en) config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.73907195696032 - type: f1 value: 73.69920814839061 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-p2p name: MTEB MedrxivClusteringP2P config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 31.32577306498249 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-s2s name: MTEB MedrxivClusteringS2S config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 28.759349326367783 - task: type: Reranking dataset: type: mteb/mind_small name: MTEB MindSmallReranking config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 30.401342674703425 - type: mrr value: 31.384379585660987 - task: type: Retrieval dataset: type: nfcorpus name: MTEB NFCorpus config: default split: test revision: None metrics: - type: map_at_1 value: 4.855 - type: map_at_10 value: 10.01 - type: map_at_100 value: 12.461 - type: map_at_1000 value: 13.776 - type: map_at_3 value: 7.252 - type: map_at_5 value: 8.679 - type: mrr_at_1 value: 41.176 - type: mrr_at_10 value: 49.323 - type: mrr_at_100 value: 49.954 - type: mrr_at_1000 value: 49.997 - type: mrr_at_3 value: 46.904 - type: mrr_at_5 value: 48.375 - type: 
ndcg_at_1 value: 39.318999999999996 - type: ndcg_at_10 value: 28.607 - type: ndcg_at_100 value: 26.554 - type: ndcg_at_1000 value: 35.731 - type: ndcg_at_3 value: 32.897999999999996 - type: ndcg_at_5 value: 31.53 - type: precision_at_1 value: 41.176 - type: precision_at_10 value: 20.867 - type: precision_at_100 value: 6.796 - type: precision_at_1000 value: 1.983 - type: precision_at_3 value: 30.547 - type: precision_at_5 value: 27.245 - type: recall_at_1 value: 4.855 - type: recall_at_10 value: 14.08 - type: recall_at_100 value: 28.188000000000002 - type: recall_at_1000 value: 60.07900000000001 - type: recall_at_3 value: 7.947 - type: recall_at_5 value: 10.786 - task: type: Retrieval dataset: type: nq name: MTEB NQ config: default split: test revision: None metrics: - type: map_at_1 value: 26.906999999999996 - type: map_at_10 value: 41.147 - type: map_at_100 value: 42.269 - type: map_at_1000 value: 42.308 - type: map_at_3 value: 36.638999999999996 - type: map_at_5 value: 39.285 - type: mrr_at_1 value: 30.359 - type: mrr_at_10 value: 43.607 - type: mrr_at_100 value: 44.454 - type: mrr_at_1000 value: 44.481 - type: mrr_at_3 value: 39.644 - type: mrr_at_5 value: 42.061 - type: ndcg_at_1 value: 30.330000000000002 - type: ndcg_at_10 value: 48.899 - type: ndcg_at_100 value: 53.612 - type: ndcg_at_1000 value: 54.51200000000001 - type: ndcg_at_3 value: 40.262 - type: ndcg_at_5 value: 44.787 - type: precision_at_1 value: 30.330000000000002 - type: precision_at_10 value: 8.323 - type: precision_at_100 value: 1.0959999999999999 - type: precision_at_1000 value: 0.11800000000000001 - type: precision_at_3 value: 18.395 - type: precision_at_5 value: 13.627 - type: recall_at_1 value: 26.906999999999996 - type: recall_at_10 value: 70.215 - type: recall_at_100 value: 90.61200000000001 - type: recall_at_1000 value: 97.294 - type: recall_at_3 value: 47.784 - type: recall_at_5 value: 58.251 - task: type: PairClassification dataset: type: paws-x name: MTEB PawsX config: default split: test revision: 8a04d940a42cd40658986fdd8e3da561533a3646 metrics: - type: cos_sim_accuracy value: 60.5 - type: cos_sim_ap value: 57.606096528877494 - type: cos_sim_f1 value: 62.24240307369892 - type: cos_sim_precision value: 45.27439024390244 - type: cos_sim_recall value: 99.55307262569832 - type: dot_accuracy value: 57.699999999999996 - type: dot_ap value: 51.289351057160616 - type: dot_f1 value: 62.25953130465197 - type: dot_precision value: 45.31568228105906 - type: dot_recall value: 99.4413407821229 - type: euclidean_accuracy value: 60.45 - type: euclidean_ap value: 57.616461421424034 - type: euclidean_f1 value: 62.313697657913416 - type: euclidean_precision value: 45.657826313052524 - type: euclidean_recall value: 98.10055865921787 - type: manhattan_accuracy value: 60.3 - type: manhattan_ap value: 57.580565271667325 - type: manhattan_f1 value: 62.24240307369892 - type: manhattan_precision value: 45.27439024390244 - type: manhattan_recall value: 99.55307262569832 - type: max_accuracy value: 60.5 - type: max_ap value: 57.616461421424034 - type: max_f1 value: 62.313697657913416 - task: type: Retrieval dataset: type: quora name: MTEB QuoraRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 70.21300000000001 - type: map_at_10 value: 84.136 - type: map_at_100 value: 84.796 - type: map_at_1000 value: 84.812 - type: map_at_3 value: 81.182 - type: map_at_5 value: 83.027 - type: mrr_at_1 value: 80.91000000000001 - type: mrr_at_10 value: 87.155 - type: mrr_at_100 value: 87.27000000000001 - type: 
mrr_at_1000 value: 87.271 - type: mrr_at_3 value: 86.158 - type: mrr_at_5 value: 86.828 - type: ndcg_at_1 value: 80.88 - type: ndcg_at_10 value: 87.926 - type: ndcg_at_100 value: 89.223 - type: ndcg_at_1000 value: 89.321 - type: ndcg_at_3 value: 85.036 - type: ndcg_at_5 value: 86.614 - type: precision_at_1 value: 80.88 - type: precision_at_10 value: 13.350000000000001 - type: precision_at_100 value: 1.5310000000000001 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 37.173 - type: precision_at_5 value: 24.476 - type: recall_at_1 value: 70.21300000000001 - type: recall_at_10 value: 95.12 - type: recall_at_100 value: 99.535 - type: recall_at_1000 value: 99.977 - type: recall_at_3 value: 86.833 - type: recall_at_5 value: 91.26100000000001 - task: type: Clustering dataset: type: mteb/reddit-clustering name: MTEB RedditClustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 47.754688783184875 - task: type: Clustering dataset: type: mteb/reddit-clustering-p2p name: MTEB RedditClusteringP2P config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 54.875736374329364 - task: type: Retrieval dataset: type: scidocs name: MTEB SCIDOCS config: default split: test revision: None metrics: - type: map_at_1 value: 3.773 - type: map_at_10 value: 9.447 - type: map_at_100 value: 11.1 - type: map_at_1000 value: 11.37 - type: map_at_3 value: 6.787 - type: map_at_5 value: 8.077 - type: mrr_at_1 value: 18.5 - type: mrr_at_10 value: 28.227000000000004 - type: mrr_at_100 value: 29.445 - type: mrr_at_1000 value: 29.515 - type: mrr_at_3 value: 25.2 - type: mrr_at_5 value: 27.055 - type: ndcg_at_1 value: 18.5 - type: ndcg_at_10 value: 16.29 - type: ndcg_at_100 value: 23.250999999999998 - type: ndcg_at_1000 value: 28.445999999999998 - type: ndcg_at_3 value: 15.376000000000001 - type: ndcg_at_5 value: 13.528 - type: precision_at_1 value: 18.5 - type: precision_at_10 value: 8.51 - type: precision_at_100 value: 1.855 - type: precision_at_1000 value: 0.311 - type: precision_at_3 value: 14.533 - type: precision_at_5 value: 12.0 - type: recall_at_1 value: 3.773 - type: recall_at_10 value: 17.282 - type: recall_at_100 value: 37.645 - type: recall_at_1000 value: 63.138000000000005 - type: recall_at_3 value: 8.853 - type: recall_at_5 value: 12.168 - task: type: STS dataset: type: mteb/sickr-sts name: MTEB SICK-R config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 85.32789517976525 - type: cos_sim_spearman value: 80.32750384145629 - type: euclidean_pearson value: 81.5025131452508 - type: euclidean_spearman value: 80.24797115147175 - type: manhattan_pearson value: 81.51634463412002 - type: manhattan_spearman value: 80.24614721495055 - task: type: STS dataset: type: mteb/sts12-sts name: MTEB STS12 config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 88.47050448992432 - type: cos_sim_spearman value: 80.58919997743621 - type: euclidean_pearson value: 85.83258918113664 - type: euclidean_spearman value: 80.97441389240902 - type: manhattan_pearson value: 85.7798262013878 - type: manhattan_spearman value: 80.97208703064196 - task: type: STS dataset: type: mteb/sts13-sts name: MTEB STS13 config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 85.95341439711532 - type: cos_sim_spearman value: 
86.59127484634989 - type: euclidean_pearson value: 85.57850603454227 - type: euclidean_spearman value: 86.47130477363419 - type: manhattan_pearson value: 85.59387925447652 - type: manhattan_spearman value: 86.50665427391583 - task: type: STS dataset: type: mteb/sts14-sts name: MTEB STS14 config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 85.39810909161844 - type: cos_sim_spearman value: 82.98595295546008 - type: euclidean_pearson value: 84.04681129969951 - type: euclidean_spearman value: 82.98197460689866 - type: manhattan_pearson value: 83.9918798171185 - type: manhattan_spearman value: 82.91148131768082 - task: type: STS dataset: type: mteb/sts15-sts name: MTEB STS15 config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 88.02072712147692 - type: cos_sim_spearman value: 88.78821332623012 - type: euclidean_pearson value: 88.12132045572747 - type: euclidean_spearman value: 88.74273451067364 - type: manhattan_pearson value: 88.05431550059166 - type: manhattan_spearman value: 88.67610233020723 - task: type: STS dataset: type: mteb/sts16-sts name: MTEB STS16 config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 82.96134704624787 - type: cos_sim_spearman value: 84.44062976314666 - type: euclidean_pearson value: 84.03642536310323 - type: euclidean_spearman value: 84.4535014579785 - type: manhattan_pearson value: 83.92874228901483 - type: manhattan_spearman value: 84.33634314951631 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (en-de) config: en-de split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 87.3154168064887 - type: cos_sim_spearman value: 86.72393652571682 - type: euclidean_pearson value: 86.04193246174164 - type: euclidean_spearman value: 86.30482896608093 - type: manhattan_pearson value: 85.95524084651859 - type: manhattan_spearman value: 86.06031431994282 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (en-en) config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 89.91079682750804 - type: cos_sim_spearman value: 89.30961836617064 - type: euclidean_pearson value: 88.86249564158628 - type: euclidean_spearman value: 89.04772899592396 - type: manhattan_pearson value: 88.85579791315043 - type: manhattan_spearman value: 88.94190462541333 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (en) config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 67.00558145551088 - type: cos_sim_spearman value: 67.96601170393878 - type: euclidean_pearson value: 67.87627043214336 - type: euclidean_spearman value: 66.76402572303859 - type: manhattan_pearson value: 67.88306560555452 - type: manhattan_spearman value: 66.6273862035506 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (de) config: de split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 50.83759332748726 - type: cos_sim_spearman value: 59.066344562858006 - type: euclidean_pearson value: 50.08955848154131 - type: euclidean_spearman value: 58.36517305855221 - type: manhattan_pearson value: 50.05257267223111 - type: manhattan_spearman value: 58.37570252804986 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts 
name: MTEB STS22 (de-en) config: de-en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 59.22749007956492 - type: cos_sim_spearman value: 55.97282077657827 - type: euclidean_pearson value: 62.10661533695752 - type: euclidean_spearman value: 53.62780854854067 - type: manhattan_pearson value: 62.37138085709719 - type: manhattan_spearman value: 54.17556356828155 - task: type: STS dataset: type: mteb/stsbenchmark-sts name: MTEB STSBenchmark config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 87.91145397065878 - type: cos_sim_spearman value: 88.13960018389005 - type: euclidean_pearson value: 87.67618876224006 - type: euclidean_spearman value: 87.99119480810556 - type: manhattan_pearson value: 87.67920297334753 - type: manhattan_spearman value: 87.99113250064492 - task: type: Reranking dataset: type: mteb/scidocs-reranking name: MTEB SciDocsRR config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 78.09133563707582 - type: mrr value: 93.2415288052543 - task: type: Retrieval dataset: type: scifact name: MTEB SciFact config: default split: test revision: None metrics: - type: map_at_1 value: 47.760999999999996 - type: map_at_10 value: 56.424 - type: map_at_100 value: 57.24399999999999 - type: map_at_1000 value: 57.278 - type: map_at_3 value: 53.68000000000001 - type: map_at_5 value: 55.442 - type: mrr_at_1 value: 50.666999999999994 - type: mrr_at_10 value: 58.012 - type: mrr_at_100 value: 58.736 - type: mrr_at_1000 value: 58.769000000000005 - type: mrr_at_3 value: 56.056 - type: mrr_at_5 value: 57.321999999999996 - type: ndcg_at_1 value: 50.666999999999994 - type: ndcg_at_10 value: 60.67700000000001 - type: ndcg_at_100 value: 64.513 - type: ndcg_at_1000 value: 65.62400000000001 - type: ndcg_at_3 value: 56.186 - type: ndcg_at_5 value: 58.692 - type: precision_at_1 value: 50.666999999999994 - type: precision_at_10 value: 8.200000000000001 - type: precision_at_100 value: 1.023 - type: precision_at_1000 value: 0.11199999999999999 - type: precision_at_3 value: 21.889 - type: precision_at_5 value: 14.866999999999999 - type: recall_at_1 value: 47.760999999999996 - type: recall_at_10 value: 72.006 - type: recall_at_100 value: 89.767 - type: recall_at_1000 value: 98.833 - type: recall_at_3 value: 60.211000000000006 - type: recall_at_5 value: 66.3 - task: type: PairClassification dataset: type: mteb/sprintduplicatequestions-pairclassification name: MTEB SprintDuplicateQuestions config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.79009900990098 - type: cos_sim_ap value: 94.86690691995835 - type: cos_sim_f1 value: 89.37875751503007 - type: cos_sim_precision value: 89.5582329317269 - type: cos_sim_recall value: 89.2 - type: dot_accuracy value: 99.76336633663367 - type: dot_ap value: 94.26453740761586 - type: dot_f1 value: 88.00783162016641 - type: dot_precision value: 86.19367209971237 - type: dot_recall value: 89.9 - type: euclidean_accuracy value: 99.7940594059406 - type: euclidean_ap value: 94.85459757524379 - type: euclidean_f1 value: 89.62779156327544 - type: euclidean_precision value: 88.96551724137932 - type: euclidean_recall value: 90.3 - type: manhattan_accuracy value: 99.79009900990098 - type: manhattan_ap value: 94.76971336654465 - type: manhattan_f1 value: 89.35323383084577 - type: manhattan_precision value: 88.91089108910892 - type: manhattan_recall value: 
89.8 - type: max_accuracy value: 99.7940594059406 - type: max_ap value: 94.86690691995835 - type: max_f1 value: 89.62779156327544 - task: type: Clustering dataset: type: mteb/stackexchange-clustering name: MTEB StackExchangeClustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 55.38197670064987 - task: type: Clustering dataset: type: mteb/stackexchange-clustering-p2p name: MTEB StackExchangeClusteringP2P config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 33.08330158937971 - task: type: Reranking dataset: type: mteb/stackoverflowdupquestions-reranking name: MTEB StackOverflowDupQuestions config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 49.50367079063226 - type: mrr value: 50.30444943128768 - task: type: Summarization dataset: type: mteb/summeval name: MTEB SummEval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 30.37739520909561 - type: cos_sim_spearman value: 31.548500943973913 - type: dot_pearson value: 29.983610104303 - type: dot_spearman value: 29.90185869098618 - task: type: Retrieval dataset: type: trec-covid name: MTEB TRECCOVID config: default split: test revision: None metrics: - type: map_at_1 value: 0.198 - type: map_at_10 value: 1.5810000000000002 - type: map_at_100 value: 9.064 - type: map_at_1000 value: 22.161 - type: map_at_3 value: 0.536 - type: map_at_5 value: 0.8370000000000001 - type: mrr_at_1 value: 80.0 - type: mrr_at_10 value: 86.75 - type: mrr_at_100 value: 86.799 - type: mrr_at_1000 value: 86.799 - type: mrr_at_3 value: 85.0 - type: mrr_at_5 value: 86.5 - type: ndcg_at_1 value: 73.0 - type: ndcg_at_10 value: 65.122 - type: ndcg_at_100 value: 51.853 - type: ndcg_at_1000 value: 47.275 - type: ndcg_at_3 value: 66.274 - type: ndcg_at_5 value: 64.826 - type: precision_at_1 value: 80.0 - type: precision_at_10 value: 70.19999999999999 - type: precision_at_100 value: 53.480000000000004 - type: precision_at_1000 value: 20.946 - type: precision_at_3 value: 71.333 - type: precision_at_5 value: 70.0 - type: recall_at_1 value: 0.198 - type: recall_at_10 value: 1.884 - type: recall_at_100 value: 12.57 - type: recall_at_1000 value: 44.208999999999996 - type: recall_at_3 value: 0.5890000000000001 - type: recall_at_5 value: 0.95 - task: type: Clustering dataset: type: slvnwhrl/tenkgnad-clustering-p2p name: MTEB TenKGnadClusteringP2P config: default split: test revision: 5c59e41555244b7e45c9a6be2d720ab4bafae558 metrics: - type: v_measure value: 42.84199261133083 - task: type: Clustering dataset: type: slvnwhrl/tenkgnad-clustering-s2s name: MTEB TenKGnadClusteringS2S config: default split: test revision: 6cddbe003f12b9b140aec477b583ac4191f01786 metrics: - type: v_measure value: 23.689557114798838 - task: type: Retrieval dataset: type: webis-touche2020 name: MTEB Touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 1.941 - type: map_at_10 value: 8.222 - type: map_at_100 value: 14.277999999999999 - type: map_at_1000 value: 15.790000000000001 - type: map_at_3 value: 4.4670000000000005 - type: map_at_5 value: 5.762 - type: mrr_at_1 value: 24.490000000000002 - type: mrr_at_10 value: 38.784 - type: mrr_at_100 value: 39.724 - type: mrr_at_1000 value: 39.724 - type: mrr_at_3 value: 33.333 - type: mrr_at_5 value: 37.415 - type: ndcg_at_1 value: 22.448999999999998 - type: ndcg_at_10 value: 21.026 - type: 
ndcg_at_100 value: 33.721000000000004 - type: ndcg_at_1000 value: 45.045 - type: ndcg_at_3 value: 20.053 - type: ndcg_at_5 value: 20.09 - type: precision_at_1 value: 24.490000000000002 - type: precision_at_10 value: 19.796 - type: precision_at_100 value: 7.469 - type: precision_at_1000 value: 1.48 - type: precision_at_3 value: 21.769 - type: precision_at_5 value: 21.224 - type: recall_at_1 value: 1.941 - type: recall_at_10 value: 14.915999999999999 - type: recall_at_100 value: 46.155 - type: recall_at_1000 value: 80.664 - type: recall_at_3 value: 5.629 - type: recall_at_5 value: 8.437 - task: type: Classification dataset: type: mteb/toxic_conversations_50k name: MTEB ToxicConversationsClassification config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 69.64800000000001 - type: ap value: 12.914826731261094 - type: f1 value: 53.05213503422915 - task: type: Classification dataset: type: mteb/tweet_sentiment_extraction name: MTEB TweetSentimentExtractionClassification config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 60.427277872099594 - type: f1 value: 60.78292007556828 - task: type: Clustering dataset: type: mteb/twentynewsgroups-clustering name: MTEB TwentyNewsgroupsClustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 40.48134168406559 - task: type: PairClassification dataset: type: mteb/twittersemeval2015-pairclassification name: MTEB TwitterSemEval2015 config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 84.79465935506944 - type: cos_sim_ap value: 70.24589055290592 - type: cos_sim_f1 value: 65.0994575045208 - type: cos_sim_precision value: 63.76518218623482 - type: cos_sim_recall value: 66.49076517150397 - type: dot_accuracy value: 84.63968528342374 - type: dot_ap value: 69.84683095084355 - type: dot_f1 value: 64.50606169727523 - type: dot_precision value: 59.1719885487778 - type: dot_recall value: 70.89709762532982 - type: euclidean_accuracy value: 84.76485664898374 - type: euclidean_ap value: 70.20556438685551 - type: euclidean_f1 value: 65.06796614516543 - type: euclidean_precision value: 63.29840319361277 - type: euclidean_recall value: 66.93931398416886 - type: manhattan_accuracy value: 84.72313286046374 - type: manhattan_ap value: 70.17151475534308 - type: manhattan_f1 value: 65.31379180759113 - type: manhattan_precision value: 62.17505366086334 - type: manhattan_recall value: 68.7862796833773 - type: max_accuracy value: 84.79465935506944 - type: max_ap value: 70.24589055290592 - type: max_f1 value: 65.31379180759113 - task: type: PairClassification dataset: type: mteb/twitterurlcorpus-pairclassification name: MTEB TwitterURLCorpus config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 88.95874568246207 - type: cos_sim_ap value: 85.82517548264127 - type: cos_sim_f1 value: 78.22288041466125 - type: cos_sim_precision value: 75.33875338753387 - type: cos_sim_recall value: 81.33661841700031 - type: dot_accuracy value: 88.836496293709 - type: dot_ap value: 85.53430720252186 - type: dot_f1 value: 78.10616085869725 - type: dot_precision value: 74.73269555430501 - type: dot_recall value: 81.79858330766862 - type: euclidean_accuracy value: 88.92769821865176 - type: euclidean_ap value: 85.65904346964223 - type: euclidean_f1 value: 77.98774074208407 - type: 
euclidean_precision value: 73.72282795035315 - type: euclidean_recall value: 82.77640899291654 - type: manhattan_accuracy value: 88.86366282454303 - type: manhattan_ap value: 85.61599642231819 - type: manhattan_f1 value: 78.01480509061737 - type: manhattan_precision value: 74.10460685833044 - type: manhattan_recall value: 82.36064059131506 - type: max_accuracy value: 88.95874568246207 - type: max_ap value: 85.82517548264127 - type: max_f1 value: 78.22288041466125 - task: type: Retrieval dataset: type: None name: MTEB WikiCLIR config: default split: test revision: None metrics: - type: map_at_1 value: 3.9539999999999997 - type: map_at_10 value: 7.407 - type: map_at_100 value: 8.677999999999999 - type: map_at_1000 value: 9.077 - type: map_at_3 value: 5.987 - type: map_at_5 value: 6.6979999999999995 - type: mrr_at_1 value: 35.65 - type: mrr_at_10 value: 45.097 - type: mrr_at_100 value: 45.83 - type: mrr_at_1000 value: 45.871 - type: mrr_at_3 value: 42.63 - type: mrr_at_5 value: 44.104 - type: ndcg_at_1 value: 29.215000000000003 - type: ndcg_at_10 value: 22.694 - type: ndcg_at_100 value: 22.242 - type: ndcg_at_1000 value: 27.069 - type: ndcg_at_3 value: 27.641 - type: ndcg_at_5 value: 25.503999999999998 - type: precision_at_1 value: 35.65 - type: precision_at_10 value: 12.795000000000002 - type: precision_at_100 value: 3.354 - type: precision_at_1000 value: 0.743 - type: precision_at_3 value: 23.403 - type: precision_at_5 value: 18.474 - type: recall_at_1 value: 3.9539999999999997 - type: recall_at_10 value: 11.301 - type: recall_at_100 value: 22.919999999999998 - type: recall_at_1000 value: 40.146 - type: recall_at_3 value: 7.146 - type: recall_at_5 value: 8.844000000000001 - task: type: Retrieval dataset: type: jinaai/xmarket_de name: MTEB XMarket config: default split: test revision: 2336818db4c06570fcdf263e1bcb9993b786f67a metrics: - type: map_at_1 value: 4.872 - type: map_at_10 value: 10.658 - type: map_at_100 value: 13.422999999999998 - type: map_at_1000 value: 14.245 - type: map_at_3 value: 7.857 - type: map_at_5 value: 9.142999999999999 - type: mrr_at_1 value: 16.744999999999997 - type: mrr_at_10 value: 24.416 - type: mrr_at_100 value: 25.432 - type: mrr_at_1000 value: 25.502999999999997 - type: mrr_at_3 value: 22.096 - type: mrr_at_5 value: 23.421 - type: ndcg_at_1 value: 16.695999999999998 - type: ndcg_at_10 value: 18.66 - type: ndcg_at_100 value: 24.314 - type: ndcg_at_1000 value: 29.846 - type: ndcg_at_3 value: 17.041999999999998 - type: ndcg_at_5 value: 17.585 - type: precision_at_1 value: 16.695999999999998 - type: precision_at_10 value: 10.374 - type: precision_at_100 value: 3.988 - type: precision_at_1000 value: 1.1860000000000002 - type: precision_at_3 value: 14.21 - type: precision_at_5 value: 12.623000000000001 - type: recall_at_1 value: 4.872 - type: recall_at_10 value: 18.624 - type: recall_at_100 value: 40.988 - type: recall_at_1000 value: 65.33 - type: recall_at_3 value: 10.162 - type: recall_at_5 value: 13.517999999999999
---

<br><br>

<p align="center">
<img src="https://aeiljuispo.cloudimg.io/v7/https://cdn-uploads.huggingface.co/production/uploads/603763514de52ff951d89793/AFoybzd5lpBQXEBrQHuTt.png?w=200&h=200&f=face" alt="Jina AI logo: Jina AI is your Portal to Multimodal AI" width="150px">
</p>

<p align="center">
<b>The text embedding set trained by <a href="https://jina.ai/"><b>Jina AI</b></a>.</b>
</p>

## Quick Start

The easiest way to start using `jina-embeddings-v2-base-de` is to use Jina AI's [Embedding API](https://jina.ai/embeddings/).
## Intended Usage & Model Info

`jina-embeddings-v2-base-de` is a German/English bilingual text **embedding model** supporting an **8192-token sequence length**. It is based on a BERT architecture (JinaBERT) that supports the symmetric bidirectional variant of [ALiBi](https://arxiv.org/abs/2108.12409) to allow longer sequence lengths. We have designed it for high performance in monolingual and cross-lingual applications and trained it specifically to support mixed German-English input without bias. Additionally, we provide the following embedding models:

`jina-embeddings-v2-base-de` ist ein zweisprachiges **Text Embedding Modell** für Deutsch und Englisch, welches Texteingaben mit einer Länge von bis zu **8192 Token unterstützt**. Es basiert auf der adaptierten BERT-Modell-Architektur JinaBERT, welche mithilfe einer symmetrischen Variante von [ALiBi](https://arxiv.org/abs/2108.12409) längere Eingabetexte erlaubt. Wir haben das Modell für hohe Performance in einsprachigen und cross-lingualen Anwendungen entwickelt und speziell darauf trainiert, gemischte deutsch-englische Eingaben ohne einen Bias zu kodieren. Des Weiteren stellen wir folgende Embedding-Modelle bereit:

- [`jina-embeddings-v2-small-en`](https://huggingface.co/jinaai/jina-embeddings-v2-small-en): 33 million parameters.
- [`jina-embeddings-v2-base-en`](https://huggingface.co/jinaai/jina-embeddings-v2-base-en): 137 million parameters.
- [`jina-embeddings-v2-base-zh`](https://huggingface.co/jinaai/jina-embeddings-v2-base-zh): 161 million parameters, Chinese-English bilingual embeddings.
- [`jina-embeddings-v2-base-de`](https://huggingface.co/jinaai/jina-embeddings-v2-base-de): 161 million parameters, German-English bilingual embeddings **(you are here)**.
- [`jina-embeddings-v2-base-es`](): Spanish-English bilingual embeddings (soon).
- [`jina-embeddings-v2-base-code`](https://huggingface.co/jinaai/jina-embeddings-v2-base-code): 161 million parameters, code embeddings.

## Data & Parameters

The data and training details are described in this [technical report](https://arxiv.org/abs/2402.17016).

## Usage

**<details><summary>Please apply mean pooling when integrating the model.</summary>**
<p>

### Why mean pooling?

Mean pooling takes all token embeddings from the model output and averages them at the sentence/paragraph level. It has proven to be one of the most effective ways to produce high-quality sentence embeddings. We offer an `encode` function to deal with this.
However, if you would like to do it without using the default `encode` function:

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

sentences = ['How is the weather today?', 'What is the current weather like today?']

tokenizer = AutoTokenizer.from_pretrained('jinaai/jina-embeddings-v2-base-de')
model = AutoModel.from_pretrained('jinaai/jina-embeddings-v2-base-de', trust_remote_code=True, torch_dtype=torch.bfloat16)

encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

with torch.no_grad():
    model_output = model(**encoded_input)

embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
embeddings = F.normalize(embeddings, p=2, dim=1)
```

</p>
</details>

You can use Jina Embedding models directly from the transformers package.

```python
!pip install transformers
import torch
from transformers import AutoModel
from numpy.linalg import norm

cos_sim = lambda a, b: (a @ b.T) / (norm(a) * norm(b))

model = AutoModel.from_pretrained('jinaai/jina-embeddings-v2-base-de', trust_remote_code=True, torch_dtype=torch.bfloat16)
embeddings = model.encode(['How is the weather today?', 'Wie ist das Wetter heute?'])
print(cos_sim(embeddings[0], embeddings[1]))
```

If you only need to handle shorter sequences, such as 2k tokens, pass the `max_length` parameter to the `encode` function:

```python
embeddings = model.encode(
    ['Very long ... document'],
    max_length=2048
)
```

As of its latest release (v2.3.0), sentence-transformers also supports Jina embeddings (please make sure that you are logged in to Hugging Face as well):

```python
!pip install -U sentence-transformers
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

model = SentenceTransformer(
    "jinaai/jina-embeddings-v2-base-de",  # switch to en/zh for English or Chinese
    trust_remote_code=True
)

# control your input sequence length up to 8192
model.max_seq_length = 1024

embeddings = model.encode([
    'How is the weather today?',
    'Wie ist das Wetter heute?'
])
print(cos_sim(embeddings[0], embeddings[1]))
```

## Alternatives to Using Transformers Package

1. _Managed SaaS_: Get started with a free key on Jina AI's [Embedding API](https://jina.ai/embeddings/).
2. _Private and high-performance deployment_: Get started by picking from our suite of models and deploy them on [AWS Sagemaker](https://aws.amazon.com/marketplace/seller-profile?id=seller-stch2ludm6vgy).

## Benchmark Results

We evaluated our bilingual model on all German and English evaluation tasks available on the [MTEB benchmark](https://huggingface.co/blog/mteb). In addition, we evaluated the model against several other German, English, and multilingual models on additional German evaluation tasks:

<img src="de_evaluation_results.png" width="780px">

## Use Jina Embeddings for RAG

According to the latest blog post from [LlamaIndex](https://blog.llamaindex.ai/boosting-rag-picking-the-best-embedding-reranker-models-42d079022e83),

> In summary, to achieve the peak performance in both hit rate and MRR, the combination of OpenAI or JinaAI-Base embeddings with the CohereRerank/bge-reranker-large reranker stands out.
<img src="https://miro.medium.com/v2/resize:fit:4800/format:webp/1*ZP2RVejCZovF3FDCg-Bx3A.png" width="780px"> ## Contact Join our [Discord community](https://discord.jina.ai) and chat with other community members about ideas. ## Citation If you find Jina Embeddings useful in your research, please cite the following paper: ``` @article{mohr2024multi, title={Multi-Task Contrastive Learning for 8192-Token Bilingual Text Embeddings}, author={Mohr, Isabelle and Krimmel, Markus and Sturua, Saba and Akram, Mohammad Kalim and Koukounas, Andreas and G{\"u}nther, Michael and Mastrapas, Georgios and Ravishankar, Vinit and Mart{\'\i}nez, Joan Fontanals and Wang, Feng and others}, journal={arXiv preprint arXiv:2402.17016}, year={2024} } ```
city96/FLUX.1-schnell-gguf
city96
"2024-08-18T08:13:37Z"
58,265
107
gguf
[ "gguf", "text-to-image", "image-generation", "flux", "base_model:black-forest-labs/FLUX.1-schnell", "base_model:quantized:black-forest-labs/FLUX.1-schnell", "license:apache-2.0", "region:us" ]
text-to-image
"2024-08-15T02:30:12Z"
---
base_model: black-forest-labs/FLUX.1-schnell
library_name: gguf
license: apache-2.0
quantized_by: city96
tags:
- text-to-image
- image-generation
- flux
---

This is a direct GGUF conversion of [black-forest-labs/FLUX.1-schnell](https://huggingface.co/black-forest-labs/FLUX.1-schnell).

The model files can be used with the [ComfyUI-GGUF](https://github.com/city96/ComfyUI-GGUF) custom node.

Place model files in `ComfyUI/models/unet` - see the GitHub readme for further install instructions.

Please refer to [this chart](https://github.com/ggerganov/llama.cpp/blob/master/examples/perplexity/README.md#llama-3-8b-scoreboard) for a basic overview of quantization types.
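As a convenience, here is a minimal sketch of fetching a single quantized file straight into the ComfyUI folder with `huggingface_hub`. The `.gguf` filename below is a hypothetical placeholder, not taken from this card - pick an actual file from the repo's file list.

```python
# A minimal sketch, not from the original card: download one quantization
# level directly into ComfyUI's unet folder.
from huggingface_hub import hf_hub_download

hf_hub_download(
    repo_id="city96/FLUX.1-schnell-gguf",
    filename="flux1-schnell-Q4_K_S.gguf",  # hypothetical name; substitute a real file
    local_dir="ComfyUI/models/unet",
)
```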
davidkim205/iris-7b
davidkim205
"2024-04-04T09:01:24Z"
58,190
11
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "finetuned", "translation", "en", "ko", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
translation
"2024-03-25T08:16:09Z"
---
library_name: transformers
license: apache-2.0
language:
- en
- ko
pipeline_tag: translation
tags:
- finetuned
inference: true
widget:
- messages:
  - role: user
    content: 다음 문장을 한글로 번역하세요. Iris is a model for Korean-English sentence translation based on deep learning.
---

# iris

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64241c3d774cc340797429fc/3had0ckV_asjoJB9fKKKy.jpeg)

Iris is a model for Korean-English sentence translation based on deep learning. It translates Korean sentences into English, or English sentences into Korean, using advanced natural language processing technology. The model is trained to understand the grammar, vocabulary, and context of each language and to generate appropriate translations. Iris provides efficient and accurate translation and can be used in a variety of applications.

## Model Details

* **Model Developers** : davidkim (Changyeon Kim)
* **Repository** : will be updated soon.
* **base model** : mistralai/Mistral-7B-v0.2
* **dataset** : translation_v3_346k

## usage

```
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "davidkim205/iris-7b"
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained(repo)

def generate(prompt):
    encoding = tokenizer(
        prompt,
        return_tensors='pt',
        return_token_type_ids=False
    ).to("cuda")
    gen_tokens = model.generate(
        **encoding,
        max_new_tokens=2048,
        temperature=1.0,
        num_beams=5,
    )
    prompt_end_size = encoding.input_ids.shape[1]
    result = tokenizer.decode(gen_tokens[0, prompt_end_size:])
    return result

def translate_ko2en(text):
    prompt = f"[INST] 다음 문장을 영어로 번역하세요.{text} [/INST]"
    return generate(prompt)

def translate_en2ko(text):
    prompt = f"[INST] 다음 문장을 한글로 번역하세요.{text} [/INST]"
    return generate(prompt)

def main():
    while True:
        text = input('>')
        en_text = translate_ko2en(text)
        ko_text = translate_en2ko(en_text)
        print('en_text', en_text)
        print('ko_text', ko_text)

if __name__ == "__main__":
    main()
```

output

```
$ python iris_test.py
Downloading shards: 100%|█████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 4.72it/s]
Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████████████████████████| 3/3 [00:02<00:00, 1.07it/s]
>아이리스는 딥러닝을 기반으로 한 한-영어 문장 번역을 위한 모델이다.
en_text Iris is a model for Korean-to-English sentence translation based on deep learning.</s>
ko_text 아이리스는 딥러닝을 기반으로 한 한국어-영어 문장 번역을 위한 모델이다.</s>
```

## template

### ko -> en

```
[INST] 다음 문장을 영어로 번역하세요.{text} [/INST]
```

### en -> ko

```
[INST] 다음 문장을 한글로 번역하세요.{text} [/INST]
```

## dataset info : translation_v3_346k

The dataset is not made public due to licensing issues.
| src | ratio | description |
| ------------------------------------------ | ----- | ------------------------------------------------------------ |
| aihub-MTPE | 5.56% | Machine-translation post-editing quality verification dataset |
| aihub-techsci2 | 5.56% | Korean-English translation dataset for technology and science fields such as ICT and electrical/electronics |
| aihub-expertise | 5.56% | Korean-English translation dataset for specialized fields such as medicine, finance, and sports |
| aihub-humanities | 5.56% | Korean-English translation dataset for the humanities |
| sharegpt-deepl-ko-translation | 5.56% | ShareGPT dataset converted from Q&A format into Korean-English translation format |
| aihub-MT-new-corpus | 5.56% | Korean-English translation dataset for building machine-translation apps |
| aihub-socialsci | 5.56% | Korean-English translation dataset for social science fields such as law, education, and economics |
| korean-parallel-corpora | 5.56% | Korean-English parallel translation dataset |
| aihub-parallel-translation | 5.56% | Korean-English translation dataset by utterance type and domain |
| aihub-food | 5.56% | English-Korean translation dataset for the food domain |
| aihub-techsci | 5.56% | Korean-English translation dataset for technology and science fields such as ICT and electrical/electronics |
| para_pat | 5.56% | English-Korean subset of the ParaPat dataset |
| aihub-speechtype-based-machine-translation | 5.56% | English-Korean translation dataset by utterance type |
| koopus100 | 5.56% | English-Korean subset of the OPUS-100 dataset |
| aihub-basicsci | 5.56% | Korean-English translation dataset for basic science fields such as mathematics and physics |
| aihub-broadcast-content | 5.56% | Korean-English translation dataset for broadcast content |
| aihub-patent | 5.56% | English-Korean translation dataset for patent specifications |
| aihub-colloquial | 5.56% | Colloquial Korean-English translation dataset including neologisms and abbreviations |

Please refer to the URL below for information on AI Hub licensing.

https://aihub.or.kr/partcptnmlrd/inqry/view.do?currMenu=144&topMenu=104

## Evaluation

https://github.com/davidkim205/translation

| TYPE | Model | BLEU | SBLEU | Duplicate | Length Exceeds |
| ----------- | :---------------------------------- | ---- | ----- | --------- | -------------- |
| HuggingFace | facebook/nllb-200-distilled-1.3B | 0.26 | 0.30 | 1 | 3 |
| HuggingFace | jbochi/madlad400-10b-mt | 0.29 | 0.38 | 3 | 6 |
| HuggingFace | Unbabel/TowerInstruct-7B-v0.1 | 0.32 | 0.39 | 1 | 9 |
| HuggingFace | squarelike/Gugugo-koen-7B-V1.1 | 0.32 | 0.36 | 1 | 3 |
| HuggingFace | maywell/Synatra-7B-v0.3-Translation | 0.35 | 0.41 | 1 | 2 |
| Cloud | deepl | 0.39 | 0.45 | 0 | 1 |
| Cloud | azure | 0.40 | 0.49 | 0 | 3 |
| Cloud | google | 0.40 | 0.49 | 0 | 2 |
| Cloud | papago | 0.43 | 0.51 | 0 | 3 |
| HuggingFace | davidkim205/iris-7b (**ours**) | 0.40 | 0.43 | 0 | 3 |
trl-internal-testing/tiny-random-GPT2LMHeadModel
trl-internal-testing
"2023-01-23T10:38:18Z"
58,039
1
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2022-12-20T10:34:11Z"
Entry not found
medicalai/ClinicalBERT
medicalai
"2023-09-15T08:46:54Z"
57,954
170
transformers
[ "transformers", "pytorch", "distilbert", "fill-mask", "medical", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2023-03-19T15:04:41Z"
---
tags:
- medical
---

# ClinicalBERT

This model card describes the ClinicalBERT model, which was trained on a large multicenter dataset we constructed, comprising a corpus of 1.2B words covering diverse diseases. We then used a large-scale corpus of EHRs from over 3 million patient records to fine-tune the base language model.

## Pretraining Data

The ClinicalBERT model was trained on a large multicenter dataset we constructed, with a corpus of 1.2B words covering diverse diseases.

## Model Pretraining

### Pretraining Procedures

ClinicalBERT was initialized from BERT and then trained with the masked language modeling objective: given a piece of text, some tokens are randomly replaced with [MASK], a special masking token, and the model is required to predict the original tokens from the surrounding context.

### Pretraining Hyperparameters

We used a batch size of 32, a maximum sequence length of 256, and a learning rate of 5e-5 for pre-training our models.

## How to use the model

Load the model via the transformers library:

```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("medicalai/ClinicalBERT")
model = AutoModel.from_pretrained("medicalai/ClinicalBERT")
```

## Citation

Please cite this article: Wang, G., Liu, X., Ying, Z. et al. Optimized glycemic control of type 2 diabetes with reinforcement learning: a proof-of-concept trial. Nat Med (2023). https://doi.org/10.1038/s41591-023-02552-9
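Since the model is a masked language model, masked-token prediction can also be run directly with the fill-mask pipeline. A minimal sketch - the clinical sentence below is an illustrative assumption, not from this card:

```python
# A minimal sketch of masked-token prediction with the fill-mask pipeline.
# The example sentence is an illustrative assumption, not from the card.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="medicalai/ClinicalBERT")
for pred in fill_mask("The patient was treated with [MASK] for hypertension."):
    print(pred["token_str"], round(pred["score"], 3))
```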
jinaai/jina-bert-flash-implementation
jinaai
"2024-05-31T12:41:05Z"
57,911
2
transformers
[ "transformers", "bert", "custom_code", "endpoints_compatible", "region:eu" ]
null
"2024-02-21T11:19:46Z"
# BERT with Flash-Attention

### Installing dependencies

To run the model on GPU, you need to install Flash Attention. You may either install from PyPI (which may not work with fused-dense), or from source. To install from source, clone the GitHub repository:

```console
git clone git@github.com:Dao-AILab/flash-attention.git
```

The code provided here should work with commit `43950dd`. Change to the cloned repo and install:

```console
cd flash-attention && python setup.py install
```

This will compile the flash-attention kernel, which will take some time.

If you would like to use fused MLPs (e.g. to use activation checkpointing), you may also install fused-dense from source:

```console
cd csrc/fused_dense_lib && python setup.py install
```

### Configuration

The config adds some new parameters:

- `use_flash_attn`: If `True`, always use flash attention. If `None`, use flash attention when a GPU is available. If `False`, never use flash attention (works on CPU).
- `window_size`: Size (left and right) of the local attention window. If `(-1, -1)`, use global attention.
- `dense_seq_output`: If true, we only need to pass the hidden states of the masked-out tokens (around 15%) to the classifier heads. I set this to true for pretraining.
- `fused_mlp`: Whether to use fused-dense. Useful to reduce VRAM in combination with activation checkpointing.
- `mlp_checkpoint_lvl`: One of `{0, 1, 2}`. Increasing this increases the amount of activation checkpointing within the MLP. Keep this at 0 for pretraining and use gradient accumulation instead. For embedding training, increase this as much as needed.
- `last_layer_subset`: If true, we only compute the last layer for a subset of tokens. I left this set to false.
- `use_qk_norm`: Whether or not to use QK normalization.
- `num_loras`: Number of LoRAs to use when initializing a `BertLoRA` model. Has no effect on other models.
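As a rough illustration of wiring these flags together, a minimal sketch, assuming the remote code exposes them as ordinary config attributes and that you point it at a checkpoint that actually ships weights for this implementation:

```python
# A minimal sketch, assuming the custom config exposes the flags described
# above as ordinary attributes. The repo id is a placeholder; substitute a
# model checkpoint that uses this BERT implementation.
from transformers import AutoConfig, AutoModel

config = AutoConfig.from_pretrained(
    "jinaai/jina-bert-flash-implementation", trust_remote_code=True
)
config.use_flash_attn = None   # auto: flash attention only when a GPU is present
config.window_size = (-1, -1)  # global attention
config.fused_mlp = False       # leave fused-dense off unless it is installed

model = AutoModel.from_pretrained(
    "jinaai/jina-bert-flash-implementation",
    config=config,
    trust_remote_code=True,
)
```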
seara/rubert-tiny2-russian-sentiment
seara
"2023-08-25T19:16:11Z"
57,865
13
transformers
[ "transformers", "pytorch", "safetensors", "bert", "text-classification", "sentiment-analysis", "multi-class-classification", "sentiment analysis", "rubert", "sentiment", "tiny", "russian", "multiclass", "classification", "ru", "dataset:sismetanin/rureviews", "dataset:RuSentiment", "dataset:LinisCrowd2015", "dataset:LinisCrowd2016", "dataset:KaggleRussianNews", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2023-05-14T15:21:52Z"
---
license: mit
language:
- ru
metrics:
- f1
- roc_auc
- precision
- recall
pipeline_tag: text-classification
tags:
- sentiment-analysis
- multi-class-classification
- sentiment analysis
- rubert
- sentiment
- bert
- tiny
- russian
- multiclass
- classification
datasets:
- sismetanin/rureviews
- RuSentiment
- LinisCrowd2015
- LinisCrowd2016
- KaggleRussianNews
---

This is the [RuBERT-tiny2](https://huggingface.co/cointegrated/rubert-tiny2) model fine-tuned for __sentiment classification__ of short __Russian__ texts.

The task is __multi-class classification__ with the following labels:

```yaml
0: neutral
1: positive
2: negative
```

Label to Russian label:

```yaml
neutral: нейтральный
positive: позитивный
negative: негативный
```

## Usage

```python
from transformers import pipeline
model = pipeline(model="seara/rubert-tiny2-russian-sentiment")
model("Привет, ты мне нравишься!")  # "Hi, I like you!"
# [{'label': 'positive', 'score': 0.9398769736289978}]
```

## Dataset

This model was trained on the union of the following datasets:

- Kaggle Russian News Dataset
- Linis Crowd 2015
- Linis Crowd 2016
- RuReviews
- RuSentiment

An overview of the training data can be found in [S. Smetanin's GitHub repository](https://github.com/sismetanin/sentiment-analysis-in-russian).

__Download links for all Russian sentiment datasets collected by Smetanin can be found in this [repository](https://github.com/searayeah/russian-sentiment-emotion-datasets).__

## Training

Training was done in this [project](https://github.com/searayeah/bert-russian-sentiment-emotion) with these parameters:

```yaml
tokenizer.max_length: 512
batch_size: 64
optimizer: adam
lr: 0.00001
weight_decay: 0
epochs: 5
```

The train/validation/test split is 80%/10%/10%.

## Eval results (on test split)

| |neutral|positive|negative|macro avg|weighted avg|
|---------|-------|--------|--------|---------|------------|
|precision|0.7 |0.84 |0.74 |0.76 |0.75 |
|recall |0.74 |0.83 |0.69 |0.75 |0.75 |
|f1-score |0.72 |0.83 |0.71 |0.75 |0.75 |
|auc-roc |0.85 |0.95 |0.91 |0.9 |0.9 |
|support |5196 |3831 |3599 |12626 |12626 |
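To inspect the scores for all three labels at once rather than only the top prediction, a minimal sketch, assuming a recent transformers release where `top_k=None` makes the text-classification pipeline return every label:

```python
# A minimal sketch, assuming top_k=None returns scores for all labels
# (behavior of recent transformers text-classification pipelines).
from transformers import pipeline

clf = pipeline(model="seara/rubert-tiny2-russian-sentiment", top_k=None)
print(clf("Привет, ты мне нравишься!"))  # "Hi, I like you!"
# e.g. [[{'label': 'positive', ...}, {'label': 'neutral', ...}, {'label': 'negative', ...}]]
```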
colorfulscoop/sbert-base-ja
colorfulscoop
"2021-08-08T06:47:42Z"
57,843
13
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "ja", "arxiv:1908.10084", "license:cc-by-sa-4.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
"2022-03-02T23:29:05Z"
---
language: ja
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
widget:
  source_sentence: "走るのが趣味です"
  sentences:
  - 外をランニングするのが好きです
  - 運動はそこそこです
  - 走るのは嫌いです
license: cc-by-sa-4.0
---

# Sentence BERT base Japanese model

This repository contains a Sentence BERT base model for Japanese.

## Pretrained model

This model uses the Japanese BERT model [colorfulscoop/bert-base-ja](https://huggingface.co/colorfulscoop/bert-base-ja) v1.0, released under [Creative Commons Attribution-ShareAlike 3.0](https://creativecommons.org/licenses/by-sa/3.0/), as a pretrained model.

## Training data

The [Japanese SNLI dataset](https://nlp.ist.i.kyoto-u.ac.jp/index.php?%E6%97%A5%E6%9C%AC%E8%AA%9ESNLI%28JSNLI%29%E3%83%87%E3%83%BC%E3%82%BF%E3%82%BB%E3%83%83%E3%83%88), released under [Creative Commons Attribution-ShareAlike 4.0](https://creativecommons.org/licenses/by-sa/4.0/), is used for training. The original training dataset is split into train and validation sets. The following data was prepared:

* Train data: 523,005 samples
* Valid data: 10,000 samples
* Test data: 3,916 samples

## Model description

This model uses the `SentenceTransformer` model from [sentence-transformers](https://github.com/UKPLab/sentence-transformers). The model details are as below.

```py
>>> from sentence_transformers import SentenceTransformer
>>> SentenceTransformer("colorfulscoop/sbert-base-ja")
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```

## Training

This model fine-tunes [colorfulscoop/bert-base-ja](https://huggingface.co/colorfulscoop/bert-base-ja) with a softmax classifier over the 3 SNLI labels. The AdamW optimizer with a learning rate of 2e-05, linearly warmed up over the first 10% of the training data, was used. The model was trained for 1 epoch with a batch size of 8.

Note: in the original [Sentence BERT](https://arxiv.org/abs/1908.10084) paper, the batch size of the model trained on SNLI and Multi-Genre NLI was 16. For this model, the dataset is around half the size of the original one, so the batch size was set to 8, half of the original batch size of 16.

Training was conducted on Ubuntu 18.04.5 LTS with one RTX 2080 Ti.

After training, the test set accuracy reached 0.8529.

Training code is available in [a GitHub repository](https://github.com/colorfulscoop/sbert-ja).

## Usage

First, install dependencies.

```sh
$ pip install sentence-transformers==2.0.0
```

Then initialize the `SentenceTransformer` model and use the `encode` method to convert sentences to vectors.

```py
>>> from sentence_transformers import SentenceTransformer
>>> model = SentenceTransformer("colorfulscoop/sbert-base-ja")
>>> sentences = ["外をランニングするのが好きです", "海外旅行に行くのが趣味です"]
>>> model.encode(sentences)
```

## License

Copyright (c) 2021 Colorful Scoop

All the models included in this repository are licensed under [Creative Commons Attribution-ShareAlike 4.0](https://creativecommons.org/licenses/by-sa/4.0/).

**Disclaimer:** Use of this model is at your sole risk. Colorful Scoop makes no warranty or guarantee of any outputs from the model. Colorful Scoop is not liable for any trouble, loss, or damage arising from the model output.

---

This model utilizes the following pretrained model.
* **Name:** bert-base-ja
* **Credit:** (c) 2021 Colorful Scoop
* **License:** [Creative Commons Attribution-ShareAlike 3.0](https://creativecommons.org/licenses/by-sa/3.0/)
* **Disclaimer:** The model may generate text similar to its training data, untrue statements, or biased text. Use of the model is at your sole risk. Colorful Scoop makes no warranty or guarantee of any outputs from the model. Colorful Scoop is not liable for any trouble, loss, or damage arising from the model output.
* **Link:** https://huggingface.co/colorfulscoop/bert-base-ja

---

This model utilizes the following data for fine-tuning.

* **Name:** 日本語SNLI(JSNLI)データセット (Japanese SNLI (JSNLI) dataset)
* **Credit:** [https://nlp.ist.i.kyoto-u.ac.jp/index.php?日本語SNLI(JSNLI)データセット](https://nlp.ist.i.kyoto-u.ac.jp/index.php?%E6%97%A5%E6%9C%AC%E8%AA%9ESNLI%28JSNLI%29%E3%83%87%E3%83%BC%E3%82%BF%E3%82%BB%E3%83%83%E3%83%88)
* **License:** [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/)
* **Link:** [https://nlp.ist.i.kyoto-u.ac.jp/index.php?日本語SNLI(JSNLI)データセット](https://nlp.ist.i.kyoto-u.ac.jp/index.php?%E6%97%A5%E6%9C%AC%E8%AA%9ESNLI%28JSNLI%29%E3%83%87%E3%83%BC%E3%82%BF%E3%82%BB%E3%83%83%E3%83%88)
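As a closing usage note, a minimal sketch of scoring the two example sentences from the Usage section against each other; `util.cos_sim` is assumed to be available, as it is in recent sentence-transformers releases:

```python
# A minimal sketch, not from the original card: cosine similarity between
# the two Japanese example sentences used in the Usage section above.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("colorfulscoop/sbert-base-ja")
embeddings = model.encode(["外をランニングするのが好きです", "海外旅行に行くのが趣味です"])
print(util.cos_sim(embeddings[0], embeddings[1]))
```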
microsoft/wavlm-base-plus-sd
microsoft
"2022-03-25T12:06:46Z"
57,819
7
transformers
[ "transformers", "pytorch", "wavlm", "audio-frame-classification", "speech", "en", "arxiv:1912.07875", "arxiv:2106.06909", "arxiv:2101.00390", "arxiv:2110.13900", "endpoints_compatible", "region:us" ]
null
"2022-03-02T23:29:05Z"
---
language:
- en
tags:
- speech
---

# WavLM-Base-Plus for Speaker Diarization

[Microsoft's WavLM](https://github.com/microsoft/unilm/tree/master/wavlm)

The model was pretrained on 16kHz sampled speech audio with utterance and speaker contrastive loss. When using the model, make sure that your speech input is also sampled at 16kHz.

The model was pre-trained on:

- 60,000 hours of [Libri-Light](https://arxiv.org/abs/1912.07875)
- 10,000 hours of [GigaSpeech](https://arxiv.org/abs/2106.06909)
- 24,000 hours of [VoxPopuli](https://arxiv.org/abs/2101.00390)

[Paper: WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900)

Authors: Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei

**Abstract**
*Self-supervised learning (SSL) achieves great success in speech recognition, while limited exploration has been attempted for other speech processing tasks. As speech signal contains multi-faceted information including speaker identity, paralinguistics, spoken content, etc., learning universal representations for all speech tasks is challenging. In this paper, we propose a new pre-trained model, WavLM, to solve full-stack downstream speech tasks. WavLM is built based on the HuBERT framework, with an emphasis on both spoken content modeling and speaker identity preservation. We first equip the Transformer structure with gated relative position bias to improve its capability on recognition tasks. For better speaker discrimination, we propose an utterance mixing training strategy, where additional overlapped utterances are created unsupervisely and incorporated during model training. Lastly, we scale up the training dataset from 60k hours to 94k hours. WavLM Large achieves state-of-the-art performance on the SUPERB benchmark, and brings significant improvements for various speech processing tasks on their representative benchmarks.*

The original model can be found under https://github.com/microsoft/unilm/tree/master/wavlm.

# Fine-tuning details

The model is fine-tuned on the [LibriMix dataset](https://github.com/JorisCos/LibriMix) using just a linear layer for mapping the network outputs.

# Usage

## Speaker Diarization

```python
from transformers import Wav2Vec2FeatureExtractor, WavLMForAudioFrameClassification
from datasets import load_dataset
import torch

dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")

feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained('microsoft/wavlm-base-plus-sd')
model = WavLMForAudioFrameClassification.from_pretrained('microsoft/wavlm-base-plus-sd')

# audio file is decoded on the fly
inputs = feature_extractor(dataset[0]["audio"]["array"], return_tensors="pt")
logits = model(**inputs).logits
probabilities = torch.sigmoid(logits[0])

# labels is a one-hot array of shape (num_frames, num_speakers)
labels = (probabilities > 0.5).long()
```

# License

The official license can be found [here](https://github.com/microsoft/UniSpeech/blob/main/LICENSE)

![design](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/wavlm.png)
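To turn the per-frame labels from the example above into rough timestamps, a sketch under the assumption that the model's convolutional feature encoder has an effective stride of 320 samples (20 ms at 16 kHz), as in other wav2vec2-style encoders:

```python
# A rough sketch, assuming a 320-sample (20 ms at 16 kHz) encoder stride;
# `labels` stands in for the (num_frames, num_speakers) tensor computed above.
import torch

labels = torch.tensor([[1, 0], [1, 0], [0, 1], [0, 1]])  # toy stand-in output
frame_sec = 320 / 16000  # assumed effective stride of the feature encoder
for frame_idx in torch.where(labels[:, 0] == 1)[0].tolist():
    start, end = frame_idx * frame_sec, (frame_idx + 1) * frame_sec
    print(f"speaker 0 active: {start:.2f}s - {end:.2f}s")
```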
timm/tf_efficientnetv2_xl.in21k_ft_in1k
timm
"2023-04-27T22:18:18Z"
57,795
3
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "dataset:imagenet-21k", "arxiv:2104.00298", "license:apache-2.0", "region:us" ]
image-classification
"2022-12-13T00:20:45Z"
---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
- imagenet-21k
---

# Model card for tf_efficientnetv2_xl.in21k_ft_in1k

An EfficientNet-v2 image classification model. Trained on ImageNet-21k and fine-tuned on ImageNet-1k in Tensorflow by paper authors, ported to PyTorch by Ross Wightman.

## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 208.1
  - GMACs: 52.8
  - Activations (M): 139.2
  - Image size: train = 384 x 384, test = 512 x 512
- **Papers:**
  - EfficientNetV2: Smaller Models and Faster Training: https://arxiv.org/abs/2104.00298
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:** ImageNet-21k
- **Original:** https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet

## Model Usage

### Image Classification

```python
from urllib.request import urlopen
from PIL import Image
import torch
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('tf_efficientnetv2_xl.in21k_ft_in1k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```

### Feature Map Extraction

```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'tf_efficientnetv2_xl.in21k_ft_in1k',
    pretrained=True,
    features_only=True,
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

for o in output:
    # print shape of each feature map in output
    # e.g.:
    #  torch.Size([1, 32, 192, 192])
    #  torch.Size([1, 64, 96, 96])
    #  torch.Size([1, 96, 48, 48])
    #  torch.Size([1, 256, 24, 24])
    #  torch.Size([1, 640, 12, 12])
    print(o.shape)
```

### Image Embeddings

```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'tf_efficientnetv2_xl.in21k_ft_in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1280, 12, 12) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```

## Model Comparison

Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation

```bibtex
@inproceedings{tan2021efficientnetv2,
  title={Efficientnetv2: Smaller models and faster training},
  author={Tan, Mingxing and Le, Quoc},
  booktitle={International conference on machine learning},
  pages={10096--10106},
  year={2021},
  organization={PMLR}
}
```

```bibtex
@misc{rw2019timm,
  author = {Ross Wightman},
  title = {PyTorch Image Models},
  year = {2019},
  publisher = {GitHub},
  journal = {GitHub repository},
  doi = {10.5281/zenodo.4414861},
  howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
NousResearch/Meta-Llama-3-8B-Instruct
NousResearch
"2024-07-23T04:40:46Z"
57,735
79
transformers
[ "transformers", "safetensors", "llama", "text-generation", "facebook", "meta", "pytorch", "llama-3", "conversational", "en", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-04-18T16:55:56Z"
--- language: - en pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-3 license: other license_name: llama3 license_link: LICENSE extra_gated_prompt: >- ### META LLAMA 3 COMMUNITY LICENSE AGREEMENT Meta Llama 3 Version Release Date: April 18, 2024 "Agreement" means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein. "Documentation" means the specifications, manuals and documentation accompanying Meta Llama 3 distributed by Meta at https://llama.meta.com/get-started/. "Licensee" or "you" means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf. "Meta Llama 3" means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://llama.meta.com/llama-downloads. "Llama Materials" means, collectively, Meta’s proprietary Meta Llama 3 and Documentation (and any portion thereof) made available under this Agreement. "Meta" or "we" means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland). 1. License Rights and Redistribution. a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials. b. Redistribution and Use. i. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service that uses any of them, including another AI model, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display “Built with Meta Llama 3” on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include “Llama 3” at the beginning of any such AI model name. ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you. iii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a “Notice” text file distributed as a part of such copies: “Meta Llama 3 is licensed under the Meta Llama 3 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.” iv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://llama.meta.com/llama3/use-policy), which is hereby incorporated by reference into this Agreement. v. 
You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Meta Llama 3 or derivative works thereof). 2. Additional Commercial Terms. If, on the Meta Llama 3 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights. 3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS. 4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING. 5. Intellectual Property. a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use “Llama 3” (the “Mark”) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/ ). All goodwill arising out of your use of the Mark will inure to the benefit of Meta. b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications. c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials. 6. Term and Termination. 
The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement. 7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement. ### Meta Llama 3 Acceptable Use Policy Meta is committed to promoting safe and fair use of its tools and features, including Meta Llama 3. If you access or use Meta Llama 3, you agree to this Acceptable Use Policy (“Policy”). The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy) #### Prohibited Uses We want everyone to use Meta Llama 3 safely and responsibly. You agree you will not use, or allow others to use, Meta Llama 3 to: 1. Violate the law or others’ rights, including to: 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as: 1. Violence or terrorism 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material 3. Human trafficking, exploitation, and sexual violence 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials. 5. Sexual solicitation 6. Any other criminal activity 2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals 3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services 4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices 5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws 6. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials 7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system 2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Meta Llama 3 related to the following: 1. 
    1. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic in Arms Regulations (ITAR) maintained by the United States Department of State
    2. Guns and illegal weapons (including weapon development)
    3. Illegal drugs and regulated/controlled substances
    4. Operation of critical infrastructure, transportation technologies, or heavy machinery
    5. Self-harm or harm to others, including suicide, cutting, and eating disorders
    6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual
3. Intentionally deceive or mislead others, including use of Meta Llama 3 related to the following:
    1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation
    2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content
    3. Generating, promoting, or further distributing spam
    4. Impersonating another individual without consent, authorization, or legal right
    5. Representing that the use of Meta Llama 3 or outputs are human-generated
    6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement
4. Fail to appropriately disclose to end users any known dangers of your AI system

Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation of this Policy through one of the following means:

* Reporting issues with the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3)
* Reporting risky content generated by the model: developers.facebook.com/llama_output_feedback
* Reporting bugs and security concerns: facebook.com/whitehat/info
* Reporting violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: LlamaUseReport@meta.com

extra_gated_fields:
  First Name: text
  Last Name: text
  Date of birth: date_picker
  Country: country
  Affiliation: text
  geo: ip_location
  By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox
extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
widget:
  - example_title: Hello
    messages:
      - role: user
        content: Hey my name is Julien! How are you?
  - example_title: Winter holidays
    messages:
      - role: system
        content: You are a helpful and honest assistant. Please, respond concisely and truthfully.
      - role: user
        content: Can you recommend a good destination for Winter holidays?
  - example_title: Programming assistant
    messages:
      - role: system
        content: You are a helpful and honest code and programming assistant. Please, respond concisely and truthfully.
      - role: user
        content: Write a function that computes the nth fibonacci number.
inference:
  parameters:
    max_new_tokens: 300
    stop:
      - <|end_of_text|>
      - <|eot_id|>
---

## Model Details

Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8B and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks.
Further, in developing these models, we took great care to optimize helpfulness and safety.

**Model developers** Meta

**Variations** Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants.

**Input** Models input text only.

**Output** Models generate text and code only.

**Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.

<table>
  <tr> <td> </td> <td><strong>Training Data</strong></td> <td><strong>Params</strong></td> <td><strong>Context length</strong></td> <td><strong>GQA</strong></td> <td><strong>Token count</strong></td> <td><strong>Knowledge cutoff</strong></td> </tr>
  <tr> <td rowspan="2">Llama 3</td> <td rowspan="2">A new mix of publicly available online data.</td> <td>8B</td> <td>8k</td> <td>Yes</td> <td rowspan="2">15T+</td> <td>March, 2023</td> </tr>
  <tr> <td>70B</td> <td>8k</td> <td>Yes</td> <td>December, 2023</td> </tr>
</table>

**Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8B and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability.

**Model Release Date** April 18, 2024.

**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.

**License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license)

**Where to send questions or comments about the model** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes).

## Intended Use

**Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.

**Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English.

**Note**: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy.

## How to use

This repository contains two versions of Meta-Llama-3-8B-Instruct, for use with transformers and with the original `llama3` codebase.

### Use with transformers

You can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the `generate()` function. Let's see examples of both.
#### Transformers pipeline

```python
import transformers
import torch

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = pipeline(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
print(outputs[0]["generated_text"][len(prompt):])
```

#### Transformers AutoModelForCausalLM

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```

### Use with `llama3`

Please follow the instructions in the [repository](https://github.com/meta-llama/llama3).

To download the original checkpoints, see the example command below leveraging `huggingface-cli`:

```
huggingface-cli download meta-llama/Meta-Llama-3-8B-Instruct --include "original/*" --local-dir Meta-Llama-3-8B-Instruct
```

For Hugging Face support, we recommend using transformers or TGI, but a similar command works.

## Hardware and Software

**Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.

**Carbon Footprint** Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program.

<table>
  <tr> <td> </td> <td><strong>Time (GPU hours)</strong></td> <td><strong>Power Consumption (W)</strong></td> <td><strong>Carbon Emitted (tCO2eq)</strong></td> </tr>
  <tr> <td>Llama 3 8B</td> <td>1.3M</td> <td>700</td> <td>390</td> </tr>
  <tr> <td>Llama 3 70B</td> <td>6.4M</td> <td>700</td> <td>1900</td> </tr>
  <tr> <td>Total</td> <td>7.7M</td> <td> </td> <td>2290</td> </tr>
</table>

**CO2 emissions during pre-training**. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used, adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.

## Training Data

**Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources.
The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.

**Data Freshness** The pretraining data has a cutoff of March 2023 for the 8B model and December 2023 for the 70B model, respectively.

## Benchmarks

In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see [here](https://github.com/meta-llama/llama3/blob/main/eval_methodology.md).

### Base pretrained models

<table>
  <tr> <td><strong>Category</strong></td> <td><strong>Benchmark</strong></td> <td><strong>Llama 3 8B</strong></td> <td><strong>Llama2 7B</strong></td> <td><strong>Llama2 13B</strong></td> <td><strong>Llama 3 70B</strong></td> <td><strong>Llama2 70B</strong></td> </tr>
  <tr> <td rowspan="6">General</td> <td>MMLU (5-shot)</td> <td>66.6</td> <td>45.7</td> <td>53.8</td> <td>79.5</td> <td>69.7</td> </tr>
  <tr> <td>AGIEval English (3-5 shot)</td> <td>45.9</td> <td>28.8</td> <td>38.7</td> <td>63.0</td> <td>54.8</td> </tr>
  <tr> <td>CommonSenseQA (7-shot)</td> <td>72.6</td> <td>57.6</td> <td>67.6</td> <td>83.8</td> <td>78.7</td> </tr>
  <tr> <td>Winogrande (5-shot)</td> <td>76.1</td> <td>73.3</td> <td>75.4</td> <td>83.1</td> <td>81.8</td> </tr>
  <tr> <td>BIG-Bench Hard (3-shot, CoT)</td> <td>61.1</td> <td>38.1</td> <td>47.0</td> <td>81.3</td> <td>65.7</td> </tr>
  <tr> <td>ARC-Challenge (25-shot)</td> <td>78.6</td> <td>53.7</td> <td>67.6</td> <td>93.0</td> <td>85.3</td> </tr>
  <tr> <td>Knowledge reasoning</td> <td>TriviaQA-Wiki (5-shot)</td> <td>78.5</td> <td>72.1</td> <td>79.6</td> <td>89.7</td> <td>87.5</td> </tr>
  <tr> <td rowspan="4">Reading comprehension</td> <td>SQuAD (1-shot)</td> <td>76.4</td> <td>72.2</td> <td>72.1</td> <td>85.6</td> <td>82.6</td> </tr>
  <tr> <td>QuAC (1-shot, F1)</td> <td>44.4</td> <td>39.6</td> <td>44.9</td> <td>51.1</td> <td>49.4</td> </tr>
  <tr> <td>BoolQ (0-shot)</td> <td>75.7</td> <td>65.5</td> <td>66.9</td> <td>79.0</td> <td>73.1</td> </tr>
  <tr> <td>DROP (3-shot, F1)</td> <td>58.4</td> <td>37.9</td> <td>49.8</td> <td>79.7</td> <td>70.2</td> </tr>
</table>

### Instruction tuned models

<table>
  <tr> <td><strong>Benchmark</strong></td> <td><strong>Llama 3 8B</strong></td> <td><strong>Llama 2 7B</strong></td> <td><strong>Llama 2 13B</strong></td> <td><strong>Llama 3 70B</strong></td> <td><strong>Llama 2 70B</strong></td> </tr>
  <tr> <td>MMLU (5-shot)</td> <td>68.4</td> <td>34.1</td> <td>47.8</td> <td>82.0</td> <td>52.9</td> </tr>
  <tr> <td>GPQA (0-shot)</td> <td>34.2</td> <td>21.7</td> <td>22.3</td> <td>39.5</td> <td>21.0</td> </tr>
  <tr> <td>HumanEval (0-shot)</td> <td>62.2</td> <td>7.9</td> <td>14.0</td> <td>81.7</td> <td>25.6</td> </tr>
  <tr> <td>GSM-8K (8-shot, CoT)</td> <td>79.6</td> <td>25.7</td> <td>77.4</td> <td>93.0</td> <td>57.5</td> </tr>
  <tr> <td>MATH (4-shot, CoT)</td> <td>30.0</td> <td>3.8</td> <td>6.7</td> <td>50.4</td> <td>11.6</td> </tr>
</table>

### Responsibility & Safety

We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.
Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases out of the box, as those by their nature will differ across different applications. Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from model pre-training and fine-tuning to the deployment of systems composed of safeguards that tailor the safety needs specifically to the use case and audience.

As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs, and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started.

#### Llama 3-Instruct

As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.

<span style="text-decoration:underline;">Safety</span>

For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigation techniques to lower residual risks. As with any large language model, residual risks will likely remain, and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.

<span style="text-decoration:underline;">Refusals</span>

In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusal not only hurts the user experience but can even be harmful in certain contexts. We’ve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2. We built internal benchmarks and developed mitigations to limit false refusals, making Llama 3 our most helpful model to date.

#### Responsible release

In addition to the responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.

<span style="text-decoration:underline;">Misuse</span>

If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/).
#### Critical risks

<span style="text-decoration:underline;">CBRNE</span> (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)

We have conducted a twofold assessment of the safety of the model in this area:

* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.
* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).

<span style="text-decoration:underline;">Cyber Security</span>

We have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval).

<span style="text-decoration:underline;">Child Safety</span>

Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective-based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market-specific nuances or experiences.

### Community

Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and are widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [GitHub repository](https://github.com/meta-llama/PurpleLlama).

Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and a [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.

## Ethical Considerations and Limitations

The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives.
Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.

But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows, and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/), which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.

Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide).

## Citation instructions

```
@article{llama3modelcard,
  title={Llama 3 Model Card},
  author={AI@Meta},
  year={2024},
  url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
```

## Contributors

Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta
Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos
Corcelio/mobius
Corcelio
"2024-06-01T13:43:40Z"
57,675
222
diffusers
[ "diffusers", "safetensors", "text-to-image", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
"2024-05-12T16:01:24Z"
---
pipeline_tag: text-to-image
widget:
  - text: >-
      movie scene screencap, cinematic footage. thanos smelling a little yellow
      rose. extreme wide angle,
    output:
      url: images/1man.png
  - text: god
    output:
      url: images/god.png
  - text: 'A tiny robot taking a break under a tree in the garden '
    output:
      url: images/robot.png
  - text: mystery
    output:
      url: images/mystery.png
  - text: a cat wearing sunglasses in the summer
    output:
      url: images/cat.png
  - text: 'robot holding a sign that says ’a storm is coming’ '
    output:
      url: images/storm.png
  - text: >-
      The Exegenesis of the soul, captured within a boundless well of starlight,
      pulsating and vibrating wisps, chiaroscuro, humming transformer
    output:
      url: images/soul.png
  - text: anime boy, protagonist, best quality
    output:
      url: images/animeboy.png
  - text: natural photography of a man, glasses, cinematic,
    output:
      url: images/glasses.png
  - text: if I could turn back time
    output:
      url: images/time.png
  - text: >-
      ("Mobius" text logo) powerful aura, swirling power, cinematic
    output:
      url: images/mobius.png
  - text: the backrooms
    output:
      url: images/backrooms.png
license: apache-2.0
---

<Gallery />

# Mobius: Redefining State-of-the-Art in Debiased Diffusion Models

Mobius is a diffusion model that pushes the boundaries of domain-agnostic debiasing and representation realignment. By employing a brand new constructive deconstruction framework, Mobius achieves unrivaled generalization across a vast array of styles and domains, eliminating the need for expensive pretraining from scratch.

# Domain-Agnostic Debiasing: A Groundbreaking Approach

Domain-agnostic debiasing is a novel technique pioneered by Corcel. This innovative approach aims to remove biases inherent in diffusion models without limiting their ability to generalize across diverse domains. Traditional debiasing methods often focus on specific domains or styles, resulting in models that struggle to adapt to new or unseen contexts. In contrast, domain-agnostic debiasing ensures that the model remains unbiased while maintaining its versatility and adaptability.

The key to domain-agnostic debiasing lies in the constructive deconstruction framework. This framework allows for fine-grained reworking of biases and representations without the need for pretraining from scratch. The technical details of this groundbreaking approach will be discussed in an upcoming research paper, "Constructive Deconstruction: Domain-Agnostic Debiasing of Diffusion Models," which will be made available on the Corcel.io website and through scientific publications.

By applying domain-agnostic debiasing, Mobius sets a new standard for fairness and impartiality in image generation while maintaining its exceptional ability to adapt to a wide range of styles and domains.

# Surpassing the State-of-the-Art

Mobius outperforms existing state-of-the-art diffusion models in several key areas:

- Unbiased generation: Mobius generates images that are virtually free from the inherent biases commonly found in other diffusion models, setting a new benchmark for fairness and impartiality across all domains.
- Exceptional generalization: With its unparalleled ability to adapt to an extensive range of styles and domains, Mobius consistently delivers top-quality results, surpassing the limitations of previous models.
- Efficient fine-tuning: The Mobius base model serves as a superior foundation for creating specialized models tailored to specific tasks or domains, requiring significantly less fine-tuning and computational resources compared to other state-of-the-art models.

# Recommendations

- CFG between 3.5 and 7
  - 3.5 for extreme realism and skin detailing
  - 7 for artistic, anime, surrealism, and so on
- Requires a CLIP skip of -3
- Sampler: DPM++ 3M SDE
- Scheduler: Karras
- Steps: 50
- Resolution: 1024x1024

Please also consider using these key words to improve your prompts: best quality, HD, '~*~aesthetic~*~'.

# Use it with 🧨 diffusers

```python
import torch
from diffusers import (
    StableDiffusionXLPipeline,
    KDPM2AncestralDiscreteScheduler,
    AutoencoderKL
)

# Load VAE component
vae = AutoencoderKL.from_pretrained(
    "madebyollin/sdxl-vae-fp16-fix",
    torch_dtype=torch.float16
)

# Configure the pipeline
pipe = StableDiffusionXLPipeline.from_pretrained(
    "Corcelio/mobius",
    vae=vae,
    torch_dtype=torch.float16
)
pipe.scheduler = KDPM2AncestralDiscreteScheduler.from_config(pipe.scheduler.config)
pipe.to('cuda')

# Define prompts and generate image
prompt = "mystery"
negative_prompt = ""

image = pipe(
    prompt,
    negative_prompt=negative_prompt,
    width=1024,
    height=1024,
    guidance_scale=7,
    num_inference_steps=50,
    clip_skip=3
).images[0]
image.save("generated_image.png")
```

# Credits

Made by Corcel [ https://corcel.io/ ]
charsiu/g2p_multilingual_byT5_tiny_16_layers_100
charsiu
"2022-08-27T16:25:51Z"
57,582
4
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
"2022-08-27T16:22:22Z"
Entry not found
liuhaotian/llava-v1.6-mistral-7b
liuhaotian
"2024-05-08T12:12:36Z"
57,578
218
transformers
[ "transformers", "safetensors", "llava_mistral", "text-generation", "image-text-to-text", "license:apache-2.0", "autotrain_compatible", "region:us" ]
image-text-to-text
"2024-01-31T04:20:00Z"
---
inference: false
license: apache-2.0
pipeline_tag: image-text-to-text
---

<br>
<br>

# LLaVA Model Card

## Model details

**Model type:**
LLaVA is an open-source chatbot trained by fine-tuning an LLM on multimodal instruction-following data. It is an auto-regressive language model, based on the transformer architecture. Base LLM: [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)

**Model date:**
LLaVA-v1.6-Mistral-7B was trained in December 2023.

**Paper or resources for more information:**
https://llava-vl.github.io/

## License

[mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) license.

**Where to send questions or comments about the model:**
https://github.com/haotian-liu/LLaVA/issues

## Intended use

**Primary intended uses:**
The primary use of LLaVA is research on large multimodal models and chatbots.

**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.

## Training dataset

- 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP.
- 158K GPT-generated multimodal instruction-following data.
- 500K academic-task-oriented VQA data mixture.
- 50K GPT-4V data mixture.
- 40K ShareGPT data.

## Evaluation dataset

A collection of 12 benchmarks, including 5 academic VQA benchmarks and 7 recent benchmarks specifically proposed for instruction-following LMMs.
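The card itself ships no usage snippet. As an unofficial starting point, a minimal sketch following the single-query helper in the [LLaVA repository](https://github.com/haotian-liu/LLaVA) might look like this; the question and image URL below are illustrative placeholders, not part of this card.

```python
from llava.mm_utils import get_model_name_from_path
from llava.eval.run_llava import eval_model

model_path = "liuhaotian/llava-v1.6-mistral-7b"

# Single-turn visual question answering via the repo's eval helper,
# which loads the model, formats the conversation, and prints the answer.
args = type("Args", (), {
    "model_path": model_path,
    "model_base": None,
    "model_name": get_model_name_from_path(model_path),
    "query": "What is shown in this image?",  # illustrative prompt
    "conv_mode": None,
    "image_file": "https://llava-vl.github.io/static/images/view.jpg",
    "sep": ",",
    "temperature": 0,
    "top_p": None,
    "num_beams": 1,
    "max_new_tokens": 512,
})()
eval_model(args)
```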
unsloth/mistral-7b-instruct-v0.3-bnb-4bit
unsloth
"2024-09-03T03:45:59Z"
57,499
14
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "unsloth", "mistral-7b", "mistral-instruct", "instruct", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
"2024-05-22T18:14:39Z"
---
language:
- en
library_name: transformers
license: apache-2.0
tags:
- unsloth
- transformers
- mistral
- mistral-7b
- mistral-instruct
- instruct
---

# Finetune Mistral, Gemma, Llama 2-5x faster with 70% less memory via Unsloth!

We have a Google Colab Tesla T4 notebook for Mistral v3 7b here: https://colab.research.google.com/drive/1_yNCks4BTD5zOnjozppphh5GzMFaMKq_?usp=sharing

For conversational ShareGPT style and using Mistral v3 Instruct: https://colab.research.google.com/drive/15F1xyn8497_dUbxZP4zWmPZ3PJx1Oymv?usp=sharing

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/u54VK8m8tk)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/buy%20me%20a%20coffee%20button.png" width="200"/>](https://ko-fi.com/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)

## ✨ Finetune for Free

All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.

| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|---------------|-------------|------------|
| **Gemma 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/10NbwlsRChbma1v55m8LAPYG15uQv6HLo?usp=sharing) | 2.4x faster | 58% less |
| **Mistral 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less |
| **Llama-2 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lBzz5KeZJKXjvivbYvmGarix9Ao6Wxe5?usp=sharing) | 2.2x faster | 43% less |
| **TinyLlama** | [▶️ Start on Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing) | 3.9x faster | 74% less |
| **CodeLlama 34b** A100 | [▶️ Start on Colab](https://colab.research.google.com/drive/1y7A0AxE3y8gdj4AVkl2aZX47Xu3P1wJT?usp=sharing) | 1.9x faster | 27% less |
| **Mistral 7b** 1xT4 | [▶️ Start on Kaggle](https://www.kaggle.com/code/danielhanchen/kaggle-mistral-7b-unsloth-notebook) | 5x faster\* | 62% less |
| **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less |

- This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
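For plain inference outside the notebooks, here is a minimal sketch of loading this pre-quantized 4-bit checkpoint with Unsloth's `FastLanguageModel`; the `max_seq_length` value and the prompt are illustrative assumptions, not settings from this card.

```python
from unsloth import FastLanguageModel

# Load the 4-bit bitsandbytes checkpoint directly; no extra quantization step needed.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
    max_seq_length=2048,  # illustrative; choose what your task needs
    dtype=None,           # auto-detects float16 / bfloat16 for the GPU
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path

messages = [{"role": "user", "content": "Summarize what Unsloth does in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids=input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```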
dccuchile/bert-base-spanish-wwm-cased
dccuchile
"2024-01-18T01:47:12Z"
57,417
50
transformers
[ "transformers", "pytorch", "tf", "jax", "bert", "fill-mask", "masked-lm", "es", "arxiv:1904.09077", "arxiv:1906.01502", "arxiv:1812.10464", "arxiv:1901.07291", "arxiv:1904.02099", "arxiv:1906.01569", "arxiv:1908.11828", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2022-03-02T23:29:05Z"
---
language:
- es
tags:
- masked-lm
---

# BETO: Spanish BERT

BETO is a [BERT model](https://github.com/google-research/bert) trained on a [big Spanish corpus](https://github.com/josecannete/spanish-corpora). BETO is similar in size to BERT-Base and was trained with the Whole Word Masking technique. Below you will find Tensorflow and Pytorch checkpoints for the uncased and cased versions, as well as some results for Spanish benchmarks comparing BETO with [Multilingual BERT](https://github.com/google-research/bert/blob/master/multilingual.md) as well as other (not BERT-based) models.

## Download

| | | | |
|-|:--------:|:-----:|:----:|
|BETO uncased|[tensorflow_weights](https://users.dcc.uchile.cl/~jperez/beto/uncased_2M/tensorflow_weights.tar.gz) | [pytorch_weights](https://users.dcc.uchile.cl/~jperez/beto/uncased_2M/pytorch_weights.tar.gz) | [vocab](./config/uncased_2M/vocab.txt), [config](./config/uncased_2M/config.json) |
|BETO cased| [tensorflow_weights](https://users.dcc.uchile.cl/~jperez/beto/cased_2M/tensorflow_weights.tar.gz) | [pytorch_weights](https://users.dcc.uchile.cl/~jperez/beto/cased_2M/pytorch_weights.tar.gz) | [vocab](./config/cased_2M/vocab.txt), [config](./config/cased_2M/config.json) |

All models use a vocabulary of about 31k BPE subwords constructed using SentencePiece and were trained for 2M steps.

## Benchmarks

The following table shows some BETO results in the Spanish version of every task. We compare BETO (cased and uncased) with the best Multilingual BERT results that we found in the literature (as of October 2019). The table also shows some alternative methods for the same tasks (not necessarily BERT-based methods). References for all methods can be found [here](#references).

|Task | BETO-cased | BETO-uncased | Best Multilingual BERT | Other results |
|-------|--------------:|--------------:|--------------------------:|-------------------------------:|
|[POS](https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-1827) | **98.97** | 98.44 | 97.10 [2] | 98.91 [6], 96.71 [3] |
|[NER-C](https://www.kaggle.com/nltkdata/conll-corpora) | [**88.43**](https://github.com/gchaperon/beto-benchmarks/blob/master/conll2002/dev_results_beto-cased_conll2002.txt) | 82.67 | 87.38 [2] | 87.18 [3] |
|[MLDoc](https://github.com/facebookresearch/MLDoc) | [95.60](https://github.com/gchaperon/beto-benchmarks/blob/master/MLDoc/dev_results_beto-cased_mldoc.txt) | [**96.12**](https://github.com/gchaperon/beto-benchmarks/blob/master/MLDoc/dev_results_beto-uncased_mldoc.txt) | 95.70 [2] | 88.75 [4] |
|[PAWS-X](https://github.com/google-research-datasets/paws/tree/master/pawsx) | 89.05 | 89.55 | 90.70 [8] | |
|[XNLI](https://github.com/facebookresearch/XNLI) | **82.01** | 80.15 | 78.50 [2] | 80.80 [5], 77.80 [1], 73.15 [4] |

## Example of use

For further details on how to use BETO you can visit the [🤗Huggingface Transformers library](https://github.com/huggingface/transformers), starting with the [Quickstart section](https://huggingface.co/transformers/quickstart.html). BETO models can be accessed simply as [`'dccuchile/bert-base-spanish-wwm-cased'`](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) and [`'dccuchile/bert-base-spanish-wwm-uncased'`](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) by using the Transformers library. An example on how to download and use the models in this page can be found in [this colab notebook](https://colab.research.google.com/drive/1pYOYsCU59GBOwztkWCw5PTsqBiJbRy4S?usp=sharing).
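As a quick unofficial illustration of that access pattern, the cased checkpoint works out of the box with the standard `fill-mask` pipeline; the example sentence below is our own, not from the original documentation.

```python
from transformers import pipeline

# BETO is a BERT-style masked LM, so it plugs into the fill-mask pipeline directly.
fill_mask = pipeline("fill-mask", model="dccuchile/bert-base-spanish-wwm-cased")

# Predict the masked token; "capital" should rank near the top.
for pred in fill_mask("Madrid es la [MASK] de España."):
    print(f"{pred['token_str']:>12}  {pred['score']:.3f}")
```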
(We will soon add a more detailed step-by-step tutorial in Spanish for newcomers 😉)

## Acknowledgments

We thank [Adereso](https://www.adere.so/) for kindly providing support for training BETO-uncased, and the [Millennium Institute for Foundational Research on Data](https://imfd.cl/en/) that provided support for training BETO-cased. Also thanks to Google for helping us with the [TensorFlow Research Cloud](https://www.tensorflow.org/tfrc) program.

## Citation

[Spanish Pre-Trained BERT Model and Evaluation Data](https://users.dcc.uchile.cl/~jperez/papers/pml4dc2020.pdf)

To cite this resource in a publication please use the following:

```
@inproceedings{CaneteCFP2020,
  title={Spanish Pre-Trained BERT Model and Evaluation Data},
  author={Cañete, José and Chaperon, Gabriel and Fuentes, Rodrigo and Ho, Jou-Hui and Kang, Hojin and Pérez, Jorge},
  booktitle={PML4DC at ICLR 2020},
  year={2020}
}
```

## License Disclaimer

The license CC BY 4.0 best describes our intentions for our work. However, we are not sure that all the datasets used to train BETO have licenses compatible with CC BY 4.0 (especially for commercial use). Please use at your own discretion and verify that the licenses of the original text resources match your needs.

## References

* [1] [Original Multilingual BERT](https://github.com/google-research/bert/blob/master/multilingual.md)
* [2] [Multilingual BERT on "Beto, Bentz, Becas: The Surprising Cross-Lingual Effectiveness of BERT"](https://arxiv.org/pdf/1904.09077.pdf)
* [3] [Multilingual BERT on "How Multilingual is Multilingual BERT?"](https://arxiv.org/pdf/1906.01502.pdf)
* [4] [LASER](https://arxiv.org/abs/1812.10464)
* [5] [XLM (MLM+TLM)](https://arxiv.org/pdf/1901.07291.pdf)
* [6] [UDPipe on "75 Languages, 1 Model: Parsing Universal Dependencies Universally"](https://arxiv.org/pdf/1904.02099.pdf)
* [7] [Multilingual BERT on "Sequence Tagging with Contextual and Non-Contextual Subword Representations: A Multilingual Evaluation"](https://arxiv.org/pdf/1906.01569.pdf)
* [8] [Multilingual BERT on "PAWS-X: A Cross-lingual Adversarial Dataset for Paraphrase Identification"](https://arxiv.org/abs/1908.11828)
Qdrant/all-MiniLM-L6-v2-onnx
Qdrant
"2024-07-15T15:10:38Z"
57,257
3
transformers
[ "transformers", "onnx", "bert", "feature-extraction", "sentence-similarity", "license:apache-2.0", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
"2024-01-16T08:09:23Z"
---
license: apache-2.0
pipeline_tag: sentence-similarity
---

ONNX port of [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) for text classification and similarity searches.

### Usage

Here's an example of performing inference using the model with [FastEmbed](https://github.com/qdrant/fastembed).

> Note: This model is intended to be used with Qdrant.

```py
from fastembed import TextEmbedding

documents = [
    "You should stay, study and sprint.",
    "History can only prepare us to be surprised yet again.",
]

model = TextEmbedding(model_name="sentence-transformers/all-MiniLM-L6-v2")
embeddings = list(model.embed(documents))

# [
#   array([
#     0.00611658, 0.00068912, -0.0203846, ..., -0.01751488, -0.01174267,
#     0.01463472
#   ], dtype=float32),
#   array([
#     0.00173448, -0.00329958, 0.01557874, ..., -0.01473586, 0.0281806,
#     -0.00448205
#   ], dtype=float32)
# ]
```
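Since the note above points at Qdrant, a small end-to-end sketch using qdrant-client's FastEmbed integration may be helpful; the in-memory instance, collection name, and query text are made-up values for the example.

```python
from qdrant_client import QdrantClient

client = QdrantClient(":memory:")  # throwaway in-memory instance for a quick test

documents = [
    "You should stay, study and sprint.",
    "History can only prepare us to be surprised yet again.",
]

# Select this model for the integration, then .add() embeds the documents
# with FastEmbed under the hood and upserts them into the collection.
client.set_model("sentence-transformers/all-MiniLM-L6-v2")
client.add(collection_name="demo", documents=documents)

# Query with plain text; the client embeds the query with the same model.
hits = client.query(collection_name="demo", query_text="What does history teach us?")
print(hits[0].document)
```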
Snowflake/snowflake-arctic-embed-xs
Snowflake
"2024-08-20T18:13:38Z"
57,094
28
sentence-transformers
[ "sentence-transformers", "onnx", "safetensors", "bert", "feature-extraction", "sentence-similarity", "mteb", "arctic", "snowflake-arctic-embed", "transformers.js", "arxiv:2407.18887", "arxiv:2405.05374", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
"2024-04-12T13:54:17Z"
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb - arctic - snowflake-arctic-embed - transformers.js model-index: - name: snowflake-snowflake-arctic-embed-xs results: - task: type: Classification dataset: type: mteb/amazon_counterfactual name: MTEB AmazonCounterfactualClassification (en) config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 65.08955223880598 - type: ap value: 28.514291209445364 - type: f1 value: 59.2604580112738 - task: type: Classification dataset: type: mteb/amazon_polarity name: MTEB AmazonPolarityClassification config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 70.035375 - type: ap value: 64.29444264250405 - type: f1 value: 69.78382333907138 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (en) config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 35.343999999999994 - type: f1 value: 34.69618251902858 - task: type: Retrieval dataset: type: mteb/arguana name: MTEB ArguAna config: default split: test revision: c22ab2a51041ffd869aaddef7af8d8215647e41a metrics: - type: map_at_1 value: 28.592000000000002 - type: map_at_10 value: 43.597 - type: map_at_100 value: 44.614 - type: map_at_1000 value: 44.624 - type: map_at_3 value: 38.928000000000004 - type: map_at_5 value: 41.453 - type: mrr_at_1 value: 29.232000000000003 - type: mrr_at_10 value: 43.829 - type: mrr_at_100 value: 44.852 - type: mrr_at_1000 value: 44.862 - type: mrr_at_3 value: 39.118 - type: mrr_at_5 value: 41.703 - type: ndcg_at_1 value: 28.592000000000002 - type: ndcg_at_10 value: 52.081 - type: ndcg_at_100 value: 56.37 - type: ndcg_at_1000 value: 56.598000000000006 - type: ndcg_at_3 value: 42.42 - type: ndcg_at_5 value: 46.965 - type: precision_at_1 value: 28.592000000000002 - type: precision_at_10 value: 7.922999999999999 - type: precision_at_100 value: 0.979 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 17.52 - type: precision_at_5 value: 12.717 - type: recall_at_1 value: 28.592000000000002 - type: recall_at_10 value: 79.232 - type: recall_at_100 value: 97.866 - type: recall_at_1000 value: 99.57300000000001 - type: recall_at_3 value: 52.559999999999995 - type: recall_at_5 value: 63.585 - task: type: Clustering dataset: type: mteb/arxiv-clustering-p2p name: MTEB ArxivClusteringP2P config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 43.50220588953974 - task: type: Clustering dataset: type: mteb/arxiv-clustering-s2s name: MTEB ArxivClusteringS2S config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 32.08725826118282 - task: type: Reranking dataset: type: mteb/askubuntudupquestions-reranking name: MTEB AskUbuntuDupQuestions config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 60.25381587694928 - type: mrr value: 73.79776194873148 - task: type: STS dataset: type: mteb/biosses-sts name: MTEB BIOSSES config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 85.47489332445278 - type: cos_sim_spearman value: 84.05432487336698 - type: euclidean_pearson value: 84.5108222177219 - type: euclidean_spearman value: 84.05432487336698 - type: manhattan_pearson value: 84.20440618321464 
- type: manhattan_spearman value: 83.9290208134097 - task: type: Classification dataset: type: mteb/banking77 name: MTEB Banking77Classification config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 76.37337662337663 - type: f1 value: 75.33296834885043 - task: type: Clustering dataset: type: jinaai/big-patent-clustering name: MTEB BigPatentClustering config: default split: test revision: 62d5330920bca426ce9d3c76ea914f15fc83e891 metrics: - type: v_measure value: 21.31174373264835 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-p2p name: MTEB BiorxivClusteringP2P config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 34.481973521597844 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-s2s name: MTEB BiorxivClusteringS2S config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 26.14094256567341 - task: type: Retrieval dataset: type: mteb/cqadupstack-android name: MTEB CQADupstackAndroidRetrieval config: default split: test revision: f46a197baaae43b4f621051089b82a364682dfeb metrics: - type: map_at_1 value: 32.527 - type: map_at_10 value: 43.699 - type: map_at_100 value: 45.03 - type: map_at_1000 value: 45.157000000000004 - type: map_at_3 value: 39.943 - type: map_at_5 value: 42.324 - type: mrr_at_1 value: 39.771 - type: mrr_at_10 value: 49.277 - type: mrr_at_100 value: 49.956 - type: mrr_at_1000 value: 50.005 - type: mrr_at_3 value: 46.304 - type: mrr_at_5 value: 48.493 - type: ndcg_at_1 value: 39.771 - type: ndcg_at_10 value: 49.957 - type: ndcg_at_100 value: 54.678000000000004 - type: ndcg_at_1000 value: 56.751 - type: ndcg_at_3 value: 44.608 - type: ndcg_at_5 value: 47.687000000000005 - type: precision_at_1 value: 39.771 - type: precision_at_10 value: 9.557 - type: precision_at_100 value: 1.5010000000000001 - type: precision_at_1000 value: 0.194 - type: precision_at_3 value: 21.173000000000002 - type: precision_at_5 value: 15.794 - type: recall_at_1 value: 32.527 - type: recall_at_10 value: 61.791 - type: recall_at_100 value: 81.49300000000001 - type: recall_at_1000 value: 95.014 - type: recall_at_3 value: 46.605000000000004 - type: recall_at_5 value: 54.83 - task: type: Retrieval dataset: type: mteb/cqadupstack-english name: MTEB CQADupstackEnglishRetrieval config: default split: test revision: ad9991cb51e31e31e430383c75ffb2885547b5f0 metrics: - type: map_at_1 value: 29.424 - type: map_at_10 value: 38.667 - type: map_at_100 value: 39.771 - type: map_at_1000 value: 39.899 - type: map_at_3 value: 35.91 - type: map_at_5 value: 37.45 - type: mrr_at_1 value: 36.687999999999995 - type: mrr_at_10 value: 44.673 - type: mrr_at_100 value: 45.289 - type: mrr_at_1000 value: 45.338 - type: mrr_at_3 value: 42.601 - type: mrr_at_5 value: 43.875 - type: ndcg_at_1 value: 36.687999999999995 - type: ndcg_at_10 value: 44.013000000000005 - type: ndcg_at_100 value: 48.13 - type: ndcg_at_1000 value: 50.294000000000004 - type: ndcg_at_3 value: 40.056999999999995 - type: ndcg_at_5 value: 41.902 - type: precision_at_1 value: 36.687999999999995 - type: precision_at_10 value: 8.158999999999999 - type: precision_at_100 value: 1.321 - type: precision_at_1000 value: 0.179 - type: precision_at_3 value: 19.045 - type: precision_at_5 value: 13.427 - type: recall_at_1 value: 29.424 - type: recall_at_10 value: 53.08500000000001 - type: recall_at_100 value: 70.679 - type: recall_at_1000 value: 84.66 - type: recall_at_3 value: 
41.399 - type: recall_at_5 value: 46.632 - task: type: Retrieval dataset: type: mteb/cqadupstack-gaming name: MTEB CQADupstackGamingRetrieval config: default split: test revision: 4885aa143210c98657558c04aaf3dc47cfb54340 metrics: - type: map_at_1 value: 39.747 - type: map_at_10 value: 51.452 - type: map_at_100 value: 52.384 - type: map_at_1000 value: 52.437 - type: map_at_3 value: 48.213 - type: map_at_5 value: 50.195 - type: mrr_at_1 value: 45.391999999999996 - type: mrr_at_10 value: 54.928 - type: mrr_at_100 value: 55.532000000000004 - type: mrr_at_1000 value: 55.565 - type: mrr_at_3 value: 52.456 - type: mrr_at_5 value: 54.054 - type: ndcg_at_1 value: 45.391999999999996 - type: ndcg_at_10 value: 57.055 - type: ndcg_at_100 value: 60.751999999999995 - type: ndcg_at_1000 value: 61.864 - type: ndcg_at_3 value: 51.662 - type: ndcg_at_5 value: 54.613 - type: precision_at_1 value: 45.391999999999996 - type: precision_at_10 value: 9.103 - type: precision_at_100 value: 1.1780000000000002 - type: precision_at_1000 value: 0.132 - type: precision_at_3 value: 22.717000000000002 - type: precision_at_5 value: 15.812000000000001 - type: recall_at_1 value: 39.747 - type: recall_at_10 value: 70.10499999999999 - type: recall_at_100 value: 86.23100000000001 - type: recall_at_1000 value: 94.025 - type: recall_at_3 value: 55.899 - type: recall_at_5 value: 63.05500000000001 - task: type: Retrieval dataset: type: mteb/cqadupstack-gis name: MTEB CQADupstackGisRetrieval config: default split: test revision: 5003b3064772da1887988e05400cf3806fe491f2 metrics: - type: map_at_1 value: 27.168999999999997 - type: map_at_10 value: 34.975 - type: map_at_100 value: 35.94 - type: map_at_1000 value: 36.021 - type: map_at_3 value: 32.35 - type: map_at_5 value: 33.831 - type: mrr_at_1 value: 28.701 - type: mrr_at_10 value: 36.698 - type: mrr_at_100 value: 37.546 - type: mrr_at_1000 value: 37.613 - type: mrr_at_3 value: 34.256 - type: mrr_at_5 value: 35.685 - type: ndcg_at_1 value: 28.701 - type: ndcg_at_10 value: 39.639 - type: ndcg_at_100 value: 44.389 - type: ndcg_at_1000 value: 46.46 - type: ndcg_at_3 value: 34.52 - type: ndcg_at_5 value: 37.076 - type: precision_at_1 value: 28.701 - type: precision_at_10 value: 5.955 - type: precision_at_100 value: 0.8880000000000001 - type: precision_at_1000 value: 0.109 - type: precision_at_3 value: 14.274999999999999 - type: precision_at_5 value: 10.011000000000001 - type: recall_at_1 value: 27.168999999999997 - type: recall_at_10 value: 52.347 - type: recall_at_100 value: 74.1 - type: recall_at_1000 value: 89.739 - type: recall_at_3 value: 38.567 - type: recall_at_5 value: 44.767 - task: type: Retrieval dataset: type: mteb/cqadupstack-mathematica name: MTEB CQADupstackMathematicaRetrieval config: default split: test revision: 90fceea13679c63fe563ded68f3b6f06e50061de metrics: - type: map_at_1 value: 15.872 - type: map_at_10 value: 23.153000000000002 - type: map_at_100 value: 24.311 - type: map_at_1000 value: 24.432000000000002 - type: map_at_3 value: 20.707 - type: map_at_5 value: 21.921 - type: mrr_at_1 value: 19.776 - type: mrr_at_10 value: 27.755999999999997 - type: mrr_at_100 value: 28.709 - type: mrr_at_1000 value: 28.778 - type: mrr_at_3 value: 25.186999999999998 - type: mrr_at_5 value: 26.43 - type: ndcg_at_1 value: 19.776 - type: ndcg_at_10 value: 28.288999999999998 - type: ndcg_at_100 value: 34.011 - type: ndcg_at_1000 value: 36.916 - type: ndcg_at_3 value: 23.551 - type: ndcg_at_5 value: 25.429000000000002 - type: precision_at_1 value: 19.776 - type: precision_at_10 value: 
5.311 - type: precision_at_100 value: 0.9440000000000001 - type: precision_at_1000 value: 0.132 - type: precision_at_3 value: 11.360000000000001 - type: precision_at_5 value: 8.209 - type: recall_at_1 value: 15.872 - type: recall_at_10 value: 39.726 - type: recall_at_100 value: 65.035 - type: recall_at_1000 value: 85.846 - type: recall_at_3 value: 26.432 - type: recall_at_5 value: 31.22 - task: type: Retrieval dataset: type: mteb/cqadupstack-physics name: MTEB CQADupstackPhysicsRetrieval config: default split: test revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4 metrics: - type: map_at_1 value: 28.126 - type: map_at_10 value: 37.537 - type: map_at_100 value: 38.807 - type: map_at_1000 value: 38.923 - type: map_at_3 value: 34.65 - type: map_at_5 value: 36.248000000000005 - type: mrr_at_1 value: 34.649 - type: mrr_at_10 value: 42.893 - type: mrr_at_100 value: 43.721 - type: mrr_at_1000 value: 43.775999999999996 - type: mrr_at_3 value: 40.488 - type: mrr_at_5 value: 41.729 - type: ndcg_at_1 value: 34.649 - type: ndcg_at_10 value: 43.072 - type: ndcg_at_100 value: 48.464 - type: ndcg_at_1000 value: 50.724000000000004 - type: ndcg_at_3 value: 38.506 - type: ndcg_at_5 value: 40.522000000000006 - type: precision_at_1 value: 34.649 - type: precision_at_10 value: 7.68 - type: precision_at_100 value: 1.214 - type: precision_at_1000 value: 0.16 - type: precision_at_3 value: 18.029999999999998 - type: precision_at_5 value: 12.666 - type: recall_at_1 value: 28.126 - type: recall_at_10 value: 54.396 - type: recall_at_100 value: 76.988 - type: recall_at_1000 value: 91.85799999999999 - type: recall_at_3 value: 41.169 - type: recall_at_5 value: 46.658 - task: type: Retrieval dataset: type: mteb/cqadupstack-programmers name: MTEB CQADupstackProgrammersRetrieval config: default split: test revision: 6184bc1440d2dbc7612be22b50686b8826d22b32 metrics: - type: map_at_1 value: 26.68 - type: map_at_10 value: 35.702 - type: map_at_100 value: 36.864999999999995 - type: map_at_1000 value: 36.977 - type: map_at_3 value: 32.828 - type: map_at_5 value: 34.481 - type: mrr_at_1 value: 32.991 - type: mrr_at_10 value: 40.993 - type: mrr_at_100 value: 41.827 - type: mrr_at_1000 value: 41.887 - type: mrr_at_3 value: 38.623000000000005 - type: mrr_at_5 value: 40.021 - type: ndcg_at_1 value: 32.991 - type: ndcg_at_10 value: 41.036 - type: ndcg_at_100 value: 46.294000000000004 - type: ndcg_at_1000 value: 48.644 - type: ndcg_at_3 value: 36.419000000000004 - type: ndcg_at_5 value: 38.618 - type: precision_at_1 value: 32.991 - type: precision_at_10 value: 7.385999999999999 - type: precision_at_100 value: 1.176 - type: precision_at_1000 value: 0.151 - type: precision_at_3 value: 17.122999999999998 - type: precision_at_5 value: 12.215 - type: recall_at_1 value: 26.68 - type: recall_at_10 value: 51.644 - type: recall_at_100 value: 74.55000000000001 - type: recall_at_1000 value: 90.825 - type: recall_at_3 value: 38.579 - type: recall_at_5 value: 44.512 - task: type: Retrieval dataset: type: mteb/cqadupstack name: MTEB CQADupstackRetrieval config: default split: test revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4 metrics: - type: map_at_1 value: 26.30825 - type: map_at_10 value: 34.97866666666666 - type: map_at_100 value: 36.109249999999996 - type: map_at_1000 value: 36.22508333333333 - type: map_at_3 value: 32.239083333333326 - type: map_at_5 value: 33.75933333333334 - type: mrr_at_1 value: 31.05308333333333 - type: mrr_at_10 value: 39.099833333333336 - type: mrr_at_100 value: 39.92008333333334 - type: mrr_at_1000 value: 
39.980000000000004 - type: mrr_at_3 value: 36.75958333333333 - type: mrr_at_5 value: 38.086416666666665 - type: ndcg_at_1 value: 31.05308333333333 - type: ndcg_at_10 value: 40.11558333333334 - type: ndcg_at_100 value: 45.05966666666667 - type: ndcg_at_1000 value: 47.36516666666667 - type: ndcg_at_3 value: 35.490833333333335 - type: ndcg_at_5 value: 37.64541666666666 - type: precision_at_1 value: 31.05308333333333 - type: precision_at_10 value: 6.968416666666666 - type: precision_at_100 value: 1.1156666666666666 - type: precision_at_1000 value: 0.14950000000000002 - type: precision_at_3 value: 16.123 - type: precision_at_5 value: 11.451166666666666 - type: recall_at_1 value: 26.30825 - type: recall_at_10 value: 51.19283333333333 - type: recall_at_100 value: 73.0285 - type: recall_at_1000 value: 89.11133333333333 - type: recall_at_3 value: 38.26208333333333 - type: recall_at_5 value: 43.855916666666666 - task: type: Retrieval dataset: type: mteb/cqadupstack-stats name: MTEB CQADupstackStatsRetrieval config: default split: test revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a metrics: - type: map_at_1 value: 23.363999999999997 - type: map_at_10 value: 30.606 - type: map_at_100 value: 31.491999999999997 - type: map_at_1000 value: 31.578 - type: map_at_3 value: 28.610000000000003 - type: map_at_5 value: 29.602 - type: mrr_at_1 value: 26.38 - type: mrr_at_10 value: 33.472 - type: mrr_at_100 value: 34.299 - type: mrr_at_1000 value: 34.361999999999995 - type: mrr_at_3 value: 31.696999999999996 - type: mrr_at_5 value: 32.503 - type: ndcg_at_1 value: 26.38 - type: ndcg_at_10 value: 34.772999999999996 - type: ndcg_at_100 value: 39.334 - type: ndcg_at_1000 value: 41.676 - type: ndcg_at_3 value: 31.097 - type: ndcg_at_5 value: 32.561 - type: precision_at_1 value: 26.38 - type: precision_at_10 value: 5.475 - type: precision_at_100 value: 0.84 - type: precision_at_1000 value: 0.11100000000000002 - type: precision_at_3 value: 13.395000000000001 - type: precision_at_5 value: 9.11 - type: recall_at_1 value: 23.363999999999997 - type: recall_at_10 value: 44.656 - type: recall_at_100 value: 65.77199999999999 - type: recall_at_1000 value: 83.462 - type: recall_at_3 value: 34.213 - type: recall_at_5 value: 38.091 - task: type: Retrieval dataset: type: mteb/cqadupstack-tex name: MTEB CQADupstackTexRetrieval config: default split: test revision: 46989137a86843e03a6195de44b09deda022eec7 metrics: - type: map_at_1 value: 17.971999999999998 - type: map_at_10 value: 24.913 - type: map_at_100 value: 25.916 - type: map_at_1000 value: 26.049 - type: map_at_3 value: 22.569 - type: map_at_5 value: 23.858999999999998 - type: mrr_at_1 value: 21.748 - type: mrr_at_10 value: 28.711 - type: mrr_at_100 value: 29.535 - type: mrr_at_1000 value: 29.621 - type: mrr_at_3 value: 26.484999999999996 - type: mrr_at_5 value: 27.701999999999998 - type: ndcg_at_1 value: 21.748 - type: ndcg_at_10 value: 29.412 - type: ndcg_at_100 value: 34.204 - type: ndcg_at_1000 value: 37.358000000000004 - type: ndcg_at_3 value: 25.202 - type: ndcg_at_5 value: 27.128000000000004 - type: precision_at_1 value: 21.748 - type: precision_at_10 value: 5.279 - type: precision_at_100 value: 0.902 - type: precision_at_1000 value: 0.135 - type: precision_at_3 value: 11.551 - type: precision_at_5 value: 8.437999999999999 - type: recall_at_1 value: 17.971999999999998 - type: recall_at_10 value: 39.186 - type: recall_at_100 value: 60.785999999999994 - type: recall_at_1000 value: 83.372 - type: recall_at_3 value: 27.584999999999997 - type: recall_at_5 value: 32.448 - 
task: type: Retrieval dataset: type: mteb/cqadupstack-unix name: MTEB CQADupstackUnixRetrieval config: default split: test revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53 metrics: - type: map_at_1 value: 26.684 - type: map_at_10 value: 35.188 - type: map_at_100 value: 36.379 - type: map_at_1000 value: 36.481 - type: map_at_3 value: 32.401 - type: map_at_5 value: 34.132 - type: mrr_at_1 value: 31.063000000000002 - type: mrr_at_10 value: 39.104 - type: mrr_at_100 value: 40.062999999999995 - type: mrr_at_1000 value: 40.119 - type: mrr_at_3 value: 36.692 - type: mrr_at_5 value: 38.161 - type: ndcg_at_1 value: 31.063000000000002 - type: ndcg_at_10 value: 40.096 - type: ndcg_at_100 value: 45.616 - type: ndcg_at_1000 value: 47.869 - type: ndcg_at_3 value: 35.256 - type: ndcg_at_5 value: 37.826 - type: precision_at_1 value: 31.063000000000002 - type: precision_at_10 value: 6.622999999999999 - type: precision_at_100 value: 1.046 - type: precision_at_1000 value: 0.135 - type: precision_at_3 value: 15.641 - type: precision_at_5 value: 11.231 - type: recall_at_1 value: 26.684 - type: recall_at_10 value: 51.092999999999996 - type: recall_at_100 value: 75.099 - type: recall_at_1000 value: 90.644 - type: recall_at_3 value: 38.063 - type: recall_at_5 value: 44.518 - task: type: Retrieval dataset: type: mteb/cqadupstack-webmasters name: MTEB CQADupstackWebmastersRetrieval config: default split: test revision: 160c094312a0e1facb97e55eeddb698c0abe3571 metrics: - type: map_at_1 value: 26.249 - type: map_at_10 value: 34.694 - type: map_at_100 value: 36.208 - type: map_at_1000 value: 36.443 - type: map_at_3 value: 31.868000000000002 - type: map_at_5 value: 33.018 - type: mrr_at_1 value: 31.818 - type: mrr_at_10 value: 39.416000000000004 - type: mrr_at_100 value: 40.327 - type: mrr_at_1000 value: 40.388000000000005 - type: mrr_at_3 value: 37.120999999999995 - type: mrr_at_5 value: 38.07 - type: ndcg_at_1 value: 31.818 - type: ndcg_at_10 value: 40.405 - type: ndcg_at_100 value: 45.816 - type: ndcg_at_1000 value: 48.403 - type: ndcg_at_3 value: 35.823 - type: ndcg_at_5 value: 37.191 - type: precision_at_1 value: 31.818 - type: precision_at_10 value: 7.806 - type: precision_at_100 value: 1.518 - type: precision_at_1000 value: 0.241 - type: precision_at_3 value: 16.535 - type: precision_at_5 value: 11.738999999999999 - type: recall_at_1 value: 26.249 - type: recall_at_10 value: 50.928 - type: recall_at_100 value: 75.271 - type: recall_at_1000 value: 91.535 - type: recall_at_3 value: 37.322 - type: recall_at_5 value: 41.318 - task: type: Retrieval dataset: type: mteb/cqadupstack-wordpress name: MTEB CQADupstackWordpressRetrieval config: default split: test revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4 metrics: - type: map_at_1 value: 21.884999999999998 - type: map_at_10 value: 29.158 - type: map_at_100 value: 30.208000000000002 - type: map_at_1000 value: 30.304 - type: map_at_3 value: 26.82 - type: map_at_5 value: 28.051 - type: mrr_at_1 value: 23.66 - type: mrr_at_10 value: 31.277 - type: mrr_at_100 value: 32.237 - type: mrr_at_1000 value: 32.308 - type: mrr_at_3 value: 29.205 - type: mrr_at_5 value: 30.314000000000004 - type: ndcg_at_1 value: 23.66 - type: ndcg_at_10 value: 33.64 - type: ndcg_at_100 value: 39.028 - type: ndcg_at_1000 value: 41.423 - type: ndcg_at_3 value: 29.189 - type: ndcg_at_5 value: 31.191999999999997 - type: precision_at_1 value: 23.66 - type: precision_at_10 value: 5.287 - type: precision_at_100 value: 0.86 - type: precision_at_1000 value: 0.11499999999999999 - type: precision_at_3 
value: 12.631 - type: precision_at_5 value: 8.762 - type: recall_at_1 value: 21.884999999999998 - type: recall_at_10 value: 45.357 - type: recall_at_100 value: 70.338 - type: recall_at_1000 value: 88.356 - type: recall_at_3 value: 33.312000000000005 - type: recall_at_5 value: 38.222 - task: type: Retrieval dataset: type: mteb/climate-fever name: MTEB ClimateFEVER config: default split: test revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380 metrics: - type: map_at_1 value: 13.058 - type: map_at_10 value: 21.549 - type: map_at_100 value: 23.287 - type: map_at_1000 value: 23.444000000000003 - type: map_at_3 value: 18.18 - type: map_at_5 value: 19.886 - type: mrr_at_1 value: 28.73 - type: mrr_at_10 value: 40.014 - type: mrr_at_100 value: 40.827000000000005 - type: mrr_at_1000 value: 40.866 - type: mrr_at_3 value: 36.602000000000004 - type: mrr_at_5 value: 38.702 - type: ndcg_at_1 value: 28.73 - type: ndcg_at_10 value: 29.881 - type: ndcg_at_100 value: 36.662 - type: ndcg_at_1000 value: 39.641999999999996 - type: ndcg_at_3 value: 24.661 - type: ndcg_at_5 value: 26.548 - type: precision_at_1 value: 28.73 - type: precision_at_10 value: 9.094 - type: precision_at_100 value: 1.6480000000000001 - type: precision_at_1000 value: 0.22100000000000003 - type: precision_at_3 value: 17.98 - type: precision_at_5 value: 13.811000000000002 - type: recall_at_1 value: 13.058 - type: recall_at_10 value: 35.458 - type: recall_at_100 value: 58.719 - type: recall_at_1000 value: 75.495 - type: recall_at_3 value: 22.607 - type: recall_at_5 value: 28.067999999999998 - task: type: Retrieval dataset: type: mteb/dbpedia name: MTEB DBPedia config: default split: test revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659 metrics: - type: map_at_1 value: 8.811 - type: map_at_10 value: 19.134999999999998 - type: map_at_100 value: 26.905 - type: map_at_1000 value: 28.503 - type: map_at_3 value: 13.863 - type: map_at_5 value: 16.062 - type: mrr_at_1 value: 67 - type: mrr_at_10 value: 74.607 - type: mrr_at_100 value: 74.941 - type: mrr_at_1000 value: 74.954 - type: mrr_at_3 value: 73.042 - type: mrr_at_5 value: 73.992 - type: ndcg_at_1 value: 52.87500000000001 - type: ndcg_at_10 value: 40.199 - type: ndcg_at_100 value: 44.901 - type: ndcg_at_1000 value: 52.239999999999995 - type: ndcg_at_3 value: 44.983000000000004 - type: ndcg_at_5 value: 42.137 - type: precision_at_1 value: 67 - type: precision_at_10 value: 31.8 - type: precision_at_100 value: 10.315000000000001 - type: precision_at_1000 value: 2.0420000000000003 - type: precision_at_3 value: 48.667 - type: precision_at_5 value: 40.9 - type: recall_at_1 value: 8.811 - type: recall_at_10 value: 24.503 - type: recall_at_100 value: 51.288999999999994 - type: recall_at_1000 value: 74.827 - type: recall_at_3 value: 15.254999999999999 - type: recall_at_5 value: 18.698999999999998 - task: type: Classification dataset: type: mteb/emotion name: MTEB EmotionClassification config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 41.839999999999996 - type: f1 value: 37.78718146306379 - task: type: Retrieval dataset: type: mteb/fever name: MTEB FEVER config: default split: test revision: bea83ef9e8fb933d90a2f1d5515737465d613e12 metrics: - type: map_at_1 value: 68.47999999999999 - type: map_at_10 value: 78.782 - type: map_at_100 value: 79.021 - type: map_at_1000 value: 79.035 - type: map_at_3 value: 77.389 - type: map_at_5 value: 78.347 - type: mrr_at_1 value: 73.837 - type: mrr_at_10 value: 83.41499999999999 - type: mrr_at_100 value: 
83.53399999999999 - type: mrr_at_1000 value: 83.535 - type: mrr_at_3 value: 82.32300000000001 - type: mrr_at_5 value: 83.13000000000001 - type: ndcg_at_1 value: 73.837 - type: ndcg_at_10 value: 83.404 - type: ndcg_at_100 value: 84.287 - type: ndcg_at_1000 value: 84.52199999999999 - type: ndcg_at_3 value: 81.072 - type: ndcg_at_5 value: 82.537 - type: precision_at_1 value: 73.837 - type: precision_at_10 value: 10.254000000000001 - type: precision_at_100 value: 1.088 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 31.538 - type: precision_at_5 value: 19.811 - type: recall_at_1 value: 68.47999999999999 - type: recall_at_10 value: 92.98100000000001 - type: recall_at_100 value: 96.50800000000001 - type: recall_at_1000 value: 97.925 - type: recall_at_3 value: 86.764 - type: recall_at_5 value: 90.39 - task: type: Retrieval dataset: type: mteb/fiqa name: MTEB FiQA2018 config: default split: test revision: 27a168819829fe9bcd655c2df245fb19452e8e06 metrics: - type: map_at_1 value: 16.786 - type: map_at_10 value: 26.97 - type: map_at_100 value: 28.488000000000003 - type: map_at_1000 value: 28.665000000000003 - type: map_at_3 value: 23.3 - type: map_at_5 value: 25.249 - type: mrr_at_1 value: 33.025 - type: mrr_at_10 value: 41.86 - type: mrr_at_100 value: 42.673 - type: mrr_at_1000 value: 42.714 - type: mrr_at_3 value: 39.403 - type: mrr_at_5 value: 40.723 - type: ndcg_at_1 value: 33.025 - type: ndcg_at_10 value: 34.522999999999996 - type: ndcg_at_100 value: 40.831 - type: ndcg_at_1000 value: 44.01 - type: ndcg_at_3 value: 30.698999999999998 - type: ndcg_at_5 value: 31.832 - type: precision_at_1 value: 33.025 - type: precision_at_10 value: 9.583 - type: precision_at_100 value: 1.619 - type: precision_at_1000 value: 0.22100000000000003 - type: precision_at_3 value: 20.216 - type: precision_at_5 value: 15.031 - type: recall_at_1 value: 16.786 - type: recall_at_10 value: 41.969 - type: recall_at_100 value: 66.353 - type: recall_at_1000 value: 85.299 - type: recall_at_3 value: 28.111000000000004 - type: recall_at_5 value: 33.645 - task: type: Retrieval dataset: type: mteb/hotpotqa name: MTEB HotpotQA config: default split: test revision: ab518f4d6fcca38d87c25209f94beba119d02014 metrics: - type: map_at_1 value: 37.346000000000004 - type: map_at_10 value: 56.184999999999995 - type: map_at_100 value: 57.062000000000005 - type: map_at_1000 value: 57.126999999999995 - type: map_at_3 value: 52.815 - type: map_at_5 value: 54.893 - type: mrr_at_1 value: 74.693 - type: mrr_at_10 value: 81.128 - type: mrr_at_100 value: 81.356 - type: mrr_at_1000 value: 81.363 - type: mrr_at_3 value: 80.05600000000001 - type: mrr_at_5 value: 80.74 - type: ndcg_at_1 value: 74.693 - type: ndcg_at_10 value: 65.249 - type: ndcg_at_100 value: 68.357 - type: ndcg_at_1000 value: 69.64200000000001 - type: ndcg_at_3 value: 60.377 - type: ndcg_at_5 value: 63.044 - type: precision_at_1 value: 74.693 - type: precision_at_10 value: 13.630999999999998 - type: precision_at_100 value: 1.606 - type: precision_at_1000 value: 0.178 - type: precision_at_3 value: 38.222 - type: precision_at_5 value: 25.040000000000003 - type: recall_at_1 value: 37.346000000000004 - type: recall_at_10 value: 68.157 - type: recall_at_100 value: 80.297 - type: recall_at_1000 value: 88.832 - type: recall_at_3 value: 57.333 - type: recall_at_5 value: 62.6 - task: type: Classification dataset: type: mteb/imdb name: MTEB ImdbClassification config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy 
value: 62.80240000000001 - type: ap value: 58.22949464075975 - type: f1 value: 62.55694937343487 - task: type: Retrieval dataset: type: mteb/msmarco name: MTEB MSMARCO config: default split: dev revision: c5a29a104738b98a9e76336939199e264163d4a0 metrics: - type: map_at_1 value: 20.918 - type: map_at_10 value: 32.732 - type: map_at_100 value: 33.922000000000004 - type: map_at_1000 value: 33.976 - type: map_at_3 value: 29.051 - type: map_at_5 value: 31.101 - type: mrr_at_1 value: 21.418 - type: mrr_at_10 value: 33.284000000000006 - type: mrr_at_100 value: 34.426 - type: mrr_at_1000 value: 34.473 - type: mrr_at_3 value: 29.644 - type: mrr_at_5 value: 31.691000000000003 - type: ndcg_at_1 value: 21.418 - type: ndcg_at_10 value: 39.427 - type: ndcg_at_100 value: 45.190999999999995 - type: ndcg_at_1000 value: 46.544000000000004 - type: ndcg_at_3 value: 31.885 - type: ndcg_at_5 value: 35.555 - type: precision_at_1 value: 21.418 - type: precision_at_10 value: 6.254999999999999 - type: precision_at_100 value: 0.915 - type: precision_at_1000 value: 0.10300000000000001 - type: precision_at_3 value: 13.591000000000001 - type: precision_at_5 value: 10.011000000000001 - type: recall_at_1 value: 20.918 - type: recall_at_10 value: 60.074000000000005 - type: recall_at_100 value: 86.726 - type: recall_at_1000 value: 97.116 - type: recall_at_3 value: 39.506 - type: recall_at_5 value: 48.319 - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (en) config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 90.79799361605106 - type: f1 value: 90.0757957511057 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (en) config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 58.00501595987233 - type: f1 value: 39.85731569133947 - task: type: Classification dataset: type: masakhane/masakhanews name: MTEB MasakhaNEWSClassification (eng) config: eng split: test revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60 metrics: - type: accuracy value: 77.10970464135022 - type: f1 value: 76.12037616356896 - task: type: Clustering dataset: type: masakhane/masakhanews name: MTEB MasakhaNEWSClusteringP2P (eng) config: eng split: test revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60 metrics: - type: v_measure value: 69.81323966287493 - task: type: Clustering dataset: type: masakhane/masakhanews name: MTEB MasakhaNEWSClusteringS2S (eng) config: eng split: test revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60 metrics: - type: v_measure value: 33.112774215788455 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (en) config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 63.51042367182246 - type: f1 value: 60.99310361578824 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (en) config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 71.0053799596503 - type: f1 value: 69.7794673003686 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-p2p name: MTEB MedrxivClusteringP2P config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 30.56899174856954 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-s2s name: MTEB MedrxivClusteringS2S config: default 
split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 26.21848014733929 - task: type: Reranking dataset: type: mteb/mind_small name: MTEB MindSmallReranking config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 30.256308756916646 - type: mrr value: 31.123872086825656 - task: type: Retrieval dataset: type: mteb/nfcorpus name: MTEB NFCorpus config: default split: test revision: ec0fa4fe99da2ff19ca1214b7966684033a58814 metrics: - type: map_at_1 value: 5.07 - type: map_at_10 value: 11.286999999999999 - type: map_at_100 value: 13.630999999999998 - type: map_at_1000 value: 14.844 - type: map_at_3 value: 8.395 - type: map_at_5 value: 9.721 - type: mrr_at_1 value: 41.486000000000004 - type: mrr_at_10 value: 51.041000000000004 - type: mrr_at_100 value: 51.661 - type: mrr_at_1000 value: 51.7 - type: mrr_at_3 value: 49.226 - type: mrr_at_5 value: 50.526 - type: ndcg_at_1 value: 39.783 - type: ndcg_at_10 value: 30.885 - type: ndcg_at_100 value: 27.459 - type: ndcg_at_1000 value: 35.988 - type: ndcg_at_3 value: 36.705 - type: ndcg_at_5 value: 34.156 - type: precision_at_1 value: 41.486000000000004 - type: precision_at_10 value: 22.415 - type: precision_at_100 value: 6.819999999999999 - type: precision_at_1000 value: 1.8980000000000001 - type: precision_at_3 value: 34.572 - type: precision_at_5 value: 29.287999999999997 - type: recall_at_1 value: 5.07 - type: recall_at_10 value: 14.576 - type: recall_at_100 value: 27.112000000000002 - type: recall_at_1000 value: 57.995 - type: recall_at_3 value: 9.242 - type: recall_at_5 value: 11.668000000000001 - task: type: Retrieval dataset: type: mteb/nq name: MTEB NQ config: default split: test revision: b774495ed302d8c44a3a7ea25c90dbce03968f31 metrics: - type: map_at_1 value: 32.263999999999996 - type: map_at_10 value: 47.219 - type: map_at_100 value: 48.209999999999994 - type: map_at_1000 value: 48.24 - type: map_at_3 value: 42.905 - type: map_at_5 value: 45.501000000000005 - type: mrr_at_1 value: 36.153 - type: mrr_at_10 value: 49.636 - type: mrr_at_100 value: 50.357 - type: mrr_at_1000 value: 50.378 - type: mrr_at_3 value: 46.094 - type: mrr_at_5 value: 48.233 - type: ndcg_at_1 value: 36.124 - type: ndcg_at_10 value: 54.764 - type: ndcg_at_100 value: 58.867999999999995 - type: ndcg_at_1000 value: 59.548 - type: ndcg_at_3 value: 46.717999999999996 - type: ndcg_at_5 value: 50.981 - type: precision_at_1 value: 36.124 - type: precision_at_10 value: 8.931000000000001 - type: precision_at_100 value: 1.126 - type: precision_at_1000 value: 0.11900000000000001 - type: precision_at_3 value: 21.051000000000002 - type: precision_at_5 value: 15.104000000000001 - type: recall_at_1 value: 32.263999999999996 - type: recall_at_10 value: 75.39099999999999 - type: recall_at_100 value: 93.038 - type: recall_at_1000 value: 98.006 - type: recall_at_3 value: 54.562999999999995 - type: recall_at_5 value: 64.352 - task: type: Classification dataset: type: ag_news name: MTEB NewsClassification config: default split: test revision: eb185aade064a813bc0b7f42de02595523103ca4 metrics: - type: accuracy value: 77.75 - type: f1 value: 77.504243291547 - task: type: PairClassification dataset: type: GEM/opusparcus name: MTEB OpusparcusPC (en) config: en split: test revision: 9e9b1f8ef51616073f47f306f7f47dd91663f86a metrics: - type: cos_sim_accuracy value: 99.89816700610999 - type: cos_sim_ap value: 100 - type: cos_sim_f1 value: 99.9490575649516 - type: cos_sim_precision value: 100 - type: 
cos_sim_recall value: 99.89816700610999 - type: dot_accuracy value: 99.89816700610999 - type: dot_ap value: 100 - type: dot_f1 value: 99.9490575649516 - type: dot_precision value: 100 - type: dot_recall value: 99.89816700610999 - type: euclidean_accuracy value: 99.89816700610999 - type: euclidean_ap value: 100 - type: euclidean_f1 value: 99.9490575649516 - type: euclidean_precision value: 100 - type: euclidean_recall value: 99.89816700610999 - type: manhattan_accuracy value: 99.89816700610999 - type: manhattan_ap value: 100 - type: manhattan_f1 value: 99.9490575649516 - type: manhattan_precision value: 100 - type: manhattan_recall value: 99.89816700610999 - type: max_accuracy value: 99.89816700610999 - type: max_ap value: 100 - type: max_f1 value: 99.9490575649516 - task: type: PairClassification dataset: type: paws-x name: MTEB PawsX (en) config: en split: test revision: 8a04d940a42cd40658986fdd8e3da561533a3646 metrics: - type: cos_sim_accuracy value: 61.75000000000001 - type: cos_sim_ap value: 57.9482264289061 - type: cos_sim_f1 value: 62.444061962134256 - type: cos_sim_precision value: 45.3953953953954 - type: cos_sim_recall value: 100 - type: dot_accuracy value: 61.75000000000001 - type: dot_ap value: 57.94808038610475 - type: dot_f1 value: 62.444061962134256 - type: dot_precision value: 45.3953953953954 - type: dot_recall value: 100 - type: euclidean_accuracy value: 61.75000000000001 - type: euclidean_ap value: 57.94808038610475 - type: euclidean_f1 value: 62.444061962134256 - type: euclidean_precision value: 45.3953953953954 - type: euclidean_recall value: 100 - type: manhattan_accuracy value: 61.7 - type: manhattan_ap value: 57.996119308184966 - type: manhattan_f1 value: 62.46078773091669 - type: manhattan_precision value: 45.66768603465851 - type: manhattan_recall value: 98.78721058434398 - type: max_accuracy value: 61.75000000000001 - type: max_ap value: 57.996119308184966 - type: max_f1 value: 62.46078773091669 - task: type: Retrieval dataset: type: mteb/quora name: MTEB QuoraRetrieval config: default split: test revision: e4e08e0b7dbe3c8700f0daef558ff32256715259 metrics: - type: map_at_1 value: 69.001 - type: map_at_10 value: 82.573 - type: map_at_100 value: 83.226 - type: map_at_1000 value: 83.246 - type: map_at_3 value: 79.625 - type: map_at_5 value: 81.491 - type: mrr_at_1 value: 79.44 - type: mrr_at_10 value: 85.928 - type: mrr_at_100 value: 86.05199999999999 - type: mrr_at_1000 value: 86.054 - type: mrr_at_3 value: 84.847 - type: mrr_at_5 value: 85.596 - type: ndcg_at_1 value: 79.41 - type: ndcg_at_10 value: 86.568 - type: ndcg_at_100 value: 87.965 - type: ndcg_at_1000 value: 88.134 - type: ndcg_at_3 value: 83.55900000000001 - type: ndcg_at_5 value: 85.244 - type: precision_at_1 value: 79.41 - type: precision_at_10 value: 13.108 - type: precision_at_100 value: 1.509 - type: precision_at_1000 value: 0.156 - type: precision_at_3 value: 36.443 - type: precision_at_5 value: 24.03 - type: recall_at_1 value: 69.001 - type: recall_at_10 value: 94.132 - type: recall_at_100 value: 99.043 - type: recall_at_1000 value: 99.878 - type: recall_at_3 value: 85.492 - type: recall_at_5 value: 90.226 - task: type: Clustering dataset: type: mteb/reddit-clustering name: MTEB RedditClustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 48.3161352736264 - task: type: Clustering dataset: type: mteb/reddit-clustering-p2p name: MTEB RedditClusteringP2P config: default split: test revision: 385e3cb46b4cfa89021f56c4380204149d0efe33 
metrics: - type: v_measure value: 57.83784484156747 - task: type: Retrieval dataset: type: mteb/scidocs name: MTEB SCIDOCS config: default split: test revision: f8c2fcf00f625baaa80f62ec5bd9e1fff3b8ae88 metrics: - type: map_at_1 value: 4.403 - type: map_at_10 value: 10.922 - type: map_at_100 value: 12.626000000000001 - type: map_at_1000 value: 12.883 - type: map_at_3 value: 7.982 - type: map_at_5 value: 9.442 - type: mrr_at_1 value: 21.7 - type: mrr_at_10 value: 31.653 - type: mrr_at_100 value: 32.757999999999996 - type: mrr_at_1000 value: 32.824999999999996 - type: mrr_at_3 value: 28.266999999999996 - type: mrr_at_5 value: 30.127 - type: ndcg_at_1 value: 21.7 - type: ndcg_at_10 value: 18.355 - type: ndcg_at_100 value: 25.228 - type: ndcg_at_1000 value: 30.164 - type: ndcg_at_3 value: 17.549 - type: ndcg_at_5 value: 15.260000000000002 - type: precision_at_1 value: 21.7 - type: precision_at_10 value: 9.47 - type: precision_at_100 value: 1.9290000000000003 - type: precision_at_1000 value: 0.312 - type: precision_at_3 value: 16.3 - type: precision_at_5 value: 13.28 - type: recall_at_1 value: 4.403 - type: recall_at_10 value: 19.18 - type: recall_at_100 value: 39.182 - type: recall_at_1000 value: 63.378 - type: recall_at_3 value: 9.934999999999999 - type: recall_at_5 value: 13.459999999999999 - task: type: STS dataset: type: mteb/sickr-sts name: MTEB SICK-R config: default split: test revision: 20a6d6f312dd54037fe07a32d58e5e168867909d metrics: - type: cos_sim_pearson value: 76.90841073432534 - type: cos_sim_spearman value: 69.2566375434526 - type: euclidean_pearson value: 73.00183878559413 - type: euclidean_spearman value: 69.25664656235413 - type: manhattan_pearson value: 72.89594756197533 - type: manhattan_spearman value: 69.23247111043545 - task: type: STS dataset: type: mteb/sts12-sts name: MTEB STS12 config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 69.60878511794063 - type: cos_sim_spearman value: 65.89916377105551 - type: euclidean_pearson value: 66.90761876557181 - type: euclidean_spearman value: 65.89915018368384 - type: manhattan_pearson value: 66.78502575257721 - type: manhattan_spearman value: 65.79977053467938 - task: type: STS dataset: type: mteb/sts13-sts name: MTEB STS13 config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 77.2869334987418 - type: cos_sim_spearman value: 77.86961921643416 - type: euclidean_pearson value: 77.43179820479914 - type: euclidean_spearman value: 77.86961921643416 - type: manhattan_pearson value: 77.18900647348373 - type: manhattan_spearman value: 77.61209060062608 - task: type: STS dataset: type: mteb/sts14-sts name: MTEB STS14 config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 76.26453932960364 - type: cos_sim_spearman value: 72.81574657995401 - type: euclidean_pearson value: 75.0708953437423 - type: euclidean_spearman value: 72.81574657995401 - type: manhattan_pearson value: 74.88396609999512 - type: manhattan_spearman value: 72.65437562156805 - task: type: STS dataset: type: mteb/sts15-sts name: MTEB STS15 config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 82.37827653919395 - type: cos_sim_spearman value: 83.4885552472602 - type: euclidean_pearson value: 82.89377087926749 - type: euclidean_spearman value: 83.4885552472602 - type: manhattan_pearson value: 82.82440771787735 - type: 
manhattan_spearman value: 83.41449537888975 - task: type: STS dataset: type: mteb/sts16-sts name: MTEB STS16 config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 78.7995043673964 - type: cos_sim_spearman value: 80.57804447517638 - type: euclidean_pearson value: 80.03013884278195 - type: euclidean_spearman value: 80.57804447517638 - type: manhattan_pearson value: 80.13406111544424 - type: manhattan_spearman value: 80.65354602648962 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (en-en) config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 83.63565989937278 - type: cos_sim_spearman value: 84.4948593656943 - type: euclidean_pearson value: 84.68743074820951 - type: euclidean_spearman value: 84.4948593656943 - type: manhattan_pearson value: 84.43639397781811 - type: manhattan_spearman value: 84.32595552115242 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (en) config: en split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 65.06382649277246 - type: cos_sim_spearman value: 66.28447782018655 - type: euclidean_pearson value: 67.09895930908392 - type: euclidean_spearman value: 66.28447782018655 - type: manhattan_pearson value: 66.96342453888376 - type: manhattan_spearman value: 66.33876259551842 - task: type: STS dataset: type: mteb/stsbenchmark-sts name: MTEB STSBenchmark config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 78.43883428940346 - type: cos_sim_spearman value: 79.18395553127085 - type: euclidean_pearson value: 79.22986635457109 - type: euclidean_spearman value: 79.18395553127085 - type: manhattan_pearson value: 79.10921229934691 - type: manhattan_spearman value: 79.02283553930171 - task: type: STS dataset: type: PhilipMay/stsb_multi_mt name: MTEB STSBenchmarkMultilingualSTS (en) config: en split: test revision: 93d57ef91790589e3ce9c365164337a8a78b7632 metrics: - type: cos_sim_pearson value: 78.43883433444418 - type: cos_sim_spearman value: 79.18395553127085 - type: euclidean_pearson value: 79.22986642351681 - type: euclidean_spearman value: 79.18395553127085 - type: manhattan_pearson value: 79.10921236746302 - type: manhattan_spearman value: 79.02283553930171 - task: type: Reranking dataset: type: mteb/scidocs-reranking name: MTEB SciDocsRR config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 76.9361627171417 - type: mrr value: 93.06577046773126 - task: type: Retrieval dataset: type: mteb/scifact name: MTEB SciFact config: default split: test revision: 0228b52cf27578f30900b9e5271d331663a030d7 metrics: - type: map_at_1 value: 50.693999999999996 - type: map_at_10 value: 59.784000000000006 - type: map_at_100 value: 60.443000000000005 - type: map_at_1000 value: 60.480000000000004 - type: map_at_3 value: 57.028 - type: map_at_5 value: 58.306999999999995 - type: mrr_at_1 value: 53.333 - type: mrr_at_10 value: 61.565000000000005 - type: mrr_at_100 value: 62.095 - type: mrr_at_1000 value: 62.131 - type: mrr_at_3 value: 59.721999999999994 - type: mrr_at_5 value: 60.589000000000006 - type: ndcg_at_1 value: 53.333 - type: ndcg_at_10 value: 64.512 - type: ndcg_at_100 value: 67.366 - type: ndcg_at_1000 value: 68.46799999999999 - type: ndcg_at_3 value: 59.748999999999995 - type: ndcg_at_5 value: 61.526 - type: precision_at_1 value: 53.333 - 
type: precision_at_10 value: 8.733 - type: precision_at_100 value: 1.027 - type: precision_at_1000 value: 0.11199999999999999 - type: precision_at_3 value: 23.222 - type: precision_at_5 value: 15.2 - type: recall_at_1 value: 50.693999999999996 - type: recall_at_10 value: 77.333 - type: recall_at_100 value: 90.10000000000001 - type: recall_at_1000 value: 99 - type: recall_at_3 value: 64.39399999999999 - type: recall_at_5 value: 68.7 - task: type: PairClassification dataset: type: mteb/sprintduplicatequestions-pairclassification name: MTEB SprintDuplicateQuestions config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.81386138613861 - type: cos_sim_ap value: 94.96375600031361 - type: cos_sim_f1 value: 90.36885245901641 - type: cos_sim_precision value: 92.64705882352942 - type: cos_sim_recall value: 88.2 - type: dot_accuracy value: 99.81386138613861 - type: dot_ap value: 94.96375600031361 - type: dot_f1 value: 90.36885245901641 - type: dot_precision value: 92.64705882352942 - type: dot_recall value: 88.2 - type: euclidean_accuracy value: 99.81386138613861 - type: euclidean_ap value: 94.96375600031361 - type: euclidean_f1 value: 90.36885245901641 - type: euclidean_precision value: 92.64705882352942 - type: euclidean_recall value: 88.2 - type: manhattan_accuracy value: 99.81287128712871 - type: manhattan_ap value: 94.92563500640084 - type: manhattan_f1 value: 90.27277406073082 - type: manhattan_precision value: 93.00106044538707 - type: manhattan_recall value: 87.7 - type: max_accuracy value: 99.81386138613861 - type: max_ap value: 94.96375600031361 - type: max_f1 value: 90.36885245901641 - task: type: Clustering dataset: type: mteb/stackexchange-clustering name: MTEB StackExchangeClustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 57.486984956276274 - task: type: Clustering dataset: type: mteb/stackexchange-clustering-p2p name: MTEB StackExchangeClusteringP2P config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 34.58453023612073 - task: type: Reranking dataset: type: mteb/stackoverflowdupquestions-reranking name: MTEB StackOverflowDupQuestions config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 50.16317315282306 - type: mrr value: 50.82617137764197 - task: type: Summarization dataset: type: mteb/summeval name: MTEB SummEval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 30.2927995133324 - type: cos_sim_spearman value: 30.09648622523191 - type: dot_pearson value: 30.29279853541771 - type: dot_spearman value: 30.09648622523191 - task: type: Retrieval dataset: type: mteb/trec-covid name: MTEB TRECCOVID config: default split: test revision: bb9466bac8153a0349341eb1b22e06409e78ef4e metrics: - type: map_at_1 value: 0.23500000000000001 - type: map_at_10 value: 2.01 - type: map_at_100 value: 12.064 - type: map_at_1000 value: 27.437 - type: map_at_3 value: 0.6649999999999999 - type: map_at_5 value: 1.0959999999999999 - type: mrr_at_1 value: 88 - type: mrr_at_10 value: 92.667 - type: mrr_at_100 value: 92.667 - type: mrr_at_1000 value: 92.667 - type: mrr_at_3 value: 91.667 - type: mrr_at_5 value: 92.667 - type: ndcg_at_1 value: 84 - type: ndcg_at_10 value: 79.431 - type: ndcg_at_100 value: 60.914 - type: ndcg_at_1000 value: 52.005 - type: ndcg_at_3 value: 82.285 - type: ndcg_at_5 
value: 81.565 - type: precision_at_1 value: 88 - type: precision_at_10 value: 84.8 - type: precision_at_100 value: 62.32 - type: precision_at_1000 value: 23.014000000000003 - type: precision_at_3 value: 86.667 - type: precision_at_5 value: 87.2 - type: recall_at_1 value: 0.23500000000000001 - type: recall_at_10 value: 2.19 - type: recall_at_100 value: 14.904 - type: recall_at_1000 value: 47.875 - type: recall_at_3 value: 0.695 - type: recall_at_5 value: 1.165 - task: type: Retrieval dataset: type: mteb/touche2020 name: MTEB Touche2020 config: default split: test revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f metrics: - type: map_at_1 value: 3.639 - type: map_at_10 value: 14.184 - type: map_at_100 value: 20.61 - type: map_at_1000 value: 22.377 - type: map_at_3 value: 9.163 - type: map_at_5 value: 10.773000000000001 - type: mrr_at_1 value: 46.939 - type: mrr_at_10 value: 59.345000000000006 - type: mrr_at_100 value: 60.07599999999999 - type: mrr_at_1000 value: 60.07599999999999 - type: mrr_at_3 value: 55.782 - type: mrr_at_5 value: 58.231 - type: ndcg_at_1 value: 41.837 - type: ndcg_at_10 value: 32.789 - type: ndcg_at_100 value: 42.232 - type: ndcg_at_1000 value: 53.900999999999996 - type: ndcg_at_3 value: 41.963 - type: ndcg_at_5 value: 35.983 - type: precision_at_1 value: 46.939 - type: precision_at_10 value: 28.163 - type: precision_at_100 value: 8.102 - type: precision_at_1000 value: 1.59 - type: precision_at_3 value: 44.897999999999996 - type: precision_at_5 value: 34.694 - type: recall_at_1 value: 3.639 - type: recall_at_10 value: 19.308 - type: recall_at_100 value: 48.992000000000004 - type: recall_at_1000 value: 84.59400000000001 - type: recall_at_3 value: 9.956 - type: recall_at_5 value: 12.33 - task: type: Classification dataset: type: mteb/toxic_conversations_50k name: MTEB ToxicConversationsClassification config: default split: test revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de metrics: - type: accuracy value: 64.305 - type: ap value: 11.330746746072599 - type: f1 value: 49.290704382387865 - task: type: Classification dataset: type: mteb/tweet_sentiment_extraction name: MTEB TweetSentimentExtractionClassification config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 56.1941143180532 - type: f1 value: 56.40189765095578 - task: type: Clustering dataset: type: mteb/twentynewsgroups-clustering name: MTEB TwentyNewsgroupsClustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 36.28189332526842 - task: type: PairClassification dataset: type: mteb/twittersemeval2015-pairclassification name: MTEB TwitterSemEval2015 config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 83.1912737676581 - type: cos_sim_ap value: 64.31536990146257 - type: cos_sim_f1 value: 61.095167030191696 - type: cos_sim_precision value: 54.074375127006704 - type: cos_sim_recall value: 70.21108179419525 - type: dot_accuracy value: 83.1912737676581 - type: dot_ap value: 64.31539216162541 - type: dot_f1 value: 61.095167030191696 - type: dot_precision value: 54.074375127006704 - type: dot_recall value: 70.21108179419525 - type: euclidean_accuracy value: 83.1912737676581 - type: euclidean_ap value: 64.31538391358727 - type: euclidean_f1 value: 61.095167030191696 - type: euclidean_precision value: 54.074375127006704 - type: euclidean_recall value: 70.21108179419525 - type: manhattan_accuracy value: 83.07206294331525 - type: 
manhattan_ap value: 64.14646315556838 - type: manhattan_f1 value: 61.194029850746254 - type: manhattan_precision value: 54.166666666666664 - type: manhattan_recall value: 70.31662269129288 - type: max_accuracy value: 83.1912737676581 - type: max_ap value: 64.31539216162541 - type: max_f1 value: 61.194029850746254 - task: type: PairClassification dataset: type: mteb/twitterurlcorpus-pairclassification name: MTEB TwitterURLCorpus config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 88.38242713548337 - type: cos_sim_ap value: 84.70041255196017 - type: cos_sim_f1 value: 77.13222561986515 - type: cos_sim_precision value: 73.95266690215472 - type: cos_sim_recall value: 80.59747459193102 - type: dot_accuracy value: 88.38242713548337 - type: dot_ap value: 84.7004118720222 - type: dot_f1 value: 77.13222561986515 - type: dot_precision value: 73.95266690215472 - type: dot_recall value: 80.59747459193102 - type: euclidean_accuracy value: 88.38242713548337 - type: euclidean_ap value: 84.70041593996575 - type: euclidean_f1 value: 77.13222561986515 - type: euclidean_precision value: 73.95266690215472 - type: euclidean_recall value: 80.59747459193102 - type: manhattan_accuracy value: 88.36108200411378 - type: manhattan_ap value: 84.66897701572054 - type: manhattan_f1 value: 77.00707640360645 - type: manhattan_precision value: 72.17695778062082 - type: manhattan_recall value: 82.53002771789343 - type: max_accuracy value: 88.38242713548337 - type: max_ap value: 84.70041593996575 - type: max_f1 value: 77.13222561986515 - task: type: Clustering dataset: type: jinaai/cities_wiki_clustering name: MTEB WikiCitiesClustering config: default split: test revision: ddc9ee9242fa65332597f70e967ecc38b9d734fa metrics: - type: v_measure value: 81.46426354153643
---

<h1 align="center">Snowflake's Arctic-embed-xs</h1>
<h4 align="center">
   <p>
       <a href=#news>News</a> |
       <a href=#models>Models</a> |
       <a href=#usage>Usage</a> |
       <a href="#evaluation">Evaluation</a> |
       <a href="#contact">Contact</a> |
       <a href="#faq">FAQ</a> |
       <a href="#license">License</a> |
       <a href="#acknowledgement">Acknowledgement</a>
   </p>
</h4>

## News

07/26/2024: Release preprint [[2407.18887] Embedding And Clustering Your Data Can Improve Contrastive Pretraining](https://arxiv.org/abs/2407.18887) on arXiv.

07/18/2024: Release of `snowflake-arctic-embed-m-v1.5`, capable of producing highly compressible embedding vectors that preserve quality even when squished as small as 128 bytes per vector. Details about the development of this model are available in the [launch post on the Snowflake engineering blog](https://www.snowflake.com/engineering-blog/arctic-embed-m-v1-5-enterprise-retrieval/).

05/10/2024: Release of the [technical report on Arctic Embed](https://arxiv.org/abs/2405.05374).

04/16/2024: Release of the **snowflake-arctic-embed** family of text embedding models. The releases are state-of-the-art for retrieval quality at each of their representative size profiles. [Technical Report]() is coming shortly. For more details, please refer to our GitHub: [Arctic-Text-Embed](https://github.com/Snowflake-Labs/arctic-embed).

## Models

snowflake-arctic-embed is a suite of text embedding models that focuses on creating high-quality retrieval models optimized for performance. The `snowflake-arctic-embedding` models achieve **state-of-the-art performance on the MTEB/BEIR leaderboard** for each of their size variants.
Evaluation is performed using these [scripts](https://github.com/Snowflake-Labs/snowflake-arctic-embed/tree/main/src). As shown below, each class of model size achieves SOTA retrieval accuracy compared to other top models.

The models are trained by leveraging existing open-source text representation models, such as bert-base-uncased, and are trained in a multi-stage pipeline to optimize their retrieval performance. First, the models are trained with large batches of query-document pairs where negatives are derived in-batch; this pretraining leverages about 400m samples of a mix of public datasets and proprietary web search data. Following pretraining, the models are further optimized with long training on a smaller dataset (about 1m samples) of triplets of query, positive document, and negative document derived from hard negative mining. Mining of the negatives and data curation are crucial to retrieval accuracy. A detailed technical report can be found [here](https://arxiv.org/abs/2405.05374), and a minimal sketch of the in-batch contrastive objective is included at the end of this card.

| Name | MTEB Retrieval Score (NDCG @ 10) | Parameters (Millions) | Embedding Dimension |
| ----------------------------------------------------------------------- | -------------------------------- | --------------------- | ------------------- |
| [snowflake-arctic-embed-xs](https://huggingface.co/Snowflake/snowflake-arctic-embed-xs/) | 50.15 | 22 | 384 |
| [snowflake-arctic-embed-s](https://huggingface.co/Snowflake/snowflake-arctic-embed-s/) | 51.98 | 33 | 384 |
| [snowflake-arctic-embed-m](https://huggingface.co/Snowflake/snowflake-arctic-embed-m/) | 54.90 | 110 | 768 |
| [snowflake-arctic-embed-m-long](https://huggingface.co/Snowflake/snowflake-arctic-embed-m-long/) | 54.83 | 137 | 768 |
| [snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l/) | 55.98 | 335 | 1024 |

Aside from being great open-source models, the largest model, [snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l/), can serve as a natural replacement for closed-source embedding APIs, as shown below.

| Model Name | MTEB Retrieval Score (NDCG @ 10) |
| ------------------------------------------------------------------ | -------------------------------- |
| [snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l/) | 55.98 |
| Google-gecko-text-embedding | 55.7 |
| text-embedding-3-large | 55.44 |
| Cohere-embed-english-v3.0 | 55.00 |
| bge-large-en-v1.5 | 54.29 |

### [snowflake-arctic-embed-xs](https://huggingface.co/Snowflake/snowflake-arctic-embed-xs)

This tiny model packs quite the punch. Based on the [all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) model with only 22m parameters and 384 dimensions, this model should meet even the strictest latency/TCO budgets. Despite its size, its retrieval accuracy approaches that of models with 100m parameters.

| Model Name | MTEB Retrieval Score (NDCG @ 10) |
| ------------------------------------------------------------------- | -------------------------------- |
| [snowflake-arctic-embed-xs](https://huggingface.co/Snowflake/snowflake-arctic-embed-xs/) | 50.15 |
| GIST-all-MiniLM-L6-v2 | 45.12 |
| gte-tiny | 44.92 |
| bge-micro-v2 | 42.56 |
| all-MiniLM-L6-v2 | 41.95 |

### [snowflake-arctic-embed-s](https://huggingface.co/Snowflake/snowflake-arctic-embed-s)

Based on the [intfloat/e5-small-unsupervised](https://huggingface.co/intfloat/e5-small-unsupervised) model, this small model does not trade off retrieval accuracy for its small size.
With only 33m parameters and 384 dimensions, this model should easily allow scaling to large datasets.

| Model Name | MTEB Retrieval Score (NDCG @ 10) |
| ------------------------------------------------------------------ | -------------------------------- |
| [snowflake-arctic-embed-s](https://huggingface.co/Snowflake/snowflake-arctic-embed-s/) | 51.98 |
| bge-small-en-v1.5 | 51.68 |
| Cohere-embed-english-light-v3.0 | 51.34 |
| text-embedding-3-small | 51.08 |
| e5-small-v2 | 49.04 |

### [snowflake-arctic-embed-m](https://huggingface.co/Snowflake/snowflake-arctic-embed-m/)

Based on the [intfloat/e5-base-unsupervised](https://huggingface.co/intfloat/e5-base-unsupervised) model, this medium model is the workhorse that provides the best retrieval performance without slowing down inference.

| Model Name | MTEB Retrieval Score (NDCG @ 10) |
| ------------------------------------------------------------------ | -------------------------------- |
| [snowflake-arctic-embed-m](https://huggingface.co/Snowflake/snowflake-arctic-embed-m/) | 54.90 |
| bge-base-en-v1.5 | 53.25 |
| nomic-embed-text-v1.5 | 53.25 |
| GIST-Embedding-v0 | 52.31 |
| gte-base | 52.31 |

### [snowflake-arctic-embed-m-long](https://huggingface.co/Snowflake/snowflake-arctic-embed-m-long/)

Based on the [nomic-embed-text-v1-unsupervised](https://huggingface.co/nomic-ai/nomic-embed-text-v1-unsupervised) model, this long-context variant of our medium-sized model is perfect for workloads that can be constrained by the regular 512-token context of our other models. Without the use of RPE, this model supports up to 2048 tokens. With RPE, it can scale to 8192!

| Model Name | MTEB Retrieval Score (NDCG @ 10) |
| ------------------------------------------------------------------ | -------------------------------- |
| [snowflake-arctic-embed-m-long](https://huggingface.co/Snowflake/snowflake-arctic-embed-m-long/) | 54.83 |
| nomic-embed-text-v1.5 | 53.01 |
| nomic-embed-text-v1 | 52.81 |

### [snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l/)

Based on the [intfloat/e5-large-unsupervised](https://huggingface.co/intfloat/e5-large-unsupervised) model, this large model is a direct drop-in for closed APIs and delivers the most accurate retrieval experience.

| Model Name | MTEB Retrieval Score (NDCG @ 10) |
| ------------------------------------------------------------------ | -------------------------------- |
| [snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l/) | 55.98 |
| UAE-Large-V1 | 54.66 |
| mxbai-embed-large-v1 | 54.39 |
| bge-large-en-v1.5 | 54.29 |
| e5-Large-v2 | 50.56 |

## Usage

### Using Sentence Transformers

You can use the sentence-transformers package to use a snowflake-arctic-embed model, as shown below.

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-xs")

queries = ['what is snowflake?', 'Where can I get the best tacos?']
documents = ['The Data Cloud!', 'Mexico City of Course!']

query_embeddings = model.encode(queries, prompt_name="query")
document_embeddings = model.encode(documents)

scores = query_embeddings @ document_embeddings.T
for query, query_scores in zip(queries, scores):
    doc_score_pairs = list(zip(documents, query_scores))
    doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)
    # Output passages & scores
    print("Query:", query)
    for document, score in doc_score_pairs:
        print(score, document)
```

```
Query: what is snowflake?
0.57515126 The Data Cloud!
0.45798576 Mexico City of Course!
Query: Where can I get the best tacos?
0.5636022 Mexico City of Course!
0.5044898 The Data Cloud!
```

### Using Hugging Face Transformers

You can use the transformers package for a snowflake-arctic-embed model, as shown below. For optimal retrieval quality, use the CLS token to embed each text portion and use the query prefix below (just on the query).

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('Snowflake/snowflake-arctic-embed-xs')
model = AutoModel.from_pretrained('Snowflake/snowflake-arctic-embed-xs', add_pooling_layer=False)
model.eval()

query_prefix = 'Represent this sentence for searching relevant passages: '
queries = ['what is snowflake?', 'Where can I get the best tacos?']
queries_with_prefix = ["{}{}".format(query_prefix, i) for i in queries]
query_tokens = tokenizer(queries_with_prefix, padding=True, truncation=True, return_tensors='pt', max_length=512)

documents = ['The Data Cloud!', 'Mexico City of Course!']
document_tokens = tokenizer(documents, padding=True, truncation=True, return_tensors='pt', max_length=512)

# Compute token embeddings
with torch.no_grad():
    query_embeddings = model(**query_tokens)[0][:, 0]
    document_embeddings = model(**document_tokens)[0][:, 0]

# Normalize embeddings
query_embeddings = torch.nn.functional.normalize(query_embeddings, p=2, dim=1)
document_embeddings = torch.nn.functional.normalize(document_embeddings, p=2, dim=1)

scores = torch.mm(query_embeddings, document_embeddings.transpose(0, 1))
for query, query_scores in zip(queries, scores):
    doc_score_pairs = list(zip(documents, query_scores))
    doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)
    # Output passages & scores
    print("Query:", query)
    for document, score in doc_score_pairs:
        print(score, document)
```

### Using Transformers.js

If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@xenova/transformers) by running:

```bash
npm i @xenova/transformers
```

You can then use the model to compute embeddings as follows:

```js
import { pipeline, dot } from '@xenova/transformers';

// Create feature extraction pipeline
const extractor = await pipeline('feature-extraction', 'Snowflake/snowflake-arctic-embed-xs', {
    quantized: false, // Comment out this line to use the quantized version
});

// Generate sentence embeddings
const sentences = [
    'Represent this sentence for searching relevant passages: Where can I get the best tacos?',
    'The Data Cloud!',
    'Mexico City of Course!',
]
const output = await extractor(sentences, { normalize: true, pooling: 'cls' });

// Compute similarity scores
const [source_embeddings, ...document_embeddings] = output.tolist();
const similarities = document_embeddings.map(x => dot(source_embeddings, x));
console.log(similarities); // [0.5044895661144148, 0.5636021124426508]
```

## FAQ

TBD

## Contact

Feel free to open an issue or pull request if you have any questions or suggestions about this project. You can also email Daniel Campos (daniel.campos@snowflake.com).

## License

Arctic is licensed under the [Apache-2](https://www.apache.org/licenses/LICENSE-2.0). The released models can be used for commercial purposes free of charge.

## Acknowledgement

We want to thank the open-source community, which has provided the great building blocks upon which we built our models.
We thank our modeling engineers, Danmei Xu, Luke Merrick, Gaurav Nuti, and Daniel Campos, for making these great models possible. We thank our leadership, Himabindu Pucha, Kelvin So, Vivek Raghunathan, and Sridhar Ramaswamy, for supporting this work. We also thank the open-source community for producing the great models we could build on top of and making these releases possible. Finally, we thank the researchers who created the BEIR and MTEB benchmarks. It is largely thanks to their tireless work defining what better looks like that we were able to improve model performance.
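
As referenced in the Models section, the snippet below is a minimal, self-contained sketch of the in-batch contrastive (InfoNCE-style) objective used in the first training stage. It is an illustration rather than the actual training code: the encoder outputs are simulated with random tensors, and the batch size and temperature are placeholder assumptions.

```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(query_emb, doc_emb, temperature=0.02):
    """InfoNCE-style loss: for query i, document i is the positive and
    every other document in the batch serves as a negative."""
    q = F.normalize(query_emb, p=2, dim=1)  # L2-normalize so dot products are cosine similarities
    d = F.normalize(doc_emb, p=2, dim=1)
    logits = q @ d.T / temperature          # (batch, batch) similarity matrix
    labels = torch.arange(q.size(0))        # positives sit on the diagonal
    return F.cross_entropy(logits, labels)

# Toy usage: random tensors stand in for encoder outputs.
query_embeddings = torch.randn(8, 384)
document_embeddings = torch.randn(8, 384)
print(in_batch_contrastive_loss(query_embeddings, document_embeddings).item())
```

Larger batches supply more in-batch negatives per query, which is one reason the pretraining stage described above uses large batches.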
WinKawaks/vit-tiny-patch16-224
WinKawaks
"2023-03-30T14:56:06Z"
57,051
14
transformers
[ "transformers", "pytorch", "safetensors", "vit", "image-classification", "vision", "dataset:imagenet", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2022-03-02T23:29:05Z"
---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
  example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
  example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
  example_title: Palace
---

Google didn't publish vit-tiny and vit-small model checkpoints on Hugging Face. I converted the weights from the [timm repository](https://github.com/rwightman/pytorch-image-models). This model is used in the same way as [ViT-base](https://huggingface.co/google/vit-base-patch16-224).

Note that the safetensors model requires a torch 2.0 environment.
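
Since the checkpoint follows the standard ViT interface, usage mirrors the ViT-base example linked above. The snippet below is a minimal classification sketch, not code from the original card; the sample image URL is borrowed from the widget examples in the metadata and is only a placeholder.

```python
import requests
from PIL import Image
from transformers import ViTForImageClassification, ViTImageProcessor

# Placeholder image taken from the widget examples above.
url = "https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = ViTImageProcessor.from_pretrained("WinKawaks/vit-tiny-patch16-224")
model = ViTForImageClassification.from_pretrained("WinKawaks/vit-tiny-patch16-224")

inputs = processor(images=image, return_tensors="pt")
logits = model(**inputs).logits

# The checkpoint is trained on ImageNet, so the prediction is one of its 1000 classes.
predicted_class = logits.argmax(-1).item()
print(model.config.id2label[predicted_class])
```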
naver/splade-v3
naver
"2024-03-12T08:15:13Z"
57,004
24
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "splade", "en", "arxiv:2403.06789", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2024-03-08T15:40:21Z"
---
license: cc-by-nc-sa-4.0
language:
- en
tags:
- splade
---

## SPLADE-v3

#### SPLADE-v3 is the latest series of SPLADE models.

This checkpoint corresponds to a model that starts from SPLADE++SelfDistil (`naver/splade-cocondenser-selfdistil`) and is trained with a mix of KL-Div and MarginMSE, with 8 negatives per query sampled from SPLADE++SelfDistil. We used the original MS MARCO collection **without the titles**.

For more details, see our arXiv companion book: https://arxiv.org/abs/2403.06789

To use SPLADE, please visit our GitHub repository: https://github.com/naver/splade

## Performance

| Model | MRR@10 (MS MARCO dev) | avg nDCG@10 (BEIR-13) |
| --- | --- | --- |
| `naver/splade-v3` | 40.2 | 51.7 |

## Citation

If you use our checkpoint, please cite our work:

```
@misc{lassance2024spladev3,
      title={SPLADE-v3: New baselines for SPLADE},
      author={Carlos Lassance and Hervé Déjean and Thibault Formal and Stéphane Clinchant},
      year={2024},
      eprint={2403.06789},
      archivePrefix={arXiv},
      primaryClass={cs.IR},
      copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International}
}
```
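
The GitHub repository above is the reference implementation. As a rough sketch, the checkpoint can also be queried directly with transformers, assuming the standard SPLADE pooling of a max over tokens of log(1 + ReLU(logits)); the example query is an arbitrary placeholder.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_id = "naver/splade-v3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)
model.eval()

tokens = tokenizer("what causes aurora borealis", return_tensors="pt")
with torch.no_grad():
    logits = model(**tokens).logits  # (1, seq_len, vocab_size)

# SPLADE pooling: max over the sequence of log(1 + ReLU(logit)), with padding masked out.
mask = tokens["attention_mask"].unsqueeze(-1)
weights = torch.max(torch.log1p(torch.relu(logits)) * mask, dim=1).values.squeeze(0)

# The non-zero entries form the sparse expansion of the input text.
nonzero = weights.nonzero().squeeze(-1).tolist()
expansion = {tokenizer.decode([i]): round(weights[i].item(), 2) for i in nonzero}
print(sorted(expansion.items(), key=lambda kv: -kv[1])[:10])
```

Ranking then reduces to a dot product between these sparse query and document vectors.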
Dremmar/nsfw-xl
Dremmar
"2024-01-07T11:19:41Z"
56,963
64
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "region:us", "not-for-all-audiences" ]
text-to-image
"2024-01-07T11:18:33Z"
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: "analog film photo woman, breasts, heatshot, facing viewer <lora:nsfw-xl-2.0:1> . faded film, desaturated, 35mm photo, grainy, vignette, vintage, Kodachrome, Lomography, stained, highly detailed, found footage"
  output:
    url: images/00097-3192725504.jpeg
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: null
---

# nsfw-xl

<Gallery />

## Model description

Just a copy of https://civitai.com/models/141300/nsfw-xl

## Download model

Weights for this model are available in Safetensors format.

[Download](/Dremmar/nsfw-xl/tree/main) them in the Files & versions tab.
aws-prototyping/MegaBeam-Mistral-7B-512k
aws-prototyping
"2024-08-16T06:44:42Z"
56,934
38
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-07-30T02:00:48Z"
---
license: apache-2.0
inference: false
---

# MegaBeam-Mistral-7B-512k Model

`MegaBeam-Mistral-7B-512k` is a Large-Context LLM that supports 524,288 tokens in its context. `MegaBeam-Mistral-7B-512k` was trained on [Mistral-7B Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2), and can be deployed using various serving frameworks like [vLLM](https://github.com/vllm-project/vllm) and Amazon SageMaker's [DJL](https://docs.aws.amazon.com/sagemaker/latest/dg/deploy-models-frameworks-djl-serving.html) endpoint.

## Evaluations

We evaluated `MegaBeam-Mistral-7B-512k` on three long-context benchmarks. For each benchmark, we deployed the `MegaBeam-Mistral-7B-512k` model with [vLLM (v0.5.1)](https://github.com/vllm-project/vllm/releases/tag/v0.5.1) on an EC2 instance and obtained LLM responses through the OpenAI API provided by vLLM.

**[1. Needle In A Haystack - Pressure Testing LLMs](https://github.com/Arize-ai/LLMTest_NeedleInAHaystack)**

The [Arize-ai NIAH](https://github.com/Arize-ai/LLMTest_NeedleInAHaystack) benchmark varies the target random number and introduces a random city for each question, requiring the LLM to extract the random number from various selected context locations. `MegaBeam-Mistral-7B-512k` scored `100%` on this NIAH benchmark as shown in this plot.

![NIAH](niah_megabeam-mistral-7b-512k.png)

**[2. RULER: What's the Real Context Size of Your Long-Context Language Models?](https://github.com/hsiehjackson/RULER)**

The [RULER](https://github.com/hsiehjackson/RULER) benchmark evaluates long-context language models across four task categories - Retrieval, Multi-hop Tracing, Aggregation, and Question Answering - with a total of 13 tasks. RULER goes beyond simple in-context recall by introducing more complex long-context scenarios. `MegaBeam-Mistral-7B-512k` scored an average of `88.70` across different context lengths as shown in this table (*adapted from the [RULER project](https://github.com/hsiehjackson/RULER)*).

| Models | 4K | 8K | 16K | 32K | 64K | 128K | Avg. |
|------------------------------|------|------|------|------|------|------|------|
| **MegaBeam-Mistral-7B-512k** | 93.3 | 91.8 | 91.5 | 88.9 | 83.7 | 82.8 | 88.7 |
| [Gemini-1.5-pro](https://ai.google.dev/gemini-api/docs/models/gemini#:~:text=Gemini-,Gemini%201.5%20Pro%20(Preview%20only),-Text%20and%20images) | 96.7 | 95.8 | 96 | 95.9 | 95.9 | 94.4 | 95.8 |
| [GPT-4-1106-preview](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4#:~:text=gpt%2D4%2D1106%2Dpreview,Up%20to%20Apr%202023) | 96.6 | 96.3 | 95.2 | 93.2 | 87 | 81.2 | 91.6 |
| [Llama3.1](https://huggingface.co/meta-llama/Meta-Llama-3.1-70B-Instruct) (70B) | 96.5 | 95.8 | 95.4 | 94.8 | 88.4 | 66.6 | 89.6 |
| [Qwen2](https://huggingface.co/Qwen/Qwen2-72B-Instruct) (72B) | 96.9 | 96.1 | 94.9 | 94.1 | 79.8 | 53.7 | 85.9 |
| [Command-R-plus](https://huggingface.co/CohereForAI/c4ai-command-r-plus) (104B) | 95.6 | 95.2 | 94.2 | 92 | 84.3 | 63.1 | 87.4 |
| [GLM4](https://huggingface.co/THUDM/glm-4-9b-chat-1m) (9B) | 94.7 | 92.8 | 92.1 | 89.9 | 86.7 | 83.1 | 89.9 |
| [Llama3.1](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) (8B) | 95.5 | 93.8 | 91.6 | 87.4 | 84.7 | 77.0 | 88.3 |
| [Command-R](https://huggingface.co/CohereForAI/c4ai-command-r-v01) (35B) | 93.8 | 93.3 | 92.4 | 89.5 | 84.9 | 76 | 88.3 |
| [GradientAI/Llama3](https://huggingface.co/gradientai/Llama-3-70B-Instruct-Gradient-1048k) (70B) | 95.1 | 94.4 | 90.8 | 85.4 | 82.9 | 72.1 | 86.5 |
| [Mixtral-8x22B](https://huggingface.co/mistralai/Mixtral-8x22B-instruct-v0.1) (39B/141B) | 95.6 | 94.9 | 93.4 | 90.9 | 84.7 | 31.7 | 81.9 |
| [Yi](https://huggingface.co/01-ai/Yi-34B-200K) (34B) | 93.3 | 92.2 | 91.3 | 87.5 | 83.2 | 77.3 | 87.5 |
| [Phi3-medium](https://huggingface.co/microsoft/Phi-3-medium-128K-instruct) (14B) | 93.3 | 93.2 | 91.1 | 86.8 | 78.6 | 46.1 | 81.5 |
| [Mixtral-8x7B](https://huggingface.co/mistralai/Mixtral-8x7B-instruct-v0.1) (12.9B/46.7B) | 94.9 | 92.1 | 92.5 | 85.9 | 72.4 | 44.5 | 80.4 |
| [GradientAI/Llama3](https://huggingface.co/gradientai/Llama-3-8B-Instruct-Gradient-1048k) (8B) | 92.8 | 90.3 | 85.7 | 79.9 | 76.3 | 69.5 | 82.4 |
| [FILM-7B](https://huggingface.co/In2Training/FILM-7B) (7B) | 92.8 | 88.2 | 88.1 | 86.9 | 70.1 | 27.1 | 75.5 |
| [Mistral-7B-instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-instruct-v0.2) (7B) | 93.6 | 91.2 | 87.2 | 75.4 | 49 | 13.8 | 68.4 |
| [Mistral-Nemo](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407) | 87.8 | 87.2 | 87.7 | 69.0 | 46.8 | 19.0 | 66.2 |
| [GLM3](https://huggingface.co/THUDM/chatglm3-6b-128K) (6B) | 87.8 | 83.4 | 78.6 | 69.9 | 56 | 42 | 69.6 |
| [LWM](https://huggingface.co/LargeWorldModel/LWM-Text-Chat-1M) (7B) | 82.3 | 78.4 | 73.7 | 69.1 | 68.1 | 65 | 72.8 |

<!-- | Phi3-mini (3.8B) | 86.7 | 78.1 | 75.6 | 70.3 | 58.9 | 43.3 | 68.8 |
| DBRX (36B/132B) | 95.1 | 93.8 | 83.6 | 63.1 | 2.4 | 0 | 56.3 |
| Qwen-1.5 (72B) | 94.9 | 93.8 | 78 | 67.8 | 0 | 0 | 55.7 |
| Together (7B) | 88.2 | 81.1 | 69.4 | 63 | 0 | 0 | 50.3 |
| LongChat (7B) | 84.7 | 79.9 | 70.8 | 59.3 | 0 | 0 | 49.1 |
| LongAlpaca (13B) | 60.6 | 57 | 56.6 | 43.6 | 0 | 0 | 36.3 | -->

This table shows how `MegaBeam-Mistral-7B-512k` performed on the 13 RULER tasks with increasing context lengths.
| Task | Category | 4096 | 8192 | 16384 | 32768 | 65536 | 131072 |
|------------------|--------------------|------|-------|-------|-------|-------|--------|
| niah_single_1 | Retrieval | 100 | 100 | 100 | 100 | 100 | 100 |
| niah_single_2 | Retrieval | 98.6 | 97.8 | 98.8 | 98.2 | 99.4 | 99.6 |
| niah_single_3 | Retrieval | 100 | 100 | 100 | 99.8 | 100 | 99.8 |
| niah_multikey_1 | Retrieval | 98.8 | 99.6 | 99.2 | 99 | 99.6 | 99.6 |
| niah_multikey_2 | Retrieval | 100 | 100 | 100 | 99.8 | 99.4 | 98.6 |
| niah_multikey_3 | Retrieval | 99.8 | 99.4 | 99.8 | 100 | 98.6 | 97.8 |
| niah_multivalue | Retrieval | 97.1 | 93.8 | 91.85 | 83.5 | 80.3 | 71.45 |
| niah_multiquery | Retrieval | 99.95 | 99.9 | 99.85 | 99.3 | 99.55 | 99.3 |
| vt | Multi-hop Tracing | 99.2 | 97.88 | 96.44 | 96.12 | 91.6 | 89.08 |
| cwe | Aggregation | 98.2 | 90.62 | 75.6 | 52.72 | 5.9 | 0.94 |
| fwe | Aggregation | 81.47 | 80.07 | 95.87 | 96.33 | 83.73 | 96.87 |
| qa_1 | Q & A | 85.6 | 82 | 80.6 | 83 | 80.6 | 77.4 |
| qa_2 | Q & A | 53.8 | 52 | 51.6 | 48.4 | 49.2 | 45.8 |
| average | ALL | 93.3 | 91.8 | 91.5 | 88.9 | 83.7 | 82.8 |

The total average across all context lengths is **88.7**.

**[3. InfiniteBench: Extending Long Context Evaluation Beyond 100K Tokens](https://github.com/OpenBMB/InfiniteBench)**

[InfiniteBench](https://github.com/OpenBMB/InfiniteBench) developed 12 tasks to evaluate an LLM's capability to process, comprehend, and reason with extended contexts, specifically those with over 100,000 tokens. We combine the InfiniteBench project's evaluation results for SOTA LLMs with `MegaBeam-Mistral-7B-512k`'s result in this table.

| Task Name | MegaBeam-Mistral<br>-7B-512k | GPT-4-1106<br>-preview | YaRN-Mistral<br>-7B | Kimi-Chat | Claude 2 | Yi-34B<br>-200K |
|----------------|--------------------------|--------------------|-----------------|-----------|-----------|-------------|
| PassKey | 100% | 100% | 92.71% | 98.14% | 97.80% | 100.00% |
| Retrv.Num | 99.49% | 100% | 56.61% | 95.42% | 98.14% | 100.00% |
| Retrv.KV | 24.20% | 89.00% | < 5% | 53.60% | 65.40% | < 5% |
| En.Sum | 34.66% | 14.73% | 9.09% | 17.93% | 14.45% | < 5% |
| En.QA | 20.32% | 22.22% | 9.55% | 16.52% | 11.97% | 12.17% |
| En.MC | 61.57% | 67.25% | 27.95% | 72.49% | 62.88% | 38.43% |
| En.Dia | 10.50% | 8.50% | 7.50% | 11.50% | 46.50% | < 5% |
| Zh.QA | 19.54% | 25.96% | 14.43% | 17.93% | 9.64% | 13.61% |
| Code.Debug | 26.14% | 39.59% | < 5% | 18.02% | < 5% | < 5% |
| Code.Run | 2% | 23.25% | < 5% | < 5% | < 5% | < 5% |
| Math.Calc | 0% | < 5% | < 5% | < 5% | < 5% | < 5% |
| Math.Find | 20% | 60.00% | 17.14% | 12.57% | 32.29% | 25.71% |
| Average | 34.87% | 46.08% | 20.41% | 34.93% | 37.21% | 25.41% |

## Example use case

This example demonstrates `MegaBeam-Mistral-7B-512k`'s long-context capability by processing a large file that includes hundreds of files from a single [Git repository](https://github.com/awslabs/amazon-accessible-rl-sdk). This can be useful for onboarding new developers.

![demo](megabeam_git_demo.gif)

## Serve MegaBeam-Mistral-7B-512k on EC2 instances

On an AWS `g5.48xlarge` instance, install vLLM as per the [vLLM docs](https://vllm.readthedocs.io/en/latest/).
```shell
pip install vllm==0.5.1
```

### Start the server

```shell
VLLM_ENGINE_ITERATION_TIMEOUT_S=3600 python3 -m vllm.entrypoints.openai.api_server \
    --model aws-prototyping/MegaBeam-Mistral-7B-512k \
    --tensor-parallel-size 8 \
    --revision g5-48x
```

**Important Note** - In the repo revision `g5-48x`, `config.json` has been updated to set `max_position_embeddings` to 288,800, fitting the model's KV cache on a single `g5.48xlarge` instance with 8 A10 GPUs (24 GB RAM per GPU).

On an instance with larger GPU RAM (e.g. `p4d.24xlarge`), simply remove the `revision` argument in order to support the full sequence length of 524,288 tokens:

```shell
VLLM_ENGINE_ITERATION_TIMEOUT_S=3600 python3 -m vllm.entrypoints.openai.api_server \
    --model aws-prototyping/MegaBeam-Mistral-7B-512k \
    --tensor-parallel-size 8
```

### Run the client

```python
from openai import OpenAI

# Modify OpenAI's API key and API base to use vLLM's API server.
openai_api_key = "EMPTY"
openai_api_base = "http://localhost:8000/v1"

client = OpenAI(
    # defaults to os.environ.get("OPENAI_API_KEY")
    api_key=openai_api_key,
    base_url=openai_api_base,
)

models = client.models.list()
model = models.data[0].id

chat_completion = client.chat.completions.create(
    messages=[
        {"role": "user", "content": "What is your favourite condiment?"},  # insert your long context here
        {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
        {"role": "user", "content": "Do you have mayonnaise recipes?"},  # insert your long context here
    ],
    model=model,
)

print("Chat completion results:")
print(chat_completion)
```

### Deploy the model on a SageMaker Endpoint

To deploy MegaBeam-Mistral-7B-512k on a SageMaker endpoint, please follow this [SageMaker DJL deployment guide](https://docs.djl.ai/docs/demos/aws/sagemaker/large-model-inference/sample-llm/vllm_deploy_mistral_7b.html).
Run the following Python code in a SageMaker notebook (with each block running in a separate cell):

```python
import sagemaker
from sagemaker import Model, image_uris, serializers, deserializers

sagemaker_session = sagemaker.Session()
region = sagemaker_session.boto_region_name
role = sagemaker.get_execution_role()

# --- run in a separate cell: the %%writefile magic must start the cell ---
%%writefile serving.properties
engine=Python
option.model_id=aws-prototyping/MegaBeam-Mistral-7B-512k
option.revision=g5-48x
option.dtype=bf16
option.task=text-generation
option.rolling_batch=vllm
option.tensor_parallel_degree=8
option.device_map=auto

# --- run in a separate cell: the %%sh magic must start the cell ---
%%sh
mkdir mymodel
mv serving.properties mymodel/
tar czvf mymodel.tar.gz mymodel/
rm -rf mymodel

# --- back in a Python cell ---
image_uri = image_uris.retrieve(
    framework="djl-deepspeed",
    region=region,
    version="0.27.0"
)

s3_code_prefix = "megaBeam-mistral-7b-512k/code"
bucket = sagemaker_session.default_bucket()  # bucket to house artifacts
code_artifact = sagemaker_session.upload_data("mymodel.tar.gz", bucket, s3_code_prefix)
print(f"S3 Code or Model tar ball uploaded to ---> {code_artifact}")

model = Model(image_uri=image_uri, model_data=code_artifact, role=role)

instance_type = "ml.g5.48xlarge"
endpoint_name = sagemaker.utils.name_from_base("megaBeam-mistral-7b-512k")

model.deploy(initial_instance_count=1,
             instance_type=instance_type,
             endpoint_name=endpoint_name
             )

# our requests and responses will be in json format so we specify the serializer and the deserializer
predictor = sagemaker.Predictor(
    endpoint_name=endpoint_name,
    sagemaker_session=sagemaker_session,
    serializer=serializers.JSONSerializer(),
)

# test the endpoint
input_str = """<s>[INST] What is your favourite condiment? [/INST] Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> [INST] Do you have mayonnaise recipes? [/INST]"""
predictor.predict(
    {"inputs": input_str, "parameters": {"max_new_tokens": 75}}
)
```

### Invoke the model on a SageMaker Endpoint

To use MegaBeam-Mistral-7B-512k on a SageMaker endpoint, please try the following example:

```python
import boto3
import json

def call_endpoint(text: str, endpoint_name: str):
    client = boto3.client("sagemaker-runtime")

    parameters = {
        "max_new_tokens": 450,
        "do_sample": True,
        "temperature": 0.7,
    }

    payload = {"inputs": text, "parameters": parameters}

    response = client.invoke_endpoint(
        EndpointName=endpoint_name, Body=json.dumps(payload), ContentType="application/json"
    )

    output = json.loads(response["Body"].read().decode())

    result = output["generated_text"]
    return result

# please insert your long prompt/document content here
prompt = """<s>[INST] What are the main challenges to support long contexts for a Large Language Model? [/INST]"""
# print(prompt)

endpoint_name = "megaBeam-mistral-7b-512k-2024-05-13-14-23-41-219"  # please use a valid endpoint name
result = call_endpoint(prompt, endpoint_name)
print(result)
```

## Limitations

Before using the MegaBeam-Mistral-7B-512k model, it is important to perform your own independent assessment, and to take measures to ensure that your use complies with your own specific quality control practices and standards, and with the local rules, laws, regulations, licenses and terms that apply to you and your content.

## The AWS Contributors

Chen Wu, Yin Song, Eden Duthie
daspartho/prompt-extend
daspartho
"2022-12-20T19:40:38Z"
56,793
34
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2022-11-01T06:35:53Z"
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: prompt-extend
  results: []
---

[![Generic badge](https://img.shields.io/badge/🤗-Open%20in%20Spaces-blue.svg)](https://huggingface.co/spaces/daspartho/prompt-extend)

# Prompt Extend

Text generation model for generating suitable style cues given the main idea for a prompt. It is a GPT-2 model trained on a [dataset](https://huggingface.co/datasets/daspartho/stable-diffusion-prompts) of Stable Diffusion prompts.

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.7436        | 1.0   | 12796 | 2.5429          |
| 2.3292        | 2.0   | 25592 | 2.0711          |
| 1.9439        | 3.0   | 38388 | 1.8447          |
| 1.7059        | 4.0   | 51184 | 1.7325          |
| 1.5775        | 5.0   | 63980 | 1.7110          |

### Framework versions

- Transformers 4.24.0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1
- Tokenizers 0.13.2
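Since this is a standard GPT-2 checkpoint, it can be queried with the `text-generation` pipeline; a minimal sketch (the prompt is illustrative):

```python
from transformers import pipeline

extender = pipeline("text-generation", model="daspartho/prompt-extend")
# Feed the main idea; the model appends style cues
print(extender("portrait of a cyberpunk samurai", max_new_tokens=40)[0]["generated_text"])
```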
UnfilteredAI/NSFW-gen-v2
UnfilteredAI
"2024-08-05T08:41:20Z"
56,773
190
diffusers
[ "diffusers", "safetensors", "UnfilteredAI", "3d", "text-to-image", "not-for-all-audiences", "en", "pt", "th", "base_model:OEvortex/PixelGen", "base_model:finetune:OEvortex/PixelGen", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
"2024-04-15T08:16:46Z"
---
base_model: OEvortex/PixelGen
license: other
language:
- en
- pt
- th
library_name: diffusers
pipeline_tag: text-to-image
tags:
- UnfilteredAI
- 3d
- text-to-image
- not-for-all-audiences
---

**Model Name:** NSFW-gen-v2

**ANIME version** [Here](https://huggingface.co/UnfilteredAI/NSFW-GEN-ANIME)

**Type:** Text-to-Image Generator

<a href="https://www.buymeacoffee.com/oevortex" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;"></a>

**Description:** NSFW-gen is a text-to-image generator developed by UnfilteredAI. This model is designed to generate all kinds of images, including explicit and NSFW (Not Safe For Work) images, from textual inputs.

**Features:**

- **Uncensored Output:** The model produces uncensored and potentially explicit images based on textual inputs.
- **Tensor Type:** Operates with the FP16 tensor type for optimized performance and efficiency.
- **Model Size:** With 3.47 billion parameters, the model offers a vast capacity for learning and generating diverse imagery.
- **3D Style Rendering:** The model now includes a 3D style/image rendering capability to generate more realistic images. (Use "3d, 3d style" in your prompt.)

**Usage Guidelines:**

- **Responsible Use:** Exercise discretion and responsibility when generating content with this model.
- **Age Restriction:** Due to the explicit nature of the generated content, usage is restricted to individuals over the legal age in their jurisdiction.
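The card ships no usage code. Since the repo is tagged as a `StableDiffusionXLPipeline`, loading presumably follows the standard `diffusers` pattern; this is a sketch under that assumption, not an official recipe:

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "UnfilteredAI/NSFW-gen-v2", torch_dtype=torch.float16  # FP16, matching the card's tensor type
).to("cuda")

# "3d, 3d style" in the prompt triggers the 3D rendering capability described above
image = pipe("3d style portrait, studio lighting", num_inference_steps=30).images[0]
image.save("out.png")
```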
VoVanPhuc/sup-SimCSE-VietNamese-phobert-base
VoVanPhuc
"2024-04-10T09:01:08Z"
56,729
19
transformers
[ "transformers", "pytorch", "safetensors", "roberta", "sentence-similarity", "vi", "arxiv:2104.08821", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
"2022-03-02T23:29:05Z"
---
language:
- vi
pipeline_tag: sentence-similarity
---

#### Table of contents

1. [Introduction](#introduction)
2. [Pretrained models](#models)
3. [Using SimeCSE_Vietnamese with `sentences-transformers`](#sentences-transformers)
	- [Installation](#install1)
	- [Example usage](#usage1)
4. [Using SimeCSE_Vietnamese with `transformers`](#transformers)
	- [Installation](#install2)
	- [Example usage](#usage2)

# <a name="introduction"></a> SimeCSE_Vietnamese: Simple Contrastive Learning of Sentence Embeddings with Vietnamese

Pre-trained SimeCSE_Vietnamese models are the state of the art for Vietnamese sentence embeddings:

- The SimeCSE_Vietnamese pre-training approach is based on [SimCSE](https://arxiv.org/abs/2104.08821), which optimizes the pre-training procedure for more robust performance.
- SimeCSE_Vietnamese encodes input sentences using a pre-trained language model such as [PhoBert](https://www.aclweb.org/anthology/2020.findings-emnlp.92/).
- SimeCSE_Vietnamese works with both unlabeled and labeled data.

## Pre-trained models <a name="models"></a>

Model | #params | Arch.
---|---|---
[`VoVanPhuc/sup-SimCSE-VietNamese-phobert-base`](https://huggingface.co/VoVanPhuc/sup-SimCSE-VietNamese-phobert-base) | 135M | base
[`VoVanPhuc/unsup-SimCSE-VietNamese-phobert-base`](https://huggingface.co/VoVanPhuc/unsup-SimCSE-VietNamese-phobert-base) | 135M | base

## <a name="sentences-transformers"></a> Using SimeCSE_Vietnamese with `sentences-transformers`

### Installation <a name="install1"></a>

- Install `sentence-transformers`:
	- `pip install -U sentence-transformers`
- Install `pyvi` to word segment:
	- `pip install pyvi`

### Example usage <a name="usage1"></a>

```python
from sentence_transformers import SentenceTransformer
from pyvi.ViTokenizer import tokenize

model = SentenceTransformer('VoVanPhuc/sup-SimCSE-VietNamese-phobert-base')

sentences = ['Kẻ đánh bom đinh tồi tệ nhất nước Anh.',
             'Nghệ sĩ làm thiện nguyện - minh bạch là việc cấp thiết.',
             'Bắc Giang tăng khả năng điều trị và xét nghiệm.',
             'HLV futsal Việt Nam tiết lộ lý do hạ Lebanon.',
             'việc quan trọng khi kêu gọi quyên góp từ thiện là phải minh bạch, giải ngân kịp thời.',
             '20% bệnh nhân Covid-19 có thể nhanh chóng trở nặng.',
             'Thái Lan thua giao hữu trước vòng loại World Cup.',
             'Cựu tuyển thủ Nguyễn Bảo Quân: May mắn ủng hộ futsal Việt Nam',
             'Chủ ki-ốt bị đâm chết trong chợ đầu mối lớn nhất Thanh Hoá.',
             'Bắn chết người trong cuộc rượt đuổi trên sông.'
             ]

sentences = [tokenize(sentence) for sentence in sentences]
embeddings = model.encode(sentences)
```

## <a name="transformers"></a> Using SimeCSE_Vietnamese with `transformers`

### Installation <a name="install2"></a>

- Install `transformers`:
	- `pip install -U transformers`
- Install `pyvi` to word segment:
	- `pip install pyvi`

### Example usage <a name="usage2"></a>

```python
import torch
from transformers import AutoModel, AutoTokenizer
from pyvi.ViTokenizer import tokenize

PhobertTokenizer = AutoTokenizer.from_pretrained("VoVanPhuc/sup-SimCSE-VietNamese-phobert-base")
model = AutoModel.from_pretrained("VoVanPhuc/sup-SimCSE-VietNamese-phobert-base")

sentences = ['Kẻ đánh bom đinh tồi tệ nhất nước Anh.',
             'Nghệ sĩ làm thiện nguyện - minh bạch là việc cấp thiết.',
             'Bắc Giang tăng khả năng điều trị và xét nghiệm.',
             'HLV futsal Việt Nam tiết lộ lý do hạ Lebanon.',
             'việc quan trọng khi kêu gọi quyên góp từ thiện là phải minh bạch, giải ngân kịp thời.',
             '20% bệnh nhân Covid-19 có thể nhanh chóng trở nặng.',
             'Thái Lan thua giao hữu trước vòng loại World Cup.',
             'Cựu tuyển thủ Nguyễn Bảo Quân: May mắn ủng hộ futsal Việt Nam',
             'Chủ ki-ốt bị đâm chết trong chợ đầu mối lớn nhất Thanh Hoá.',
             'Bắn chết người trong cuộc rượt đuổi trên sông.'
             ]

sentences = [tokenize(sentence) for sentence in sentences]

inputs = PhobertTokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    embeddings = model(**inputs, output_hidden_states=True, return_dict=True).pooler_output
```

## Quick Start

[Open In Colab](https://colab.research.google.com/drive/12__EXJoQYHe9nhi4aXLTf9idtXT8yr7H?usp=sharing)

## Citation

	@article{gao2021simcse,
	   title={{SimCSE}: Simple Contrastive Learning of Sentence Embeddings},
	   author={Gao, Tianyu and Yao, Xingcheng and Chen, Danqi},
	   journal={arXiv preprint arXiv:2104.08821},
	   year={2021}
	}

	@inproceedings{phobert,
	   title = {{PhoBERT: Pre-trained language models for Vietnamese}},
	   author = {Dat Quoc Nguyen and Anh Tuan Nguyen},
	   booktitle = {Findings of the Association for Computational Linguistics: EMNLP 2020},
	   year = {2020},
	   pages = {1037--1042}
	}
stabilityai/stable-diffusion-2-base
stabilityai
"2023-07-05T16:19:03Z"
56,701
339
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "text-to-image", "arxiv:2112.10752", "arxiv:2202.00512", "arxiv:1910.09700", "license:openrail++", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2022-11-23T17:41:31Z"
---
license: openrail++
tags:
- stable-diffusion
- text-to-image
---

# Stable Diffusion v2-base Model Card

This model card focuses on the model associated with the Stable Diffusion v2-base model, available [here](https://github.com/Stability-AI/stablediffusion).

The model is trained from scratch for 550k steps at resolution `256x256` on a subset of [LAION-5B](https://laion.ai/blog/laion-5b/) filtered for explicit pornographic material, using the [LAION-NSFW classifier](https://github.com/LAION-AI/CLIP-based-NSFW-Detector) with `punsafe=0.1` and an [aesthetic score](https://github.com/christophschuhmann/improved-aesthetic-predictor) >= `4.5`. It is then further trained for 850k steps at resolution `512x512` on the same dataset on images with resolution `>= 512x512`.

![image](https://github.com/Stability-AI/stablediffusion/blob/main/assets/stable-samples/txt2img/merged-0003.png?raw=true)

- Use it with the [`stablediffusion`](https://github.com/Stability-AI/stablediffusion) repository: download the `512-base-ema.ckpt` [here](https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/main/512-base-ema.ckpt).
- Use it with 🧨 [`diffusers`](https://huggingface.co/stabilityai/stable-diffusion-2-base#examples)

## Model Details

- **Developed by:** Robin Rombach, Patrick Esser
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** [CreativeML Open RAIL++-M License](https://huggingface.co/stabilityai/stable-diffusion-2/blob/main/LICENSE-MODEL)
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([OpenCLIP-ViT/H](https://github.com/mlfoundations/open_clip)).
- **Resources for more information:** [GitHub Repository](https://github.com/Stability-AI/).
- **Cite as:**

      @InProceedings{Rombach_2022_CVPR,
          author    = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
          title     = {High-Resolution Image Synthesis With Latent Diffusion Models},
          booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
          month     = {June},
          year      = {2022},
          pages     = {10684-10695}
      }

## Examples

Using the [🤗 Diffusers library](https://github.com/huggingface/diffusers) to run Stable Diffusion 2 in a simple and efficient manner.
```bash
pip install diffusers transformers accelerate scipy safetensors
```

Running the pipeline (if you don't swap the scheduler it will run with the default PNDM/PLMS scheduler; in this example we swap it to EulerDiscreteScheduler):

```python
from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler
import torch

model_id = "stabilityai/stable-diffusion-2-base"

# Use the Euler scheduler here instead
scheduler = EulerDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler")
pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler, torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]

image.save("astronaut_rides_horse.png")
```

**Notes**:
- Despite not being a dependency, we highly recommend you install [xformers](https://github.com/facebookresearch/xformers) for memory-efficient attention (better performance)
- If you have low GPU RAM available, make sure to add `pipe.enable_attention_slicing()` after sending the pipeline to `cuda` for less VRAM usage (at the cost of speed)

# Uses

## Direct Use

The model is intended for research purposes only. Possible research areas and tasks include

- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.

Excluded uses are described below.

### Misuse, Malicious Use, and Out-of-Scope Use

_Note: This section is originally taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), was used for Stable Diffusion v1, but applies in the same way to Stable Diffusion v2_.

The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.

#### Out-of-Scope Use

The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.

#### Misuse and Malicious Use

Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:

- Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Impersonating individuals without their consent.
- Sexual content without consent of the people who might see it.
- Mis- and disinformation
- Representations of egregious violence and gore
- Sharing of copyrighted or licensed material in violation of its terms of use.
- Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.

## Limitations and Bias

### Limitations

- The model does not achieve perfect photorealism
- The model cannot render legible text
- The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to "A red cube on top of a blue sphere"
- Faces and people in general may not be generated properly.
- The model was trained mainly with English captions and will not work as well in other languages.
- The autoencoding part of the model is lossy
- The model was trained on a subset of the large-scale dataset [LAION-5B](https://laion.ai/blog/laion-5b/), which contains adult, violent and sexual content. To partially mitigate this, we have filtered the dataset using LAION's NSFW detector (see Training section).

### Bias

While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases. Stable Diffusion v2 was primarily trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/), which consists of images that are limited to English descriptions. Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for. This affects the overall output of the model, as white and western cultures are often set as the default. Further, the ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts. Stable Diffusion v2 mirrors and exacerbates biases to such a degree that viewer discretion must be advised irrespective of the input or its intent.

## Training

**Training Data**

The model developers used the following dataset for training the model:

- LAION-5B and subsets (details below). The training data is further filtered using LAION's NSFW detector, with a "p_unsafe" score of 0.1 (conservative). For more details, please refer to LAION-5B's [NeurIPS 2022](https://openreview.net/forum?id=M3Y74vmsMcY) paper and reviewer discussions on the topic.

**Training Procedure**

Stable Diffusion v2 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training,

- Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4
- Text prompts are encoded through the OpenCLIP-ViT/H text-encoder.
- The output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention.
- The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet. We also use the so-called _v-objective_, see https://arxiv.org/abs/2202.00512.

We currently provide the following checkpoints:

- `512-base-ema.ckpt`: 550k steps at resolution `256x256` on a subset of [LAION-5B](https://laion.ai/blog/laion-5b/) filtered for explicit pornographic material, using the [LAION-NSFW classifier](https://github.com/LAION-AI/CLIP-based-NSFW-Detector) with `punsafe=0.1` and an [aesthetic score](https://github.com/christophschuhmann/improved-aesthetic-predictor) >= `4.5`. 850k steps at resolution `512x512` on the same dataset with resolution `>= 512x512`.
- `768-v-ema.ckpt`: Resumed from `512-base-ema.ckpt` and trained for 150k steps using a [v-objective](https://arxiv.org/abs/2202.00512) on the same dataset. Resumed for another 140k steps on a `768x768` subset of our dataset.
- `512-depth-ema.ckpt`: Resumed from `512-base-ema.ckpt` and finetuned for 200k steps. Added an extra input channel to process the (relative) depth prediction produced by [MiDaS](https://github.com/isl-org/MiDaS) (`dpt_hybrid`) which is used as an additional conditioning. The additional input channels of the U-Net which process this extra information were zero-initialized.
- `512-inpainting-ema.ckpt`: Resumed from `512-base-ema.ckpt` and trained for another 200k steps. Follows the mask-generation strategy presented in [LAMA](https://github.com/saic-mdal/lama) which, in combination with the latent VAE representations of the masked image, are used as an additional conditioning. The additional input channels of the U-Net which process this extra information were zero-initialized. The same strategy was used to train the [1.5-inpainting checkpoint](https://github.com/saic-mdal/lama).
- `x4-upscaling-ema.ckpt`: Trained for 1.25M steps on a 10M subset of LAION containing images `>2048x2048`. The model was trained on crops of size `512x512` and is a text-guided [latent upscaling diffusion model](https://arxiv.org/abs/2112.10752). In addition to the textual input, it receives a `noise_level` as an input parameter, which can be used to add noise to the low-resolution input according to a [predefined diffusion schedule](configs/stable-diffusion/x4-upscaling.yaml).

- **Hardware:** 32 x 8 x A100 GPUs
- **Optimizer:** AdamW
- **Gradient Accumulations**: 1
- **Batch:** 32 x 8 x 2 x 4 = 2048
- **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant

## Evaluation Results

Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0) and 50 DDIM sampling steps show the relative improvements of the checkpoints:

![pareto](model-variants.jpg)

Evaluated using 50 DDIM steps and 10000 random prompts from the COCO2017 validation set, evaluated at 512x512 resolution. Not optimized for FID scores.

## Environmental Impact

**Stable Diffusion v1** **Estimated Emissions**

Based on that information, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.

- **Hardware Type:** A100 PCIe 40GB
- **Hours used:** 200000
- **Cloud Provider:** AWS
- **Compute Region:** US-east
- **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 15000 kg CO2 eq.

## Citation

    @InProceedings{Rombach_2022_CVPR,
        author    = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
        title     = {High-Resolution Image Synthesis With Latent Diffusion Models},
        booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
        month     = {June},
        year      = {2022},
        pages     = {10684-10695}
    }

*This model card was written by: Robin Rombach, Patrick Esser and David Ha and is based on the [Stable Diffusion v1](https://github.com/CompVis/stable-diffusion/blob/main/Stable_Diffusion_v1_Model_Card.md) and [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*
microsoft/Multilingual-MiniLM-L12-H384
microsoft
"2022-08-10T07:27:42Z"
56,677
69
transformers
[ "transformers", "pytorch", "tf", "jax", "bert", "text-classification", "multilingual", "en", "ar", "bg", "de", "el", "es", "fr", "hi", "ru", "sw", "th", "tr", "ur", "vi", "zh", "arxiv:2002.10957", "arxiv:1809.05053", "arxiv:1911.02116", "arxiv:1910.07475", "license:mit", "endpoints_compatible", "region:us" ]
text-classification
"2022-03-02T23:29:05Z"
---
language:
- multilingual
- en
- ar
- bg
- de
- el
- es
- fr
- hi
- ru
- sw
- th
- tr
- ur
- vi
- zh
thumbnail: https://huggingface.co/front/thumbnails/microsoft.png
tags:
- text-classification
license: mit
---

## MiniLM: Small and Fast Pre-trained Models for Language Understanding and Generation

MiniLM is a distilled model from the paper "[MiniLM: Deep Self-Attention Distillation for Task-Agnostic Compression of Pre-Trained Transformers](https://arxiv.org/abs/2002.10957)".

Please find the information about preprocessing, training and full details of MiniLM in the [original MiniLM repository](https://github.com/microsoft/unilm/blob/master/minilm/).

Please note: This checkpoint uses `BertModel` with `XLMRobertaTokenizer`, so `AutoTokenizer` won't work with this checkpoint! (See the loading sketch at the end of this card.)

### Multilingual Pretrained Model

- Multilingual-MiniLMv1-L12-H384: 12-layer, 384-hidden, 12-heads, 21M Transformer parameters, 96M embedding parameters

Multilingual MiniLM uses the same tokenizer as XLM-R, but the Transformer architecture of our model is the same as BERT. We provide the fine-tuning code on XNLI based on [huggingface/transformers](https://github.com/huggingface/transformers). Please replace `run_xnli.py` in transformers with [ours](https://github.com/microsoft/unilm/blob/master/minilm/examples/run_xnli.py) to fine-tune multilingual MiniLM.

We evaluate the multilingual MiniLM on the cross-lingual natural language inference benchmark (XNLI) and the cross-lingual question answering benchmark (MLQA).

#### Cross-Lingual Natural Language Inference - [XNLI](https://arxiv.org/abs/1809.05053)

We evaluate our model on cross-lingual transfer from English to other languages. Following [Conneau et al. (2019)](https://arxiv.org/abs/1911.02116), we select the best single model on the joint dev set of all the languages.

| Model | #Layers | #Hidden | #Transformer Parameters | Average | en | fr | es | de | el | bg | ru | tr | ar | vi | th | zh | hi | sw | ur |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| [mBERT](https://github.com/google-research/bert) | 12 | 768 | 85M | 66.3 | 82.1 | 73.8 | 74.3 | 71.1 | 66.4 | 68.9 | 69.0 | 61.6 | 64.9 | 69.5 | 55.8 | 69.3 | 60.0 | 50.4 | 58.0 |
| [XLM-100](https://github.com/facebookresearch/XLM#pretrained-cross-lingual-language-models) | 16 | 1280 | 315M | 70.7 | 83.2 | 76.7 | 77.7 | 74.0 | 72.7 | 74.1 | 72.7 | 68.7 | 68.6 | 72.9 | 68.9 | 72.5 | 65.6 | 58.2 | 62.4 |
| [XLM-R Base](https://arxiv.org/abs/1911.02116) | 12 | 768 | 85M | 74.5 | 84.6 | 78.4 | 78.9 | 76.8 | 75.9 | 77.3 | 75.4 | 73.2 | 71.5 | 75.4 | 72.5 | 74.9 | 71.1 | 65.2 | 66.5 |
| **mMiniLM-L12xH384** | 12 | 384 | 21M | 71.1 | 81.5 | 74.8 | 75.7 | 72.9 | 73.0 | 74.5 | 71.3 | 69.7 | 68.8 | 72.1 | 67.8 | 70.0 | 66.2 | 63.3 | 64.2 |

This example code fine-tunes the **12**-layer multilingual MiniLM on XNLI.
```bash
# run fine-tuning on XNLI
DATA_DIR=/{path_of_data}/
OUTPUT_DIR=/{path_of_fine-tuned_model}/
MODEL_PATH=/{path_of_pre-trained_model}/

python ./examples/run_xnli.py --model_type minilm \
 --output_dir ${OUTPUT_DIR} --data_dir ${DATA_DIR} \
 --model_name_or_path microsoft/Multilingual-MiniLM-L12-H384 \
 --tokenizer_name xlm-roberta-base \
 --config_name ${MODEL_PATH}/multilingual-minilm-l12-h384-config.json \
 --do_train \
 --do_eval \
 --max_seq_length 128 \
 --per_gpu_train_batch_size 128 \
 --learning_rate 5e-5 \
 --num_train_epochs 5 \
 --per_gpu_eval_batch_size 32 \
 --weight_decay 0.001 \
 --warmup_steps 500 \
 --save_steps 1500 \
 --logging_steps 1500 \
 --eval_all_checkpoints \
 --language en \
 --fp16 \
 --fp16_opt_level O2
```

#### Cross-Lingual Question Answering - [MLQA](https://arxiv.org/abs/1910.07475)

Following [Lewis et al. (2019b)](https://arxiv.org/abs/1910.07475), we adopt SQuAD 1.1 as training data and use MLQA English development data for early stopping.

| Model F1 Score | #Layers | #Hidden | #Transformer Parameters | Average | en | es | de | ar | hi | vi | zh |
|---|---|---|---|---|---|---|---|---|---|---|---|
| [mBERT](https://github.com/google-research/bert) | 12 | 768 | 85M | 57.7 | 77.7 | 64.3 | 57.9 | 45.7 | 43.8 | 57.1 | 57.5 |
| [XLM-15](https://github.com/facebookresearch/XLM#pretrained-cross-lingual-language-models) | 12 | 1024 | 151M | 61.6 | 74.9 | 68.0 | 62.2 | 54.8 | 48.8 | 61.4 | 61.1 |
| [XLM-R Base](https://arxiv.org/abs/1911.02116) (Reported) | 12 | 768 | 85M | 62.9 | 77.8 | 67.2 | 60.8 | 53.0 | 57.9 | 63.1 | 60.2 |
| [XLM-R Base](https://arxiv.org/abs/1911.02116) (Our fine-tuned) | 12 | 768 | 85M | 64.9 | 80.3 | 67.0 | 62.7 | 55.0 | 60.4 | 66.5 | 62.3 |
| **mMiniLM-L12xH384** | 12 | 384 | 21M | 63.2 | 79.4 | 66.1 | 61.2 | 54.9 | 58.5 | 63.1 | 59.0 |

### Citation

If you find MiniLM useful in your research, please cite the following paper:

```latex
@misc{wang2020minilm,
    title={MiniLM: Deep Self-Attention Distillation for Task-Agnostic Compression of Pre-Trained Transformers},
    author={Wenhui Wang and Furu Wei and Li Dong and Hangbo Bao and Nan Yang and Ming Zhou},
    year={2020},
    eprint={2002.10957},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```
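As noted above, `AutoTokenizer` does not resolve this checkpoint; a minimal loading sketch that pairs the classes explicitly, using the `xlm-roberta-base` tokenizer files as in the fine-tuning command above:

```python
import torch
from transformers import BertModel, XLMRobertaTokenizer

# BERT weights + XLM-R sentencepiece vocabulary, per the card's note
tokenizer = XLMRobertaTokenizer.from_pretrained("xlm-roberta-base")
model = BertModel.from_pretrained("microsoft/Multilingual-MiniLM-L12-H384")

inputs = tokenizer("MiniLM is a distilled multilingual encoder.", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state
print(hidden.shape)  # (1, seq_len, 384)
```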
Rakib/roberta-base-on-cuad
Rakib
"2023-01-18T12:18:53Z"
56,529
6
transformers
[ "transformers", "pytorch", "roberta", "question-answering", "legal-contract-review", "cuad", "en", "dataset:cuad", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
"2022-03-02T23:29:04Z"
---
language:
- en
license: mit
datasets:
- cuad
pipeline_tag: question-answering
tags:
- legal-contract-review
- roberta
- cuad
library_name: transformers
---

# Model Card for roberta-base-on-cuad

# Model Details

## Model Description

- **Developed by:** Mohammed Rakib
- **Shared by [Optional]:** More information needed
- **Model type:** Question Answering
- **Language(s) (NLP):** en
- **License:** MIT
- **Related Models:**
  - **Parent Model:** RoBERTa
- **Resources for more information:**
  - GitHub Repo: [defactolaw](https://github.com/afra-tech/defactolaw)
  - Associated Paper: [An Open Source Contractual Language Understanding Application Using Machine Learning](https://aclanthology.org/2022.lateraisse-1.6/)

# Uses

## Direct Use

This model can be used for the task of Question Answering on Legal Documents.

# Training Details

Read [An Open Source Contractual Language Understanding Application Using Machine Learning](https://aclanthology.org/2022.lateraisse-1.6/) for detailed information on the training procedure, dataset preprocessing and evaluation.

## Training Data

See the [CUAD dataset card](https://huggingface.co/datasets/cuad) for more information.

## Training Procedure

### Preprocessing

More information needed

### Speeds, Sizes, Times

More information needed

# Evaluation

## Testing Data, Factors & Metrics

### Testing Data

See the [CUAD dataset card](https://huggingface.co/datasets/cuad) for more information.

### Factors

### Metrics

More information needed

## Results

More information needed

# Model Examination

More information needed

# Environmental Impact

- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed

# Technical Specifications [optional]

## Model Architecture and Objective

More information needed

## Compute Infrastructure

More information needed

### Hardware Used

V100/P100 from Google Colab Pro

### Software

Python, Transformers

# Citation

**BibTeX:**

```
@inproceedings{nawar-etal-2022-open,
    title = "An Open Source Contractual Language Understanding Application Using Machine Learning",
    author = "Nawar, Afra and Rakib, Mohammed and Hai, Salma Abdul and Haq, Sanaulla",
    booktitle = "Proceedings of the First Workshop on Language Technology and Resources for a Fair, Inclusive, and Safe Society within the 13th Language Resources and Evaluation Conference",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.lateraisse-1.6",
    pages = "42--50",
    abstract = "Legal field is characterized by its exclusivity and non-transparency. Despite the frequency and relevance of legal dealings, legal documents like contracts remains elusive to non-legal professionals for the copious usage of legal jargon. There has been little advancement in making legal contracts more comprehensible. This paper presents how Machine Learning and NLP can be applied to solve this problem, further considering the challenges of applying ML to the high length of contract documents and training in a low resource environment. The largest open-source contract dataset so far, the Contract Understanding Atticus Dataset (CUAD) is utilized. Various pre-processing experiments and hyperparameter tuning have been carried out and we successfully managed to eclipse SOTA results presented for models in the CUAD dataset trained on RoBERTa-base.
Our model, A-type-RoBERTa-base achieved an AUPR score of 46.6{\%} compared to 42.6{\%} on the original RoBERT-base. This model is utilized in our end to end contract understanding application which is able to take a contract and highlight the clauses a user is looking to find along with it{'}s descriptions to aid due diligence before signing. Alongside digital, i.e. searchable, contracts the system is capable of processing scanned, i.e. non-searchable, contracts using tesseract OCR. This application is aimed to not only make contract review a comprehensible process to non-legal professionals, but also to help lawyers and attorneys more efficiently review contracts.",
}
```

# Glossary [optional]

More information needed

# More Information [optional]

More information needed

# Model Card Authors [optional]

Mohammed Rakib in collaboration with Ezi Ozoani and the Hugging Face team

# Model Card Contact

More information needed

# How to Get Started with the Model

Use the code below to get started with the model.

<details>
<summary> Click to expand </summary>

```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("Rakib/roberta-base-on-cuad")
model = AutoModelForQuestionAnswering.from_pretrained("Rakib/roberta-base-on-cuad")
```

</details>
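For a quick end-to-end check, the same checkpoint can be queried through the `question-answering` pipeline; the contract clause below is invented for illustration:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="Rakib/roberta-base-on-cuad")
result = qa(
    question="What is the governing law of this agreement?",
    context=(
        "This Agreement shall be governed by and construed in accordance "
        "with the laws of the State of New York."
    ),
)
print(result["answer"], result["score"])
```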
yisol/IDM-VTON
yisol
"2024-04-22T19:53:20Z"
56,466
377
diffusers
[ "diffusers", "onnx", "safetensors", "stable-diffusion-xl", "inpainting", "virtual try-on", "arxiv:2403.05139", "license:cc-by-nc-sa-4.0", "diffusers:StableDiffusionXLInpaintPipeline", "region:us" ]
image-to-image
"2024-03-28T20:42:50Z"
---
base_model: stable-diffusion-xl-1.0-inpainting-0.1
tags:
- stable-diffusion-xl
- inpainting
- virtual try-on
license: cc-by-nc-sa-4.0
---

# Check out more code on our [GitHub repository](https://github.com/yisol/IDM-VTON)!

# IDM-VTON: Improving Diffusion Models for Authentic Virtual Try-on in the Wild

This is an official implementation of the paper 'Improving Diffusion Models for Authentic Virtual Try-on in the Wild'.

- [paper](https://arxiv.org/abs/2403.05139)
- [project page](https://idm-vton.github.io/)

🤗 Try our Hugging Face [Demo](https://huggingface.co/spaces/yisol/IDM-VTON)

![teaser](assets/teaser.png)&nbsp;
![teaser2](assets/teaser2.png)&nbsp;

## TODO LIST

- [x] demo model
- [x] inference code
- [ ] training code

## Acknowledgements

For the demo, GPUs are provided by [zerogpu](https://huggingface.co/zero-gpu-explorers), and the automatic mask generation code is based on [OOTDiffusion](https://github.com/levihsu/OOTDiffusion) and [DCI-VTON](https://github.com/bcmi/DCI-VTON-Virtual-Try-On). Parts of the code are based on [IP-Adapter](https://github.com/tencent-ailab/IP-Adapter).

## Citation

```
@article{choi2024improving,
  title={Improving Diffusion Models for Virtual Try-on},
  author={Choi, Yisol and Kwak, Sangkyung and Lee, Kyungmin and Choi, Hyungwon and Shin, Jinwoo},
  journal={arXiv preprint arXiv:2403.05139},
  year={2024}
}
```

## License

The code and checkpoints in this repository are under the [CC BY-NC-SA 4.0 license](https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode).
vblagoje/bert-english-uncased-finetuned-pos
vblagoje
"2021-05-20T08:51:26Z"
56,250
36
transformers
[ "transformers", "pytorch", "jax", "bert", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2022-03-02T23:29:05Z"
Entry not found
smeby/Qwen-Qwen1.5-7B-1724632776
smeby
"2024-08-26T00:39:52Z"
56,138
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-7B", "base_model:adapter:Qwen/Qwen1.5-7B", "region:us" ]
null
"2024-08-26T00:39:36Z"
---
base_model: Qwen/Qwen1.5-7B
library_name: peft
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why.
-->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]

### Framework versions

- PEFT 0.12.0
liuhaotian/llava-v1.6-34b
liuhaotian
"2024-05-09T20:09:45Z"
56,102
316
LLaVA
[ "LLaVA", "safetensors", "llava", "image-text-to-text", "license:apache-2.0", "region:us" ]
image-text-to-text
"2024-01-31T03:05:58Z"
---
inference: false
license: apache-2.0
library_name: LLaVA
pipeline_tag: image-text-to-text
tags:
- llava
---

<br>
<br>

# LLaVA Model Card

## Model details

**Model type:**
LLaVA is an open-source chatbot trained by fine-tuning an LLM on multimodal instruction-following data. It is an auto-regressive language model, based on the transformer architecture.
Base LLM: [NousResearch/Nous-Hermes-2-Yi-34B](https://huggingface.co/NousResearch/Nous-Hermes-2-Yi-34B)

**Model date:**
LLaVA-v1.6-34B was trained in December 2023.

**Paper or resources for more information:**
https://llava-vl.github.io/

## License

[NousResearch/Nous-Hermes-2-Yi-34B](https://huggingface.co/NousResearch/Nous-Hermes-2-Yi-34B) license.

**Where to send questions or comments about the model:**
https://github.com/haotian-liu/LLaVA/issues

## Intended use

**Primary intended uses:**
The primary use of LLaVA is research on large multimodal models and chatbots.

**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.

## Training dataset

- 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP.
- 158K GPT-generated multimodal instruction-following data.
- 500K academic-task-oriented VQA data mixture.
- 50K GPT-4V data mixture.
- 40K ShareGPT data.

## Evaluation dataset

A collection of 12 benchmarks, including 5 academic VQA benchmarks and 7 recent benchmarks specifically proposed for instruction-following LMMs.
google/switch-base-128
google
"2023-01-24T17:20:02Z"
56,060
5
transformers
[ "transformers", "pytorch", "switch_transformers", "text2text-generation", "en", "dataset:c4", "arxiv:2101.03961", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
"2022-11-04T07:59:22Z"
---
language:
- en
tags:
- text2text-generation
widget:
- text: "The <extra_id_0> walks in <extra_id_1> park"
  example_title: "Masked Language Modeling"
datasets:
- c4
license: apache-2.0
---

# Model Card for Switch Transformers Base - 128 experts

![model image](https://s3.amazonaws.com/moonup/production/uploads/1666966931908-62441d1d9fdefb55a0b7d12c.png)

# Table of Contents

0. [TL;DR](#TL;DR)
1. [Model Details](#model-details)
2. [Usage](#usage)
3. [Uses](#uses)
4. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
5. [Training Details](#training-details)
6. [Evaluation](#evaluation)
7. [Environmental Impact](#environmental-impact)
8. [Citation](#citation)
9. [Model Card Authors](#model-card-authors)

# TL;DR

Switch Transformers is a Mixture of Experts (MoE) model trained on the Masked Language Modeling (MLM) task. The model architecture is similar to the classic T5, but with the Feed Forward layers replaced by Sparse MLP layers containing "expert" MLPs. According to the [original paper](https://arxiv.org/pdf/2101.03961.pdf) the model enables faster training (scaling properties) while being better than T5 on fine-tuned tasks. As mentioned in the first few lines of the abstract:

> we advance the current scale of language models by pre-training up to trillion parameter models on the "Colossal Clean Crawled Corpus", and achieve a 4x speedup over the T5-XXL model.

**Disclaimer**: Content from **this** model card has been written by the Hugging Face team, and parts of it were copy-pasted from the [original paper](https://arxiv.org/pdf/2101.03961.pdf).

# Model Details

## Model Description

- **Model type:** Language model
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Related Models:** [All Switch Transformers Checkpoints](https://huggingface.co/models?search=switch)
- **Original Checkpoints:** [All Original Switch Transformers Checkpoints](https://github.com/google-research/t5x/blob/main/docs/models.md#mixture-of-experts-moe-checkpoints)
- **Resources for more information:**
  - [Research paper](https://arxiv.org/pdf/2101.03961.pdf)
  - [GitHub Repo](https://github.com/google-research/t5x)
  - [Hugging Face Switch Transformers Docs (Similar to T5)](https://huggingface.co/docs/transformers/model_doc/switch_transformers)

# Usage

Note that these checkpoints have been trained on the Masked Language Modeling (MLM) task. Therefore the checkpoints are not "ready-to-use" for downstream tasks. You may want to check `FLAN-T5` for running fine-tuned weights, or fine-tune your own MoE following [this notebook](https://colab.research.google.com/drive/1aGGVHZmtKmcNBbAwa9hbu58DDpIuB5O4?usp=sharing).

Find below some example scripts on how to use the model in `transformers`:

## Using the Pytorch model

### Running the model on a CPU

<details>
<summary> Click to expand </summary>

```python
from transformers import AutoTokenizer, SwitchTransformersForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/switch-base-128")
model = SwitchTransformersForConditionalGeneration.from_pretrained("google/switch-base-128")

input_text = "A <extra_id_0> walks into a bar a orders a <extra_id_1> with <extra_id_2> pinch of <extra_id_3>."
input_ids = tokenizer(input_text, return_tensors="pt").input_ids

outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
>>> <pad> <extra_id_0> man<extra_id_1> beer<extra_id_2> a<extra_id_3> salt<extra_id_4>.</s>
```

</details>

### Running the model on a GPU

<details>
<summary> Click to expand </summary>

```python
# pip install accelerate
from transformers import AutoTokenizer, SwitchTransformersForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/switch-base-128")
model = SwitchTransformersForConditionalGeneration.from_pretrained("google/switch-base-128", device_map="auto")

input_text = "A <extra_id_0> walks into a bar and orders a <extra_id_1> with <extra_id_2> pinch of <extra_id_3>."
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(0)

outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
>>> <pad> <extra_id_0> man<extra_id_1> beer<extra_id_2> a<extra_id_3> salt<extra_id_4>.</s>
```

</details>

### Running the model on a GPU using different precisions

#### FP16

<details>
<summary> Click to expand </summary>

```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, SwitchTransformersForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/switch-base-128")
model = SwitchTransformersForConditionalGeneration.from_pretrained("google/switch-base-128", device_map="auto", torch_dtype=torch.float16)

input_text = "A <extra_id_0> walks into a bar and orders a <extra_id_1> with <extra_id_2> pinch of <extra_id_3>."
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(0)

outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
>>> <pad> <extra_id_0> man<extra_id_1> beer<extra_id_2> a<extra_id_3> salt<extra_id_4>.</s>
```

</details>

#### INT8

<details>
<summary> Click to expand </summary>

```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, SwitchTransformersForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/switch-base-128")
model = SwitchTransformersForConditionalGeneration.from_pretrained("google/switch-base-128", device_map="auto", load_in_8bit=True)

input_text = "A <extra_id_0> walks into a bar and orders a <extra_id_1> with <extra_id_2> pinch of <extra_id_3>."
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(0)

outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
>>> <pad> <extra_id_0> man<extra_id_1> beer<extra_id_2> a<extra_id_3> salt<extra_id_4>.</s>
```

</details>

# Uses

## Direct Use and Downstream Use

See the [research paper](https://arxiv.org/pdf/2101.03961.pdf) for further details.

## Out-of-Scope Use

More information needed.

# Bias, Risks, and Limitations

More information needed.

## Ethical considerations and risks

More information needed.

## Known Limitations

More information needed.

## Sensitive Use

More information needed.

# Training Details

## Training Data

The model was trained on a Masked Language Modeling task, on the Colossal Clean Crawled Corpus (C4) dataset, following the same procedure as `T5`.

## Training Procedure

According to the model card from the [original paper](https://arxiv.org/pdf/2101.03961.pdf), the model has been trained on TPU v3 or TPU v4 pods, using the [`t5x`](https://github.com/google-research/t5x) codebase together with [`jax`](https://github.com/google/jax).

# Evaluation

## Testing Data, Factors & Metrics

The authors evaluated the model on various tasks and compared the results against T5.
See the table below for some quantitative evaluation: ![image.png](https://s3.amazonaws.com/moonup/production/uploads/1666967660372-62441d1d9fdefb55a0b7d12c.png) For full details, please check the [research paper](https://arxiv.org/pdf/2101.03961.pdf). ## Results For full results for Switch Transformers, see the [research paper](https://arxiv.org/pdf/2101.03961.pdf), Table 5. # Environmental Impact Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** Google Cloud TPU Pods - TPU v3 or TPU v4 | Number of chips ≥ 4. - **Hours used:** More information needed - **Cloud Provider:** GCP - **Compute Region:** More information needed - **Carbon Emitted:** More information needed # Citation **BibTeX:** ```bibtex @misc{https://doi.org/10.48550/arxiv.2101.03961, doi = {10.48550/ARXIV.2101.03961}, url = {https://arxiv.org/abs/2101.03961}, author = {Fedus, William and Zoph, Barret and Shazeer, Noam}, keywords = {Machine Learning (cs.LG), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity}, publisher = {arXiv}, year = {2021}, copyright = {arXiv.org perpetual, non-exclusive license} } ```
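To make the expert-routing idea from the TL;DR concrete, here is a toy, self-contained PyTorch sketch of a top-1 ("switch") MoE feed-forward layer. All names and sizes are illustrative, and it omits the load-balancing loss and expert-capacity logic of the real implementation in `transformers`; it is not this checkpoint's code.

```python
import torch
import torch.nn as nn

class ToySwitchFFN(nn.Module):
    """Illustrative top-1 ("switch") MoE feed-forward layer. Not the real implementation."""

    def __init__(self, d_model=64, d_ff=128, n_experts=4):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)   # learned router
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                             # x: (n_tokens, d_model)
        probs = torch.softmax(self.router(x), dim=-1) # routing probabilities
        gate, idx = probs.max(dim=-1)                 # each token is sent to ONE expert
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = idx == e
            if mask.any():                            # only routed tokens pass through expert e
                out[mask] = gate[mask, None] * expert(x[mask])
        return out

x = torch.randn(10, 64)                               # 10 tokens
print(ToySwitchFFN()(x).shape)                        # torch.Size([10, 64])
```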
AutonLab/MOMENT-1-large
AutonLab
"2024-08-15T15:11:21Z"
55,817
44
transformers
[ "transformers", "pytorch", "safetensors", "time series", "forecasting", "classification", "anomaly detection", "imputation", "pretrained models", "foundation models", "time-series", "time-series-forecasting", "dataset:AutonLab/Timeseries-PILE", "arxiv:2402.03885", "license:mit", "endpoints_compatible", "region:us" ]
time-series-forecasting
"2024-05-09T15:51:06Z"
---
license: mit
datasets:
- AutonLab/Timeseries-PILE
metrics:
- accuracy
- mse
- mae
- f1
tags:
- time series
- forecasting
- classification
- anomaly detection
- imputation
- transformers
- pretrained models
- foundation models
- time-series
pipeline_tag: time-series-forecasting
---

# MOMENT-Large

MOMENT is a family of foundation models for general-purpose time-series analysis. The models in this family (1) serve as a building block for diverse **time-series analysis tasks** (e.g., forecasting, classification, anomaly detection, and imputation), (2) are effective **out-of-the-box**, i.e., with no (or few) task-specific exemplars (enabling, e.g., zero-shot forecasting, few-shot classification, etc.), and (3) are **tunable** using in-distribution and task-specific data to improve performance.

For details on MOMENT models, training data, and experimental results, please refer to the paper [MOMENT: A Family of Open Time-series Foundation Models](https://arxiv.org/pdf/2402.03885.pdf).

# Usage

**Recommended Python Version:** Python 3.11 (support for additional versions is expected soon).

You can install the `momentfm` package using pip:
```bash
pip install momentfm
```
Alternatively, to install the latest version directly from the GitHub repository:
```bash
pip install git+https://github.com/moment-timeseries-foundation-model/moment.git
```

To load the pre-trained model for one of the tasks, use one of the following code snippets:

**Forecasting**
```python
from momentfm import MOMENTPipeline

model = MOMENTPipeline.from_pretrained(
    "AutonLab/MOMENT-1-large",
    model_kwargs={
        'task_name': 'forecasting',
        'forecast_horizon': 96
    },
)
model.init()
```

**Classification**
```python
from momentfm import MOMENTPipeline

model = MOMENTPipeline.from_pretrained(
    "AutonLab/MOMENT-1-large",
    model_kwargs={
        'task_name': 'classification',
        'n_channels': 1,
        'num_class': 2
    },
)
model.init()
```

**Anomaly Detection, Imputation, and Pre-training**
```python
from momentfm import MOMENTPipeline

model = MOMENTPipeline.from_pretrained(
    "AutonLab/MOMENT-1-large",
    model_kwargs={"task_name": "reconstruction"},
)
model.init()
```

**Representation Learning**
```python
from momentfm import MOMENTPipeline

model = MOMENTPipeline.from_pretrained(
    "AutonLab/MOMENT-1-large",
    model_kwargs={'task_name': 'embedding'},
)
```

### Tutorials

Here is a list of tutorials and reproducible experiments to get started with MOMENT for various tasks:
- [Forecasting](https://github.com/moment-timeseries-foundation-model/moment/blob/main/tutorials/forecasting.ipynb)
- [Classification](https://github.com/moment-timeseries-foundation-model/moment/blob/main/tutorials/classification.ipynb)
- [Anomaly Detection](https://github.com/moment-timeseries-foundation-model/moment/blob/main/tutorials/anomaly_detection.ipynb)
- [Imputation](https://github.com/moment-timeseries-foundation-model/moment/blob/main/tutorials/imputation.ipynb)
- [Representation Learning](https://github.com/moment-timeseries-foundation-model/moment/blob/main/tutorials/representation_learning.ipynb)
- [Real-world Electrocardiogram (ECG) Case Study](https://github.com/moment-timeseries-foundation-model/moment/blob/main/tutorials/ptbxl_classification.ipynb) -- This tutorial also shows how to fine-tune MOMENT for a real-world ECG classification problem, performing training and inference on multiple GPUs and parameter-efficient fine-tuning (PEFT).
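The snippets above only construct the pipelines. As a rough end-to-end sketch of extracting embeddings — assuming MOMENT's 512-step input window and the `model(x_enc=...)` forward signature used in the tutorials listed above — one could write:

```python
import torch
from momentfm import MOMENTPipeline

model = MOMENTPipeline.from_pretrained(
    "AutonLab/MOMENT-1-large",
    model_kwargs={"task_name": "embedding"},
)
model.init()

x = torch.randn(8, 1, 512)   # (batch, channels, time): 8 univariate series of 512 steps
with torch.no_grad():
    output = model(x_enc=x)  # `x_enc` keyword assumed from the official tutorials

print(output.embeddings.shape)  # one fixed-size embedding per series, e.g. torch.Size([8, 1024])
```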
## Model Details

### Model Description

- **Developed by:** [Auton Lab](https://autonlab.org/), [Carnegie Mellon University](https://www.cmu.edu/) and [University of Pennsylvania](https://www.upenn.edu/)
- **Model type:** Time-series Foundation Model
- **License:** MIT License

### Model Sources

<!-- Provide the basic links for the model. -->

- **Repository:** https://github.com/moment-timeseries-foundation-model/ (Pre-training and research code coming out soon!)
- **Paper:** https://arxiv.org/abs/2402.03885
- **Demo:** https://github.com/moment-timeseries-foundation-model/moment/tree/main/tutorials

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

We train multiple models over many days, resulting in significant energy usage and a sizeable carbon footprint. However, we hope that releasing our models will ensure that future time-series modeling efforts are quicker and more efficient, resulting in lower carbon emissions.

We use the Total Graphics Power (TGP) to calculate the total power consumed for training MOMENT models, although the total power consumed by the GPU will likely vary a little based on the GPU utilization while training our model. Our calculations do not account for power demands from other sources of our compute. We use 336.566 kg CO2/MWh as the standard value of CO2 emission per megawatt hour of energy consumed for [Pittsburgh](https://emissionsindex.org/).

- **Hardware Type:** NVIDIA RTX A6000 GPU
- **GPU Hours:** 404
- **Compute Region:** Pittsburgh, USA
- **Carbon Emission (tCO2eq):**

#### Hardware

All models were trained and evaluated on a computing cluster consisting of 128 AMD EPYC 7502 CPUs, 503 GB of RAM, and 8 NVIDIA RTX A6000 GPUs, each with 49 GiB RAM. All MOMENT variants were trained on a single A6000 GPU (without any data or model parallelism).

## Citation

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**
If you use MOMENT please cite our paper:

```bibtex
@inproceedings{goswami2024moment,
  title={MOMENT: A Family of Open Time-series Foundation Models},
  author={Mononito Goswami and Konrad Szafer and Arjun Choudhry and Yifu Cai and Shuo Li and Artur Dubrawski},
  booktitle={International Conference on Machine Learning},
  year={2024}
}
```

**APA:**
Goswami, M., Szafer, K., Choudhry, A., Cai, Y., Li, S., & Dubrawski, A. (2024). MOMENT: A Family of Open Time-series Foundation Models. In International Conference on Machine Learning. PMLR.
Yntec/RealLife
Yntec
"2024-01-03T18:07:52Z"
55,423
9
diffusers
[ "diffusers", "safetensors", "Photorealistic", "Base model", "Abstract", "Fusch", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2024-01-03T16:54:45Z"
--- license: creativeml-openrail-m library_name: diffusers pipeline_tag: text-to-image tags: - Photorealistic - Base model - Abstract - Fusch - stable-diffusion - stable-diffusion-diffusers - diffusers --- # Real Life v2 Original page: https://civitai.com/models/171814?modelVersionId=219513 Samples and prompts: ![Real Life AI image generator samples](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/l1qmb1wUZi5NZezrrFZ8P.png) (Click for larger) Top left: 1985 movie sreenshoot Santa Claus with wife and little daughter enjoying pancake with honey. sitting with a pretty cute girl, Art Christmas Theme by Gil_Elvgren and Haddon_Sundblom Top right: a painting of a white panther cub by Bnhr, nature, grass, tree, outdoors, forest, animal focus, yellow eyes, Bottom left: view from diagonally above, central adjustment, skinny young northern european female, long reddish blonde hair, real hair movement, elongated head, beautiful face, grey eyes, thin bowed eyebrows, snub nose, gentle lower jaw line, narrow chin, da vinci lips, slightly smiling with parted lips, curious friendly facial expression, small, slim narrow tapered hips Bottom right: kodachrome camera transparency, dramatic lighting film grain, PARTY HARD BACKGROUND, pretty cute little girl in Zone 51, Extraterrestrial, Alien Space Ship Delivering Christmas Presents, Alien Space Ship Decorated With Garlands and Christmas Balls, Snowstorm ![Real Life Free Text to image generator](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/-uofDpD2UUWdNxhjYmOG5.png)
blanchefort/rubert-base-cased-sentiment
blanchefort
"2023-04-06T04:06:36Z"
55,265
12
transformers
[ "transformers", "pytorch", "tf", "jax", "safetensors", "bert", "text-classification", "sentiment", "ru", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2022-03-02T23:29:05Z"
---
language:
- ru
tags:
- sentiment
- text-classification
---

# RuBERT for Sentiment Analysis

Sentiment classification of short Russian texts.

This is a [DeepPavlov/rubert-base-cased-conversational](https://huggingface.co/DeepPavlov/rubert-base-cased-conversational) model trained on an aggregated corpus of 351,797 texts.

## Labels

0: NEUTRAL
1: POSITIVE
2: NEGATIVE

## How to use

```python
import torch
from transformers import AutoModelForSequenceClassification
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained('blanchefort/rubert-base-cased-sentiment')
model = AutoModelForSequenceClassification.from_pretrained('blanchefort/rubert-base-cased-sentiment', return_dict=True)

@torch.no_grad()
def predict(text):
    inputs = tokenizer(text, max_length=512, padding=True, truncation=True, return_tensors='pt')
    outputs = model(**inputs)
    predicted = torch.nn.functional.softmax(outputs.logits, dim=1)
    predicted = torch.argmax(predicted, dim=1).numpy()
    return predicted

# Example ("What a vile thing your aspic is!"):
print(predict('Какая гадость эта ваша заливная рыба!'))  # -> [2], i.e. NEGATIVE
```

## Datasets used for model training

**[RuTweetCorp](https://study.mokoron.com/)**
> Rubtsova Y. Automatic construction and analysis of a corpus of short texts (microblog posts) for developing and training a sentiment classifier // Knowledge Engineering and Semantic Web Technologies. – 2012. – Vol. 1. – P. 109–116. (in Russian)

**[RuReviews](https://github.com/sismetanin/rureviews)**
> RuReviews: An Automatically Annotated Sentiment Analysis Dataset for Product Reviews in Russian.

**[RuSentiment](http://text-machine.cs.uml.edu/projects/rusentiment/)**
> A. Rogers, A. Romanov, A. Rumshisky, S. Volkova, M. Gronas, A. Gribov. RuSentiment: An Enriched Sentiment Analysis Dataset for Social Media in Russian. Proceedings of COLING 2018.

**[Reviews of medical institutions](https://github.com/blanchefort/datasets/tree/master/medical_comments)**
> The dataset contains user reviews of medical institutions, collected in May 2019 from the website prodoctorov.ru.
microsoft/Florence-2-large-ft
microsoft
"2024-07-20T00:12:52Z"
55,246
283
transformers
[ "transformers", "pytorch", "florence2", "text-generation", "vision", "image-text-to-text", "custom_code", "arxiv:2311.06242", "license:mit", "autotrain_compatible", "region:us" ]
image-text-to-text
"2024-06-15T00:57:45Z"
---
license: mit
license_link: https://huggingface.co/microsoft/Florence-2-large-ft/resolve/main/LICENSE
pipeline_tag: image-text-to-text
tags:
- vision
---

# Florence-2: Advancing a Unified Representation for a Variety of Vision Tasks

## Model Summary

This Hub repository contains a Hugging Face `transformers` implementation of the Florence-2 model from Microsoft.

Florence-2 is an advanced vision foundation model that uses a prompt-based approach to handle a wide range of vision and vision-language tasks. Florence-2 can interpret simple text prompts to perform tasks like captioning, object detection, and segmentation. It leverages our FLD-5B dataset, containing 5.4 billion annotations across 126 million images, to master multi-task learning. The model's sequence-to-sequence architecture enables it to excel in both zero-shot and fine-tuned settings, proving to be a competitive vision foundation model.

Resources and Technical Documentation:
+ [Florence-2 technical report](https://arxiv.org/abs/2311.06242).
+ [Jupyter Notebook for inference and visualization of Florence-2-large model](https://huggingface.co/microsoft/Florence-2-large/blob/main/sample_inference.ipynb)

| Model | Model size | Model Description |
| ------- | ------------- | ------------- |
| Florence-2-base[[HF]](https://huggingface.co/microsoft/Florence-2-base) | 0.23B | Pretrained model with FLD-5B
| Florence-2-large[[HF]](https://huggingface.co/microsoft/Florence-2-large) | 0.77B | Pretrained model with FLD-5B
| Florence-2-base-ft[[HF]](https://huggingface.co/microsoft/Florence-2-base-ft) | 0.23B | Finetuned model on a collection of downstream tasks
| Florence-2-large-ft[[HF]](https://huggingface.co/microsoft/Florence-2-large-ft) | 0.77B | Finetuned model on a collection of downstream tasks

## How to Get Started with the Model

Use the code below to get started with the model. All models are trained with float16.

```python
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM

device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

model = AutoModelForCausalLM.from_pretrained("microsoft/Florence-2-large-ft", torch_dtype=torch_dtype, trust_remote_code=True).to(device)
processor = AutoProcessor.from_pretrained("microsoft/Florence-2-large-ft", trust_remote_code=True)

prompt = "<OD>"

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(text=prompt, images=image, return_tensors="pt").to(device, torch_dtype)

generated_ids = model.generate(
    input_ids=inputs["input_ids"],
    pixel_values=inputs["pixel_values"],
    max_new_tokens=1024,
    do_sample=False,
    num_beams=3
)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]

parsed_answer = processor.post_process_generation(generated_text, task="<OD>", image_size=(image.width, image.height))

print(parsed_answer)
```

## Tasks

This model is capable of performing different tasks through changing the prompts.

First, let's define a function to run a prompt.
<details>
<summary> Click to expand </summary>

```python
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM

device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

model = AutoModelForCausalLM.from_pretrained("microsoft/Florence-2-large-ft", torch_dtype=torch_dtype, trust_remote_code=True).to(device)
processor = AutoProcessor.from_pretrained("microsoft/Florence-2-large-ft", trust_remote_code=True)

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true"
image = Image.open(requests.get(url, stream=True).raw)

def run_example(task_prompt, text_input=None):
    if text_input is None:
        prompt = task_prompt
    else:
        prompt = task_prompt + text_input
    inputs = processor(text=prompt, images=image, return_tensors="pt").to(device, torch_dtype)
    generated_ids = model.generate(
        input_ids=inputs["input_ids"],
        pixel_values=inputs["pixel_values"],
        max_new_tokens=1024,
        num_beams=3
    )
    generated_text = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]
    parsed_answer = processor.post_process_generation(generated_text, task=task_prompt, image_size=(image.width, image.height))
    print(parsed_answer)
```

</details>

Here are the tasks `Florence-2` could perform:

<details>
<summary> Click to expand </summary>

### Caption

```python
prompt = "<CAPTION>"
run_example(prompt)
```

### Detailed Caption

```python
prompt = "<DETAILED_CAPTION>"
run_example(prompt)
```

### More Detailed Caption

```python
prompt = "<MORE_DETAILED_CAPTION>"
run_example(prompt)
```

### Caption to Phrase Grounding

The caption-to-phrase-grounding task requires additional text input, i.e., a caption.

Caption to phrase grounding results format:
{'\<CAPTION_TO_PHRASE_GROUNDING>': {'bboxes': [[x1, y1, x2, y2], ...], 'labels': ['', '', ...]}}

```python
task_prompt = "<CAPTION_TO_PHRASE_GROUNDING>"
results = run_example(task_prompt, text_input="A green car parked in front of a yellow building.")
```

### Object Detection

OD results format:
{'\<OD>': {'bboxes': [[x1, y1, x2, y2], ...], 'labels': ['label1', 'label2', ...]}}

```python
prompt = "<OD>"
run_example(prompt)
```

### Dense Region Caption

Dense region caption results format:
{'\<DENSE_REGION_CAPTION>': {'bboxes': [[x1, y1, x2, y2], ...], 'labels': ['label1', 'label2', ...]}}

```python
prompt = "<DENSE_REGION_CAPTION>"
run_example(prompt)
```

### Region Proposal

Region proposal results format:
{'\<REGION_PROPOSAL>': {'bboxes': [[x1, y1, x2, y2], ...], 'labels': ['', '', ...]}}

```python
prompt = "<REGION_PROPOSAL>"
run_example(prompt)
```

### OCR

```python
prompt = "<OCR>"
run_example(prompt)
```

### OCR with Region

OCR with region output format:
{'\<OCR_WITH_REGION>': {'quad_boxes': [[x1, y1, x2, y2, x3, y3, x4, y4], ...], 'labels': ['text1', ...]}}

```python
prompt = "<OCR_WITH_REGION>"
run_example(prompt)
```

For more detailed examples, please refer to the [notebook](https://huggingface.co/microsoft/Florence-2-large/blob/main/sample_inference.ipynb).

</details>

# Benchmarks

## Florence-2 Zero-shot performance

The following table presents the zero-shot performance of generalist vision foundation models on image captioning and object detection evaluation tasks. These models have not been exposed to the training data of the evaluation tasks during their training phase.

| Method | #params | COCO Cap. test CIDEr | NoCaps val CIDEr | TextCaps val CIDEr | COCO Det. val2017 mAP |
|--------|---------|----------------------|------------------|--------------------|-----------------------|
| Flamingo | 80B | 84.3 | - | - | - |
| Florence-2-base | 0.23B | 133.0 | 118.7 | 70.1 | 34.7 |
| Florence-2-large | 0.77B | 135.6 | 120.8 | 72.8 | 37.5 |

The following table continues the comparison with performance on other vision-language evaluation tasks.

| Method | Flickr30k test R@1 | Refcoco val Accuracy | Refcoco test-A Accuracy | Refcoco test-B Accuracy | Refcoco+ val Accuracy | Refcoco+ test-A Accuracy | Refcoco+ test-B Accuracy | Refcocog val Accuracy | Refcocog test Accuracy | Refcoco RES val mIoU |
|--------|----------------------|----------------------|-------------------------|-------------------------|-----------------------|--------------------------|--------------------------|-----------------------|------------------------|----------------------|
| Kosmos-2 | 78.7 | 52.3 | 57.4 | 47.3 | 45.5 | 50.7 | 42.2 | 60.6 | 61.7 | - |
| Florence-2-base | 83.6 | 53.9 | 58.4 | 49.7 | 51.5 | 56.4 | 47.9 | 66.3 | 65.1 | 34.6 |
| Florence-2-large | 84.4 | 56.3 | 61.6 | 51.4 | 53.6 | 57.9 | 49.9 | 68.0 | 67.0 | 35.8 |

## Florence-2 finetuned performance

We finetune Florence-2 models with a collection of downstream tasks, resulting in two generalist models *Florence-2-base-ft* and *Florence-2-large-ft* that can conduct a wide range of downstream tasks.

The table below compares the performance of specialist and generalist models on various captioning and Visual Question Answering (VQA) tasks. Specialist models are fine-tuned specifically for each task, whereas generalist models are fine-tuned in a task-agnostic manner across all tasks. The symbol "▲" indicates the usage of external OCR as input.

| Method | # Params | COCO Caption Karpathy test CIDEr | NoCaps val CIDEr | TextCaps val CIDEr | VQAv2 test-dev Acc | TextVQA test-dev Acc | VizWiz VQA test-dev Acc |
|----------------|----------|-----------------------------------|------------------|--------------------|--------------------|----------------------|-------------------------|
| **Specialist Models** | | | | | | | |
| CoCa | 2.1B | 143.6 | 122.4 | - | 82.3 | - | - |
| BLIP-2 | 7.8B | 144.5 | 121.6 | - | 82.2 | - | - |
| GIT2 | 5.1B | 145.0 | 126.9 | 148.6 | 81.7 | 67.3 | 71.0 |
| Flamingo | 80B | 138.1 | - | - | 82.0 | 54.1 | 65.7 |
| PaLI | 17B | 149.1 | 127.0 | 160.0▲ | 84.3 | 58.8 / 73.1▲ | 71.6 / 74.4▲ |
| PaLI-X | 55B | 149.2 | 126.3 | 147.0 / 163.7▲ | 86.0 | 71.4 / 80.8▲ | 70.9 / 74.6▲ |
| **Generalist Models** | | | | | | | |
| Unified-IO | 2.9B | - | 100.0 | - | 77.9 | - | 57.4 |
| Florence-2-base-ft | 0.23B | 140.0 | 116.7 | 143.9 | 79.7 | 63.6 | 63.6 |
| Florence-2-large-ft | 0.77B | 143.3 | 124.9 | 151.1 | 81.7 | 73.5 | 72.6 |

| Method | # Params | COCO Det. val2017 mAP | Flickr30k test R@1 | RefCOCO val Accuracy | RefCOCO test-A Accuracy | RefCOCO test-B Accuracy | RefCOCO+ val Accuracy | RefCOCO+ test-A Accuracy | RefCOCO+ test-B Accuracy | RefCOCOg val Accuracy | RefCOCOg test Accuracy | RefCOCO RES val mIoU |
|----------------------|----------|-----------------------|--------------------|----------------------|-------------------------|-------------------------|------------------------|---------------------------|---------------------------|------------------------|-----------------------|------------------------|
| **Specialist Models** | | | | | | | | | | | | |
| SeqTR | - | - | - | 83.7 | 86.5 | 81.2 | 71.5 | 76.3 | 64.9 | 74.9 | 74.2 | - |
| PolyFormer | - | - | - | 90.4 | 92.9 | 87.2 | 85.0 | 89.8 | 78.0 | 85.8 | 85.9 | 76.9 |
| UNINEXT | 0.74B | 60.6 | - | 92.6 | 94.3 | 91.5 | 85.2 | 89.6 | 79.8 | 88.7 | 89.4 | - |
| Ferret | 13B | - | - | 89.5 | 92.4 | 84.4 | 82.8 | 88.1 | 75.2 | 85.8 | 86.3 | - |
| **Generalist Models** | | | | | | | | | | | | |
| UniTAB | - | - | - | 88.6 | 91.1 | 83.8 | 81.0 | 85.4 | 71.6 | 84.6 | 84.7 | - |
| Florence-2-base-ft | 0.23B | 41.4 | 84.0 | 92.6 | 94.8 | 91.5 | 86.8 | 91.7 | 82.2 | 89.8 | 82.2 | 78.0 |
| Florence-2-large-ft | 0.77B | 43.4 | 85.2 | 93.4 | 95.3 | 92.0 | 88.3 | 92.9 | 83.6 | 91.2 | 91.7 | 80.5 |

## BibTex and citation info

```
@article{xiao2023florence,
  title={Florence-2: Advancing a unified representation for a variety of vision tasks},
  author={Xiao, Bin and Wu, Haiping and Xu, Weijian and Dai, Xiyang and Hu, Houdong and Lu, Yumao and Zeng, Michael and Liu, Ce and Yuan, Lu},
  journal={arXiv preprint arXiv:2311.06242},
  year={2023}
}
```
TheBloke/Mixtral-8x7B-Instruct-v0.1-GGUF
TheBloke
"2023-12-14T14:30:43Z"
55,174
594
transformers
[ "transformers", "gguf", "mixtral", "fr", "it", "de", "es", "en", "base_model:mistralai/Mixtral-8x7B-Instruct-v0.1", "base_model:quantized:mistralai/Mixtral-8x7B-Instruct-v0.1", "license:apache-2.0", "text-generation-inference", "region:us" ]
null
"2023-12-11T18:08:33Z"
---
base_model: mistralai/Mixtral-8x7B-Instruct-v0.1
inference: false
language:
- fr
- it
- de
- es
- en
license: apache-2.0
model_creator: Mistral AI_
model_name: Mixtral 8X7B Instruct v0.1
model_type: mixtral
prompt_template: '[INST] {prompt} [/INST]
  '
quantized_by: TheBloke
widget:
- output:
    text: 'Arr, shiver me timbers! Ye have a llama on yer lawn, ye say? Well, that
      be a new one for me! Here''s what I''d suggest, arr:

      1. Firstly, ensure yer safety. Llamas may look gentle, but they can be protective
      if they feel threatened.

      2. Try to make the area less appealing to the llama. Remove any food sources
      or water that might be attracting it.

      3. Contact local animal control or a wildlife rescue organization. They be the
      experts and can provide humane ways to remove the llama from yer property.

      4. If ye have any experience with animals, you could try to gently herd the
      llama towards a nearby field or open space. But be careful, arr!

      Remember, arr, it be important to treat the llama with respect and care. It
      be a creature just trying to survive, like the rest of us.'
  text: '[INST] You are a pirate chatbot who always responds with Arr and pirate speak!
    There''s a llama on my lawn, how can I get rid of him? [/INST]'
---

<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# Mixtral 8X7B Instruct v0.1 - GGUF
- Model creator: [Mistral AI_](https://huggingface.co/mistralai)
- Original model: [Mixtral 8X7B Instruct v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)

<!-- description start -->
## Description

This repo contains GGUF format model files for [Mistral AI_'s Mixtral 8X7B Instruct v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1).

<!-- description end -->

<!-- README_GGUF.md-about-gguf start -->
### About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

### Mixtral GGUF

Support for Mixtral was merged into llama.cpp on December 13th.

These Mixtral GGUFs are known to work in:

* llama.cpp as of December 13th
* KoboldCpp 1.52 and later
* LM Studio 0.2.9 and later
* llama-cpp-python 0.2.23 and later

Other clients/libraries, not listed above, may not yet work.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available

* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-GGUF)
* [Mistral AI_'s original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
<!-- repositories-available end -->

<!-- prompt-template start -->
## Prompt template: Mistral

```
[INST] {prompt} [/INST]
```

<!-- prompt-template end -->

<!-- compatibility_gguf start -->
## Compatibility

These Mixtral GGUFs are compatible with llama.cpp from December 13th onwards. Other clients/libraries may not work yet.

## Explanation of quantisation methods

<details>
<summary>Click to see details</summary>

The new methods available are:

* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.

Refer to the Provided Files table below to see what files use which methods, and how.
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [mixtral-8x7b-instruct-v0.1.Q2_K.gguf](https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-GGUF/blob/main/mixtral-8x7b-instruct-v0.1.Q2_K.gguf) | Q2_K | 2 | 15.64 GB| 18.14 GB | smallest, significant quality loss - not recommended for most purposes | | [mixtral-8x7b-instruct-v0.1.Q3_K_M.gguf](https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-GGUF/blob/main/mixtral-8x7b-instruct-v0.1.Q3_K_M.gguf) | Q3_K_M | 3 | 20.36 GB| 22.86 GB | very small, high quality loss | | [mixtral-8x7b-instruct-v0.1.Q4_0.gguf](https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-GGUF/blob/main/mixtral-8x7b-instruct-v0.1.Q4_0.gguf) | Q4_0 | 4 | 26.44 GB| 28.94 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf](https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-GGUF/blob/main/mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf) | Q4_K_M | 4 | 26.44 GB| 28.94 GB | medium, balanced quality - recommended | | [mixtral-8x7b-instruct-v0.1.Q5_0.gguf](https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-GGUF/blob/main/mixtral-8x7b-instruct-v0.1.Q5_0.gguf) | Q5_0 | 5 | 32.23 GB| 34.73 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [mixtral-8x7b-instruct-v0.1.Q5_K_M.gguf](https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-GGUF/blob/main/mixtral-8x7b-instruct-v0.1.Q5_K_M.gguf) | Q5_K_M | 5 | 32.23 GB| 34.73 GB | large, very low quality loss - recommended | | [mixtral-8x7b-instruct-v0.1.Q6_K.gguf](https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-GGUF/blob/main/mixtral-8x7b-instruct-v0.1.Q6_K.gguf) | Q6_K | 6 | 38.38 GB| 40.88 GB | very large, extremely low quality loss | | [mixtral-8x7b-instruct-v0.1.Q8_0.gguf](https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-GGUF/blob/main/mixtral-8x7b-instruct-v0.1.Q8_0.gguf) | Q8_0 | 8 | 49.62 GB| 52.12 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/Mixtral-8x7B-Instruct-v0.1-GGUF and below it, a specific filename to download, such as: mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/Mixtral-8x7B-Instruct-v0.1-GGUF mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf --local-dir . 
--local-dir-use-symlinks False
```

<details>
  <summary>More advanced huggingface-cli download usage (click to read)</summary>

You can also download multiple files at once with a pattern:

```shell
huggingface-cli download TheBloke/Mixtral-8x7B-Instruct-v0.1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).

To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:

```shell
pip3 install hf_transfer
```

And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Mixtral-8x7B-Instruct-v0.1-GGUF mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->

<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command

Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.

```shell
./main -ngl 35 -m mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "[INST] {prompt} [/INST]"
```

Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`

For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)

## How to run in `text-generation-webui`

Note that text-generation-webui may not yet be compatible with Mixtral GGUFs. Please check compatibility first.

Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).

## How to run from Python code

You can use GGUF models from Python using [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) version 0.2.23 and later.

### How to load this model in Python code, using llama-cpp-python

For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package

Run one of the following commands, according to your system:

```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python

# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```

#### Simple llama-cpp-python example code

```python
from llama_cpp import Llama

# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
  model_path="./mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf",  # Download the model file first
  n_ctx=2048,  # The max sequence length to use - note that longer sequence lengths require much more resources
  n_threads=8,  # The number of CPU threads to use, tailor to your system and the resulting performance
  n_gpu_layers=35  # The number of layers to offload to GPU, if you have GPU acceleration available
)

# Simple inference example
output = llm(
  "[INST] {prompt} [/INST]",  # Prompt
  max_tokens=512,  # Generate up to 512 tokens
  stop=["</s>"],  # Example stop token - not necessarily correct for this specific model! Please check before using.
  echo=True  # Whether to echo the prompt
)

# Chat Completion API

llm = Llama(model_path="./mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf", chat_format="llama-2")  # Set chat_format according to the model you are using
llm.create_chat_completion(
    messages = [
        {"role": "system", "content": "You are a story writing assistant."},
        {
            "role": "user",
            "content": "Write a story about llamas."
        }
    ]
)
```

## How to use with LangChain

Here is a guide on using llama-cpp-python with LangChain:

* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)

<!-- README_GGUF.md-how-to-run end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros

Thank you to all my generous patrons and donaters!

And thank you again to a16z for their generous grant.

<!-- footer end -->

<!-- original-model-card start -->
# Original model card: Mistral AI_'s Mixtral 8X7B Instruct v0.1

# Model Card for Mixtral-8x7B

The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts. The Mixtral-8x7B outperforms Llama 2 70B on most benchmarks we tested.

For full details of this model please read our [release blog post](https://mistral.ai/news/mixtral-of-experts/).

## Warning

This repo contains weights that are compatible with [vLLM](https://github.com/vllm-project/vllm) serving of the model as well as the Hugging Face [transformers](https://github.com/huggingface/transformers) library. It is based on the original Mixtral [torrent release](magnet:?xt=urn:btih:5546272da9065eddeb6fcd7ffddeef5b75be79a7&dn=mixtral-8x7b-32kseqlen&tr=udp%3A%2F%2Fopentracker.i2p.rocks%3A6969%2Fannounce&tr=http%3A%2F%2Ftracker.openbittorrent.com%3A80%2Fannounce), but the file format and parameter names are different. Please note that the model cannot (yet) be instantiated with HF.

## Instruction format

This format must be strictly respected, otherwise the model will generate sub-optimal outputs.

The template used to build a prompt for the Instruct model is defined as follows:
```
<s> [INST] Instruction [/INST] Model answer</s> [INST] Follow-up instruction [/INST]
```
Note that `<s>` and `</s>` are special tokens for beginning of string (BOS) and end of string (EOS) while [INST] and [/INST] are regular strings.
As reference, here is the pseudo-code used to tokenize instructions during fine-tuning:
```python
def tokenize(text):
    return tok.encode(text, add_special_tokens=False)

[BOS_ID] +
tokenize("[INST]") + tokenize(USER_MESSAGE_1) + tokenize("[/INST]") +
tokenize(BOT_MESSAGE_1) + [EOS_ID] +
…
tokenize("[INST]") + tokenize(USER_MESSAGE_N) + tokenize("[/INST]") +
tokenize(BOT_MESSAGE_N) + [EOS_ID]
```

In the pseudo-code above, note that the `tokenize` method should not add a BOS or EOS token automatically, but should add a prefix space.

## Run the model

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)

model = AutoModelForCausalLM.from_pretrained(model_id)

text = "Hello my name is"
inputs = tokenizer(text, return_tensors="pt")

outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

By default, transformers will load the model in full precision. Therefore you might be interested in further reducing the memory requirements to run the model through the optimizations we offer in the HF ecosystem:

### In half-precision

Note `float16` precision only works on GPU devices

<details>
<summary> Click to expand </summary>

```diff
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)

+ model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to(0)

text = "Hello my name is"
+ inputs = tokenizer(text, return_tensors="pt").to(0)

outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
</details>

### Lower precision (8-bit & 4-bit) using `bitsandbytes`

<details>
<summary> Click to expand </summary>

```diff
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)

+ model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True)

text = "Hello my name is"
+ inputs = tokenizer(text, return_tensors="pt").to(0)

outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
</details>

### Load the model with Flash Attention 2

<details>
<summary> Click to expand </summary>

```diff
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)

+ model = AutoModelForCausalLM.from_pretrained(model_id, use_flash_attention_2=True)

text = "Hello my name is"
+ inputs = tokenizer(text, return_tensors="pt").to(0)

outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
</details>

## Limitations

The Mixtral-8x7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance.
It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.
# The Mistral AI Team Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Louis Ternon, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed. <!-- original-model-card end -->
cointegrated/rubert-tiny-sentiment-balanced
cointegrated
"2023-03-20T09:53:10Z"
55,124
13
transformers
[ "transformers", "pytorch", "safetensors", "bert", "text-classification", "russian", "classification", "sentiment", "multiclass", "ru", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2022-03-02T23:29:05Z"
--- language: ["ru"] tags: - russian - classification - sentiment - multiclass widget: - text: "Какая гадость эта ваша заливная рыба!" --- This is the [cointegrated/rubert-tiny](https://huggingface.co/cointegrated/rubert-tiny) model fine-tuned for classification of sentiment for short Russian texts. The problem is formulated as multiclass classification: `negative` vs `neutral` vs `positive`. ## Usage The function below estimates the sentiment of the given text: ```python # !pip install transformers sentencepiece --quiet import torch from transformers import AutoTokenizer, AutoModelForSequenceClassification model_checkpoint = 'cointegrated/rubert-tiny-sentiment-balanced' tokenizer = AutoTokenizer.from_pretrained(model_checkpoint) model = AutoModelForSequenceClassification.from_pretrained(model_checkpoint) if torch.cuda.is_available(): model.cuda() def get_sentiment(text, return_type='label'): """ Calculate sentiment of a text. `return_type` can be 'label', 'score' or 'proba' """ with torch.no_grad(): inputs = tokenizer(text, return_tensors='pt', truncation=True, padding=True).to(model.device) proba = torch.sigmoid(model(**inputs).logits).cpu().numpy()[0] if return_type == 'label': return model.config.id2label[proba.argmax()] elif return_type == 'score': return proba.dot([-1, 0, 1]) return proba text = 'Какая гадость эта ваша заливная рыба!' # classify the text print(get_sentiment(text, 'label')) # negative # score the text on the scale from -1 (very negative) to +1 (very positive) print(get_sentiment(text, 'score')) # -0.5894946306943893 # calculate probabilities of all labels print(get_sentiment(text, 'proba')) # [0.7870447 0.4947824 0.19755007] ``` ## Training We trained the model on [the datasets collected by Smetanin](https://github.com/sismetanin/sentiment-analysis-in-russian). We have converted all training data into a 3-class format and have up- and downsampled the training data to balance both the sources and the classes. The training code is available as [a Colab notebook](https://gist.github.com/avidale/e678c5478086c1d1adc52a85cb2b93e6). The metrics on the balanced test set are the following: | Source | Macro F1 | | ----------- | ----------- | | SentiRuEval2016_banks | 0.83 | | SentiRuEval2016_tele | 0.74 | | kaggle_news | 0.66 | | linis | 0.50 | | mokoron | 0.98 | | rureviews | 0.72 | | rusentiment | 0.67 |