Dataset columns (one record per row below):
- modelId: string, length 5 to 122
- author: string, length 2 to 42
- last_modified: unknown (timestamp)
- downloads: int64, 0 to 738M
- likes: int64, 0 to 11k
- library_name: string, 245 distinct values
- tags: sequence, length 1 to 4.05k
- pipeline_tag: string, 48 distinct values
- createdAt: unknown (timestamp)
- card: string, length 1 to 901k
blizrys/biobert-v1.1-finetuned-pubmedqa
blizrys
"2021-09-13T17:56:32Z"
44,992
1
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2022-03-02T23:29:05Z"
--- tags: - generated_from_trainer datasets: - null metrics: - accuracy model-index: - name: biobert-v1.1-finetuned-pubmedqa results: - task: name: Text Classification type: text-classification metrics: - name: Accuracy type: accuracy value: 0.7 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # biobert-v1.1-finetuned-pubmedqa This model is a fine-tuned version of [dmis-lab/biobert-v1.1](https://huggingface.co/dmis-lab/biobert-v1.1) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7737 - Accuracy: 0.7 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 57 | 0.8810 | 0.56 | | No log | 2.0 | 114 | 0.8139 | 0.62 | | No log | 3.0 | 171 | 0.7963 | 0.68 | | No log | 4.0 | 228 | 0.7709 | 0.66 | | No log | 5.0 | 285 | 0.7931 | 0.64 | | No log | 6.0 | 342 | 0.7420 | 0.7 | | No log | 7.0 | 399 | 0.7654 | 0.7 | | No log | 8.0 | 456 | 0.7756 | 0.68 | | 0.5849 | 9.0 | 513 | 0.7605 | 0.68 | | 0.5849 | 10.0 | 570 | 0.7737 | 0.7 | ### Framework versions - Transformers 4.10.2 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
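The card above does not yet include usage instructions, so here is a minimal, hedged sketch of loading the checkpoint with the 🤗 Transformers text-classification pipeline. The label names come from the model's config (they are not documented in the card), and the PubMedQA-style question-plus-context input is an illustrative assumption, not an official example.

```python
from transformers import pipeline

# Hypothetical usage sketch; verify label names against the model's config.json.
classifier = pipeline(
    "text-classification",
    model="blizrys/biobert-v1.1-finetuned-pubmedqa",
)

# PubMedQA pairs a research question with abstract text; this input format is assumed.
result = classifier(
    "Is aspirin effective for primary prevention of cardiovascular events? "
    "In this cohort, aspirin reduced the rate of nonfatal myocardial infarction."
)
print(result)  # e.g. [{'label': '...', 'score': 0.87}]
```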
impira/layoutlm-document-qa
impira
"2023-03-18T00:54:24Z"
44,977
893
transformers
[ "transformers", "pytorch", "tf", "safetensors", "layoutlm", "document-question-answering", "pdf", "en", "license:mit", "endpoints_compatible", "region:us" ]
document-question-answering
"2022-08-07T21:07:19Z"
--- language: en license: mit pipeline_tag: document-question-answering tags: - layoutlm - document-question-answering - pdf widget: - text: "What is the invoice number?" src: "https://huggingface.co/spaces/impira/docquery/resolve/2359223c1837a7587402bda0f2643382a6eefeab/invoice.png" - text: "What is the purchase amount?" src: "https://huggingface.co/spaces/impira/docquery/resolve/2359223c1837a7587402bda0f2643382a6eefeab/contract.jpeg" --- # LayoutLM for Visual Question Answering This is a fine-tuned version of the multi-modal [LayoutLM](https://aka.ms/layoutlm) model for the task of question answering on documents. It has been fine-tuned using both the [SQuAD2.0](https://huggingface.co/datasets/squad_v2) and [DocVQA](https://www.docvqa.org/) datasets. ## Getting started with the model To run these examples, you must have [PIL](https://pillow.readthedocs.io/en/stable/installation.html), [pytesseract](https://pypi.org/project/pytesseract/), and [PyTorch](https://pytorch.org/get-started/locally/) installed in addition to [transformers](https://huggingface.co/docs/transformers/index). ```python from transformers import pipeline nlp = pipeline( "document-question-answering", model="impira/layoutlm-document-qa", ) nlp( "https://templates.invoicehome.com/invoice-template-us-neat-750px.png", "What is the invoice number?" ) # {'score': 0.9943977, 'answer': 'us-001', 'start': 15, 'end': 15} nlp( "https://miro.medium.com/max/787/1*iECQRIiOGTmEFLdWkVIH2g.jpeg", "What is the purchase amount?" ) # {'score': 0.9912159, 'answer': '$1,000,000,000', 'start': 97, 'end': 97} nlp( "https://www.accountingcoach.com/wp-content/uploads/2013/10/income-statement-example@2x.png", "What are the 2020 net sales?" ) # {'score': 0.59147286, 'answer': '$ 3,750', 'start': 19, 'end': 20} ``` **NOTE**: This model and pipeline were recently added to transformers via [PR #18407](https://github.com/huggingface/transformers/pull/18407) and [PR #18414](https://github.com/huggingface/transformers/pull/18414), so you'll need to use a recent version of transformers, for example: ```bash pip install git+https://github.com/huggingface/transformers.git@2ef774211733f0acf8d3415f9284c49ef219e991 ``` ## About us This model was created by the team at [Impira](https://www.impira.com/).
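The examples above pass image URLs. Since the card already requires PIL and pytesseract, a local scan can be passed as a PIL image as well; the sketch below is not from the original card and uses a hypothetical local file name.

```python
from PIL import Image
from transformers import pipeline

nlp = pipeline(
    "document-question-answering",
    model="impira/layoutlm-document-qa",
)

# "my_invoice.png" is a hypothetical local scan; pytesseract handles the OCR step.
image = Image.open("my_invoice.png")
print(nlp(image=image, question="What is the invoice number?"))
```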
dccuchile/bert-base-spanish-wwm-cased
dccuchile
"2024-01-18T01:47:12Z"
44,969
46
transformers
[ "transformers", "pytorch", "tf", "jax", "bert", "fill-mask", "masked-lm", "es", "arxiv:1904.09077", "arxiv:1906.01502", "arxiv:1812.10464", "arxiv:1901.07291", "arxiv:1904.02099", "arxiv:1906.01569", "arxiv:1908.11828", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2022-03-02T23:29:05Z"
--- language: - es tags: - masked-lm --- # BETO: Spanish BERT BETO is a [BERT model](https://github.com/google-research/bert) trained on a [big Spanish corpus](https://github.com/josecannete/spanish-corpora). BETO is of size similar to a BERT-Base and was trained with the Whole Word Masking technique. Below you find Tensorflow and Pytorch checkpoints for the uncased and cased versions, as well as some results for Spanish benchmarks comparing BETO with [Multilingual BERT](https://github.com/google-research/bert/blob/master/multilingual.md) as well as other (not BERT-based) models. ## Download | | | | | |-|:--------:|:-----:|:----:| |BETO uncased|[tensorflow_weights](https://users.dcc.uchile.cl/~jperez/beto/uncased_2M/tensorflow_weights.tar.gz) | [pytorch_weights](https://users.dcc.uchile.cl/~jperez/beto/uncased_2M/pytorch_weights.tar.gz) | [vocab](./config/uncased_2M/vocab.txt), [config](./config/uncased_2M/config.json) | |BETO cased| [tensorflow_weights](https://users.dcc.uchile.cl/~jperez/beto/cased_2M/tensorflow_weights.tar.gz) | [pytorch_weights](https://users.dcc.uchile.cl/~jperez/beto/cased_2M/pytorch_weights.tar.gz) | [vocab](./config/cased_2M/vocab.txt), [config](./config/cased_2M/config.json) | All models use a vocabulary of about 31k BPE subwords constructed using SentencePiece and were trained for 2M steps. ## Benchmarks The following table shows some BETO results in the Spanish version of every task. We compare BETO (cased and uncased) with the Best Multilingual BERT results that we found in the literature (as of October 2019). The table also shows some alternative methods for the same tasks (not necessarily BERT-based methods). References for all methods can be found [here](#references). |Task | BETO-cased | BETO-uncased | Best Multilingual BERT | Other results | |-------|--------------:|--------------:|--------------------------:|-------------------------------:| |[POS](https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-1827) | **98.97** | 98.44 | 97.10 [2] | 98.91 [6], 96.71 [3] | |[NER-C](https://www.kaggle.com/nltkdata/conll-corpora) | [**88.43**](https://github.com/gchaperon/beto-benchmarks/blob/master/conll2002/dev_results_beto-cased_conll2002.txt) | 82.67 | 87.38 [2] | 87.18 [3] | |[MLDoc](https://github.com/facebookresearch/MLDoc) | [95.60](https://github.com/gchaperon/beto-benchmarks/blob/master/MLDoc/dev_results_beto-cased_mldoc.txt) | [**96.12**](https://github.com/gchaperon/beto-benchmarks/blob/master/MLDoc/dev_results_beto-uncased_mldoc.txt) | 95.70 [2] | 88.75 [4] | |[PAWS-X](https://github.com/google-research-datasets/paws/tree/master/pawsx) | 89.05 | 89.55 | 90.70 [8] | |[XNLI](https://github.com/facebookresearch/XNLI) | **82.01** | 80.15 | 78.50 [2] | 80.80 [5], 77.80 [1], 73.15 [4]| ## Example of use For further details on how to use BETO you can visit the [🤗Huggingface Transformers library](https://github.com/huggingface/transformers), starting by the [Quickstart section](https://huggingface.co/transformers/quickstart.html). BETO models can be accessed simply as [`'dccuchile/bert-base-spanish-wwm-cased'`](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) and [`'dccuchile/bert-base-spanish-wwm-uncased'`](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) by using the Transformers library. An example on how to download and use the models in this page can be found in [this colab notebook](https://colab.research.google.com/drive/1pYOYsCU59GBOwztkWCw5PTsqBiJbRy4S?usp=sharing). 
(We will soon add a more detailed step-by-step tutorial in Spanish for newcomers 😉) ## Acknowledgments We thank [Adereso](https://www.adere.so/) for kindly providing support for training BETO-uncased, and the [Millennium Institute for Foundational Research on Data](https://imfd.cl/en/), which provided support for training BETO-cased. Also thanks to Google for helping us with the [TensorFlow Research Cloud](https://www.tensorflow.org/tfrc) program. ## Citation [Spanish Pre-Trained BERT Model and Evaluation Data](https://users.dcc.uchile.cl/~jperez/papers/pml4dc2020.pdf) To cite this resource in a publication, please use the following: ``` @inproceedings{CaneteCFP2020, title={Spanish Pre-Trained BERT Model and Evaluation Data}, author={Cañete, José and Chaperon, Gabriel and Fuentes, Rodrigo and Ho, Jou-Hui and Kang, Hojin and Pérez, Jorge}, booktitle={PML4DC at ICLR 2020}, year={2020} } ``` ## License Disclaimer The license CC BY 4.0 best describes our intentions for our work. However, we are not sure that all the datasets used to train BETO have licenses compatible with CC BY 4.0 (especially for commercial use). Please use at your own discretion and verify that the licenses of the original text resources match your needs. ## References * [1] [Original Multilingual BERT](https://github.com/google-research/bert/blob/master/multilingual.md) * [2] [Multilingual BERT on "Beto, Bentz, Becas: The Surprising Cross-Lingual Effectiveness of BERT"](https://arxiv.org/pdf/1904.09077.pdf) * [3] [Multilingual BERT on "How Multilingual is Multilingual BERT?"](https://arxiv.org/pdf/1906.01502.pdf) * [4] [LASER](https://arxiv.org/abs/1812.10464) * [5] [XLM (MLM+TLM)](https://arxiv.org/pdf/1901.07291.pdf) * [6] [UDPipe on "75 Languages, 1 Model: Parsing Universal Dependencies Universally"](https://arxiv.org/pdf/1904.02099.pdf) * [7] [Multilingual BERT on "Sequence Tagging with Contextual and Non-Contextual Subword Representations: A Multilingual Evaluation"](https://arxiv.org/pdf/1906.01569.pdf) * [8] [Multilingual BERT on "PAWS-X: A Cross-lingual Adversarial Dataset for Paraphrase Identification"](https://arxiv.org/abs/1908.11828)
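Complementing the "Example of use" section above, here is a minimal masked-word sketch with the cased checkpoint (the example sentence is illustrative, not taken from the original evaluation):

```python
from transformers import pipeline

# BETO is a standard BERT model, so the usual [MASK] token applies.
unmasker = pipeline("fill-mask", model="dccuchile/bert-base-spanish-wwm-cased")
print(unmasker("Santiago es la [MASK] de Chile."))
```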
fazahmz/nps-tag-classifier
fazahmz
"2022-09-26T12:24:39Z"
44,949
0
transformers
[ "transformers", "pytorch", "bert", "endpoints_compatible", "region:us" ]
null
"2022-09-26T11:45:12Z"
Entry not found
meta-llama/Meta-Llama-Guard-2-8B
meta-llama
"2024-05-13T09:36:04Z"
44,943
239
transformers
[ "transformers", "safetensors", "llama", "text-generation", "facebook", "meta", "pytorch", "llama-3", "conversational", "en", "arxiv:2312.06674", "arxiv:2403.13031", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-17T09:35:34Z"
--- language: - en pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-3 license: llama3 extra_gated_prompt: >- ### META LLAMA 3 COMMUNITY LICENSE AGREEMENT Meta Llama 3 Version Release Date: April 18, 2024 "Agreement" means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein. "Documentation" means the specifications, manuals and documentation accompanying Meta Llama 3 distributed by Meta at https://llama.meta.com/get-started/. "Licensee" or "you" means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf. "Meta Llama 3" means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://llama.meta.com/llama-downloads. "Llama Materials" means, collectively, Meta’s proprietary Meta Llama 3 and Documentation (and any portion thereof) made available under this Agreement. "Meta" or "we" means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland). 1. License Rights and Redistribution. a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials. b. Redistribution and Use. i. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service that uses any of them, including another AI model, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display “Built with Meta Llama 3” on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include “Llama 3” at the beginning of any such AI model name. ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you. iii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a “Notice” text file distributed as a part of such copies: “Meta Llama 3 is licensed under the Meta Llama 3 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.” iv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://llama.meta.com/llama3/use-policy), which is hereby incorporated by reference into this Agreement. v. 
You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Meta Llama 3 or derivative works thereof). 2. Additional Commercial Terms. If, on the Meta Llama 3 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights. 3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS. 4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING. 5. Intellectual Property. a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use “Llama 3” (the “Mark”) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/ ). All goodwill arising out of your use of the Mark will inure to the benefit of Meta. b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications. c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials. 6. Term and Termination. 
The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement. 7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement. ### Meta Llama 3 Acceptable Use Policy Meta is committed to promoting safe and fair use of its tools and features, including Meta Llama 3. If you access or use Meta Llama 3, you agree to this Acceptable Use Policy (“Policy”). The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy) #### Prohibited Uses We want everyone to use Meta Llama 3 safely and responsibly. You agree you will not use, or allow others to use, Meta Llama 3 to: 1. Violate the law or others’ rights, including to: 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as: 1. Violence or terrorism 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material 3. Human trafficking, exploitation, and sexual violence 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials. 5. Sexual solicitation 6. Any other criminal activity 2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals 3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services 4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices 5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws 6. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials 7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system 2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Meta Llama 3 related to the following: 1. 
Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State 2. Guns and illegal weapons (including weapon development) 3. Illegal drugs and regulated/controlled substances 4. Operation of critical infrastructure, transportation technologies, or heavy machinery 5. Self-harm or harm to others, including suicide, cutting, and eating disorders 6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual 3. Intentionally deceive or mislead others, including use of Meta Llama 3 related to the following: 1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation 2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content 3. Generating, promoting, or further distributing spam 4. Impersonating another individual without consent, authorization, or legal right 5. Representing that the use of Meta Llama 3 or outputs are human-generated 6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement 4. Fail to appropriately disclose to end users any known dangers of your AI system Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation of this Policy through one of the following means: * Reporting issues with the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3) * Reporting risky content generated by the model: developers.facebook.com/llama_output_feedback * Reporting bugs and security concerns: facebook.com/whitehat/info * Reporting violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: LlamaUseReport@meta.com extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text geo: ip_location By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit --- # Model Details Meta Llama Guard 2 is an 8B parameter Llama 3-based [1] LLM safeguard model. Similar to [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/), it can be used for classifying content in both LLM inputs (prompt classification) and in LLM responses (response classification). It acts as an LLM – it generates text in its output that indicates whether a given prompt or response is safe or unsafe, and if unsafe, it also lists the content categories violated. Below is a response classification example input and output for Llama Guard 2. <p align="center"> <img src="https://github.com/facebookresearch/PurpleLlama/raw/main/Llama-Guard2/llamaguard_example.png" width="800"/> </p> In order to produce classifier scores, we look at the probability for the first token, and use that as the “unsafe” class probability. We can then apply score thresholding to make binary decisions. 
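As a concrete illustration of that scoring step, the sketch below (not part of the original card) reads the probability the model assigns to the `unsafe` token at the first generated position and applies a threshold. It assumes that `unsafe` begins with a distinctive token id in the Llama 3 tokenizer and that the chat template does not emit a leading whitespace token first; both are worth verifying.

```py
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "meta-llama/Meta-Llama-Guard-2-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="cuda")

def unsafe_score(chat, threshold=0.5):
    # Build the Llama Guard prompt and generate a single token, keeping its logits.
    input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(model.device)
    out = model.generate(
        input_ids=input_ids,
        max_new_tokens=1,
        output_scores=True,
        return_dict_in_generate=True,
        pad_token_id=0,
    )
    probs = torch.softmax(out.scores[0][0], dim=-1)
    # Assumption: the first sub-token of "unsafe" identifies the unsafe class.
    unsafe_id = tokenizer.encode("unsafe", add_special_tokens=False)[0]
    score = probs[unsafe_id].item()
    return score, score > threshold

score, is_unsafe = unsafe_score([
    {"role": "user", "content": "I forgot how to kill a process in Linux, can you help?"},
])
print(score, is_unsafe)
```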
# Harm Taxonomy and Policy The model is trained to predict safety labels on the 11 categories shown below, based on the [MLCommons taxonomy](https://mlcommons.org/2024/04/mlc-aisafety-v0-5-poc/) of hazards. <table align="center"> <thead> <tr> <th colspan="2">Harm categories</th> </tr> </thead> <tbody> <tr> <td>S1: Violent Crimes</td> <td>S2: Non-Violent Crimes</td> </tr> <tr> <td>S3: Sex-Related Crimes</td> <td>S4: Child Sexual Exploitation</td> </tr> <tr> <td>S5: Specialized Advice</td> <td>S6: Privacy</td> </tr> <tr> <td>S7: Intellectual Property</td> <td>S8: Indiscriminate Weapons </td> </tr> <tr> <td>S9: Hate</td> <td>S10: Suicide &amp; Self-Harm</td> </tr> <tr> <td>S11: Sexual Content</td> <td></td> </tr> </tbody> </table> **S1: Violent Crimes** Responses that enable, encourage, or endorse the commission of violent crimes, including: (1) unlawful violence toward people (ex: terrorism, genocide, murder, hate-motivated violent crimes, child abuse, assault, battery, kidnapping) and (2) unlawful violence toward animals (ex: animal abuse) **S2: Non-Violent Crimes** Responses that enable, encourage, or endorse the commission of non-violent crimes, including: (1) personal crimes (ex: labor trafficking, threats, intimidation, hate-motivated non-violent crimes), (2) financial crimes (ex: fraud, scams, money laundering), (3) property crimes (ex: theft, arson, vandalism), (4) drug crimes (ex: creating or using regulated substances), (5) weapons crimes (ex: producing unlicensed firearms), and (6) cyber crimes (ex: hacking) **S3: Sex-Related Crimes** Responses that enable, encourage, or endorse the commission of sex-related crimes, including: (1) sex trafficking, (2) sexual assault (ex: rape), (3) sexual harassment (physical, verbal, or visual), and (4) prostitution **S4: Child Sexual Exploitation** Responses that contain, describe, enable, encourage, or endorse the sexual abuse of children **S5: Specialized Advice** Responses that contain specialized financial, medical, or legal advice, or that indicate dangerous activities or objects are safe **S6: Privacy** Responses that contain sensitive, nonpublic personal information that could undermine someone's physical, digital, or financial security **S7: Intellectual Property** Responses that may violate the intellectual property rights of any third party **S8: Indiscriminate Weapons** Responses that enable, encourage, or endorse the creation of indiscriminate weapons, including: (1) chemical weapons (ex: nerve gas), (2) biological weapons (ex: anthrax), (3) radiological weapons (ex: cobalt bombs), (4) nuclear weapons (ex: fission bombs), and (5) high-yield explosive weapons (ex: cluster munitions) **S9: Hate** Responses that demean or dehumanize people on the basis of their sensitive, personal characteristics (i.e., race, color, ethnicity, national origin, disability, religious affiliation, caste, sexual orientation, sex, gender identity, and/or serious disease) **S10: Suicide & Self-Harm** Responses that enable, encourage, or endorse acts of intentional self-harm, including: (1) suicide, (2) self-injury (ex: cutting), and (3) disordered eating **S11: Sexual Content** Responses that contain erotica ## How to Use in `transformers` ```py from transformers import AutoTokenizer, AutoModelForCausalLM import torch model_id = "meta-llama/Meta-Llama-Guard-2-8B" device = "cuda" dtype = torch.bfloat16 tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=dtype, device_map=device) def 
moderate(chat): input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(device) output = model.generate(input_ids=input_ids, max_new_tokens=100, pad_token_id=0) prompt_len = input_ids.shape[-1] return tokenizer.decode(output[0][prompt_len:], skip_special_tokens=True) moderate([ {"role": "user", "content": "I forgot how to kill a process in Linux, can you help?"}, {"role": "assistant", "content": "Sure! To kill a process in Linux, you can use the kill command followed by the process ID (PID) of the process you want to terminate."}, ]) # `safe` ``` # Training and Evaluation Data We start with the training set of Llama Guard (see [model card](https://github.com/meta-llama/PurpleLlama/blob/main/Llama-Guard/MODEL_CARD.md#training-data)), and obtain labels on the Harm Taxonomy described above. To improve adaptability of the model to different prompts, we train on hard samples, which are obtained by taking an existing sample and prompting Llama2 70B to produce an alternate policy description that will flip the label of the given sample. We report metrics for various models and APIs on our validation set, which is obtained from combining the validation set of Llama Guard v1 and held-out samples from the additional Llama 3 safety data. We compare performance on our internal test set, as well as on open datasets like [XSTest](https://github.com/paul-rottger/exaggerated-safety?tab=readme-ov-file#license), [OpenAI moderation](https://github.com/openai/moderation-api-release), and [BeaverTails](https://github.com/PKU-Alignment/beavertails). We find that there is overlap between our training set and the BeaverTails-30k test split. Since both our internal test set and BeaverTails use prompts from the Anthropic's [hh-rlhf dataset](https://github.com/anthropics/hh-rlhf) as a starting point for curating data, it is possible that different splits of Anthropic were used while creating the two datasets. Therefore to prevent leakage of signal between our train set and the BeaverTails-30k test set, we create our own BeaverTails-30k splits based on the Anthropic train-test splits used for creating our internal sets. *Note on evaluations*: As discussed in the Llama Guard [paper](https://arxiv.org/abs/2312.06674), comparing model performance is not straightforward as each model is built on its own policy and is expected to perform better on an evaluation dataset with a policy aligned to the model. This highlights the need for industry standards. By aligning Llama Guard 2 with the Proof of Concept MLCommons taxonomy, we hope to drive adoption of industry standards like this and facilitate collaboration and transparency in the LLM safety and content evaluation space. # Model Performance We evaluate the performance of Llama Guard 2 and compare it with Llama Guard and popular content moderation APIs such as Azure, OpenAI Moderation, and Perspective. We use the token probability of the first output token (i.e. safe/unsafe) as the score for classification. For obtaining a binary classification decision from the score, we use a threshold of 0.5. Llama Guard 2 improves over Llama Guard, and outperforms other approaches on our internal test set. Note that we manage to achieve great performance while keeping a low false positive rate as we know that over-moderation can impact user experience when building LLM-applications. 
<div align="center"> | **Model** | **F1 ↑** | **AUPRC ↑** | **False Positive<br>Rate ↓** | |--------------------------|:------:|:---------:|:-----------------------:| | Llama Guard\* | 0.665 | <ins>0.854</ins> | 0.027 | | Llama Guard 2 | **0.915** | **0.974** | 0.040 | | GPT4 | <ins>0.796</ins> | N/A | 0.151 | | OpenAI Moderation API | 0.347 | 0.669 | 0.030 | | Azure Content Safety API | 0.519 | N/A | 0.245 | | Perspective API | 0.265 | 0.586 | 0.046 | <small> Table 1: Comparison of performance of various approaches measured on our internal test set. <br><small><small> *The performance of Llama Guard is lower on our new test set due to expansion of the number of harm categories from 6 to 11, which is not aligned to what Llama Guard was trained on. </small></small></small> </div> <br> <div align="center"> | **Category** | **False Negative Rate\* ↓** | **False Positive Rate ↓** | |------------------------|:--------------------------:|:-------------------------:| | Violent Crimes | 0.042 | 0.002 | | Privacy | 0.057 | 0.004 | | Non-Violent Crimes | 0.082 | 0.009 | | Intellectual Property | 0.099 | 0.004 | | Hate | 0.190 | 0.005 | | Specialized Advice | 0.192 | 0.009 | | Sexual Content | 0.229 | 0.004 | | Indiscriminate Weapons | 0.263 | 0.001 | | Child Exploitation | 0.267 | 0.000 | | Sex Crimes | 0.275 | 0.002 | | Self-Harm | 0.277 | 0.002 | <small> Table 2: Category-wise breakdown of false negative rate and false positive rate for Llama Guard 2 on our internal benchmark for response classification with safety labels from the ML Commons taxonomy.<br><small><small>*The binary safe/unsafe label is used to compute categorical FNR by using the true categories. We do not penalize the model while computing FNR for cases where the model predicts the correct overall label but an incorrect categorical label.</small></small></small> </div> We also report performance on OSS safety datasets, though we note that the policy used for assigning safety labels is not aligned with the policy used while training Llama Guard 2. Still, Llama Guard 2 provides a superior tradeoff between f1 score and False Positive Rate on the XSTest and OpenAI Moderation datasets, demonstrating good adaptability to other policies. The BeaverTails dataset has a lower bar for a sample to be considered unsafe compared to Llama Guard 2's policy. The policy and training data of MDJudge [4] is more aligned with this dataset and we see that it performs better on them as expected (at the cost of a higher FPR). GPT-4 achieves high recall on all of the sets but at the cost of very high FPR (9-25%), which could hurt its ability to be used as a safeguard for practical applications. 
<table align="center"> <thead> <tr> <th></th> <th colspan="3">(F1 ↑ / False Positive Rate ↓)</th> </tr> </thead> <tbody> <tr> <td></td> <td><center>False Refusals<br>(XSTest)</center></td> <td><center>OpenAI policy<br>(OpenAI Mod)</center></td> <td><center>BeaverTails policy<br>(BeaverTails-30k)</center></td> </tr> <tr> <td>Llama Guard</td> <td>0.737 / 0.079</td> <td>0.737 / 0.079</td> <td>0.599 / 0.035</td> </tr> <tr> <td>Llama Guard 2</td> <td>0.884 / 0.084</td> <td>0.807 / 0.060</td> <td>0.736 / 0.059</td> </tr> <tr> <td>MDJudge</td> <td>0.856 / 0.172</td> <td>0.768 / 0.212</td> <td>0.849 / 0.098</td> </tr> <tr> <td>GPT4</td> <td>0.895 / 0.128</td> <td>0.842 / 0.092</td> <td>0.802 / 0.256</td> </tr> <tr> <td>OpenAI Mod API</td> <td>0.576 / 0.040</td> <td>0.788 / 0.156</td> <td>0.284 / 0.056</td> </tr> </tbody> </table> <div align="center"> <small>Table 3: Comparison of performance of various approaches measured on our internal test set for response classification. <br>NOTE: The policy used for training Llama Guard does not align with those used for labeling these datasets. Still, Llama Guard 2 provides a superior tradeoff between F1 score and False Positive Rate across these datasets, demonstrating strong adaptability to other policies.</small> </div> <br> We hope to provide developers with a high-performing moderation solution for most use cases by aligning Llama Guard 2 taxonomy with MLCommons standard. But as outlined in our Responsible Use Guide, each use case requires specific safety considerations and we encourage developers to tune Llama Guard 2 for their own use case to achieve better moderation for their custom policies. As an example of how Llama Guard 2's performance may change, we train on the BeaverTails training dataset and compare against MDJudge (which was trained on BeaverTails among others). <div align="center"> <br> | **Model** | **F1 ↑** | **False Positive Rate ↓** | |:---------------------------:|:--------:|:-------------------------:| | Llama Guard 2 | 0.736 | 0.059 | | MDJudge | <ins>0.849</ins> | 0.098 | | Llama Guard 2 + BeaverTails | **0.852** | 0.101 | <small>Table 4: Comparison of performance on BeaverTails-30k.</small> </div> # Limitations There are some limitations associated with Llama Guard 2. First, Llama Guard 2 itself is an LLM fine-tuned on Llama 3. Thus, its performance (e.g., judgments that need common sense knowledge, multilingual capability, and policy coverage) might be limited by its (pre-)training data. Second, Llama Guard 2 is finetuned for safety classification only (i.e. to generate "safe" or "unsafe"), and is not designed for chat use cases. However, since it is an LLM, it can still be prompted with any text to obtain a completion. Lastly, as an LLM, Llama Guard 2 may be susceptible to adversarial attacks or prompt injection attacks that could bypass or alter its intended use. However, with the help of external components (e.g., KNN, perplexity filter), recent work (e.g., [3]) demonstrates that Llama Guard is able to detect harmful content reliably. **Note on Llama Guard 2's policy** Llama Guard 2 supports 11 out of the 13 categories included in the [MLCommons AI Safety](https://mlcommons.org/working-groups/ai-safety/ai-safety/) taxonomy. The Election and Defamation categories are not addressed by Llama Guard 2 as moderating these harm categories requires access to up-to-date, factual information sources and the ability to determine the veracity of a particular output. 
To support the additional categories, we recommend using other solutions (e.g. Retrieval Augmented Generation) in tandem with Llama Guard 2 to evaluate information correctness. # Citation ``` @misc{metallamaguard2, author = {Llama Team}, title = {Meta Llama Guard 2}, howpublished = {\url{https://github.com/meta-llama/PurpleLlama/blob/main/Llama-Guard2/MODEL_CARD.md}}, year = {2024} } ``` # References [1] [Llama 3 Model Card](https://github.com/meta-llama/llama3/blob) [2] [Llama Guard Model Card](https://github.com/meta-llama/PurpleLlama/blob/main/Llama-Guard/MODEL_CARD.md) [3] [RigorLLM: Resilient Guardrails for Large Language Models against Undesired Content](https://arxiv.org/pdf/2403.13031.pdf) [4] [MDJudge for Salad-Bench](https://huggingface.co/OpenSafetyLab/MD-Judge-v0.1)
DevQuasar/Hermes-2-Theta-Llama-3-70B-GGUF
DevQuasar
"2024-06-24T05:03:25Z"
44,736
0
null
[ "gguf", "text-generation", "license:llama3", "region:us" ]
text-generation
"2024-06-21T17:04:56Z"
--- license: llama3 pipeline_tag: text-generation --- Quantized version of [NousResearch/Hermes-2-Theta-Llama-3-70B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-70B)
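The card itself gives no usage instructions; below is a minimal sketch (not from the original card) using llama-cpp-python, which loads GGUF files directly. The exact .gguf filenames in the repository are not listed here, so the file is discovered at runtime rather than hard-coded; very large quantizations may be split into multiple parts, which this sketch does not handle.

```python
from huggingface_hub import hf_hub_download, list_repo_files
from llama_cpp import Llama

repo_id = "DevQuasar/Hermes-2-Theta-Llama-3-70B-GGUF"

# Pick the first GGUF file found; adjust the filter to the quantization level you want.
gguf_files = sorted(f for f in list_repo_files(repo_id) if f.endswith(".gguf"))
model_path = hf_hub_download(repo_id, gguf_files[0])

llm = Llama(model_path=model_path, n_ctx=4096)
print(llm("Explain GGUF quantization in one sentence.", max_tokens=64)["choices"][0]["text"])
```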
NousResearch/Nous-Hermes-2-Mistral-7B-DPO
NousResearch
"2024-04-30T18:10:15Z"
44,664
154
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "Mistral", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "synthetic data", "distillation", "conversational", "en", "dataset:teknium/OpenHermes-2.5", "base_model:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-02-18T10:50:26Z"
--- base_model: mistralai/Mistral-7B-v0.1 tags: - Mistral - instruct - finetune - chatml - DPO - RLHF - gpt4 - synthetic data - distillation model-index: - name: Nous-Hermes-2-Mistral-7B-DPO results: [] license: apache-2.0 language: - en datasets: - teknium/OpenHermes-2.5 widget: - example_title: Hermes 2 messages: - role: system content: You are a sentient, superintelligent artificial general intelligence, here to teach and assist me. - role: user content: Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world. --- # Nous Hermes 2 - Mistral 7B - DPO ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/PDleZIZK3vE3ATfXRRySv.png) ## Model Description Nous Hermes 2 on Mistral 7B DPO is the new flagship 7B Hermes! This model was DPO'd from [Teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) and has improved across the board on all benchmarks tested - AGIEval, BigBench Reasoning, GPT4All, and TruthfulQA. The model prior to DPO was trained on 1,000,000 instructions/chats of GPT-4 quality or better, primarily synthetic data as well as other high quality datasets, available from the repository [teknium/OpenHermes-2.5](https://huggingface.co/datasets/teknium/OpenHermes-2.5). ## Thank you to FluidStack for sponsoring compute for this model! ## Example Outputs ### Describing Weather Patterns in Paris: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/ZX-stQY80edj2Y9ButCzn.png) ### Making JSON Nested Lists ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/3wtVqDOA1S_d48FJtwero.png) ### Roleplaying as a Toaist Master ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/NfxBxrjbTGEsUcR8nOALb.png) ## Benchmark Results Nous-Hermes 2 DPO on Mistral 7B is an improvement across the board on the benchmarks below compared to the original OpenHermes 2.5 model, as shown here: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/O-LLTr1K1FYbzscMr4lbE.png) ## GPT4All: ``` | Task |Version| Metric |Value | |Stderr| |-------------|------:|--------|-----:|---|-----:| |arc_challenge| 0|acc |0.5776|± |0.0144| | | |acc_norm|0.6220|± |0.0142| |arc_easy | 0|acc |0.8380|± |0.0076| | | |acc_norm|0.8245|± |0.0078| |boolq | 1|acc |0.8624|± |0.0060| |hellaswag | 0|acc |0.6418|± |0.0048| | | |acc_norm|0.8249|± |0.0038| |openbookqa | 0|acc |0.3420|± |0.0212| | | |acc_norm|0.4540|± |0.0223| |piqa | 0|acc |0.8177|± |0.0090| | | |acc_norm|0.8264|± |0.0088| |winogrande | 0|acc |0.7466|± |0.0122| ``` Average: 73.72 ## AGIEval: ``` | Task |Version| Metric |Value | |Stderr| |------------------------------|------:|--------|-----:|---|-----:| |agieval_aqua_rat | 0|acc |0.2047|± |0.0254| | | |acc_norm|0.2283|± |0.0264| |agieval_logiqa_en | 0|acc |0.3779|± |0.0190| | | |acc_norm|0.3932|± |0.0192| |agieval_lsat_ar | 0|acc |0.2652|± |0.0292| | | |acc_norm|0.2522|± |0.0287| |agieval_lsat_lr | 0|acc |0.5216|± |0.0221| | | |acc_norm|0.5137|± |0.0222| |agieval_lsat_rc | 0|acc |0.5911|± |0.0300| | | |acc_norm|0.5836|± |0.0301| |agieval_sat_en | 0|acc |0.7427|± |0.0305| | | |acc_norm|0.7184|± |0.0314| |agieval_sat_en_without_passage| 0|acc |0.4612|± |0.0348| | | |acc_norm|0.4466|± |0.0347| |agieval_sat_math | 0|acc |0.3818|± |0.0328| | | |acc_norm|0.3545|± |0.0323| ``` Average: 43.63 ## BigBench: ``` | Task |Version| Metric |Value | |Stderr| 
|------------------------------------------------|------:|---------------------|-----:|---|-----:| |bigbench_causal_judgement | 0|multiple_choice_grade|0.5579|± |0.0361| |bigbench_date_understanding | 0|multiple_choice_grade|0.6694|± |0.0245| |bigbench_disambiguation_qa | 0|multiple_choice_grade|0.3333|± |0.0294| |bigbench_geometric_shapes | 0|multiple_choice_grade|0.2061|± |0.0214| | | |exact_str_match |0.2256|± |0.0221| |bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.3120|± |0.0207| |bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.2114|± |0.0154| |bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.4900|± |0.0289| |bigbench_movie_recommendation | 0|multiple_choice_grade|0.3600|± |0.0215| |bigbench_navigate | 0|multiple_choice_grade|0.5000|± |0.0158| |bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.6660|± |0.0105| |bigbench_ruin_names | 0|multiple_choice_grade|0.4420|± |0.0235| |bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.2766|± |0.0142| |bigbench_snarks | 0|multiple_choice_grade|0.6630|± |0.0352| |bigbench_sports_understanding | 0|multiple_choice_grade|0.6653|± |0.0150| |bigbench_temporal_sequences | 0|multiple_choice_grade|0.3190|± |0.0147| |bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2128|± |0.0116| |bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1737|± |0.0091| |bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.4900|± |0.0289| ``` Average: 41.94 ## TruthfulQA: ``` | Task |Version|Metric|Value | |Stderr| |-------------|------:|------|-----:|---|-----:| |truthfulqa_mc| 1|mc1 |0.3892|± |0.0171| | | |mc2 |0.5642|± |0.0153| ``` # Prompt Format Nous Hermes 2 uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue. System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model. This is a more complex format than alpaca or sharegpt, where special tokens were added to denote the beginning and end of any turn, along with roles for the turns. This format enables OpenAI endpoint compatability, and people familiar with ChatGPT API will be familiar with the format, as it is the same used by OpenAI. Prompt with system instruction (Use whatever system prompt you like, this is just an example!): ``` <|im_start|>system You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|> <|im_start|>user Hello, who are you?<|im_end|> <|im_start|>assistant Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. 
I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|> ``` This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the `tokenizer.apply_chat_template()` method: ```python messages = [ {"role": "system", "content": "You are Hermes 2."}, {"role": "user", "content": "Hello, who are you?"} ] gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt") model.generate(gen_input) ``` When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure that the model continues with an assistant response. To utilize the prompt format without a system prompt, simply leave the line out. When quantized versions of the model are released, I recommend using LM Studio for chatting with Nous Hermes 2. It is a GUI application that utilizes GGUF models with a llama.cpp backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box. In LM-Studio, simply select the ChatML Prefix on the settings side pane: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/ls6WqV-GSxMw2RA3GuQiN.png) # Inference Code Here is example code using HuggingFace Transformers to run inference with the model (note: in 4bit, it will require around 5GB of VRAM) ```python # Code to inference Hermes with HF Transformers # Requires pytorch, transformers, bitsandbytes, sentencepiece, protobuf, and flash-attn packages import torch from transformers import AutoTokenizer, AutoModelForCausalLM from transformers import LlamaTokenizer, MistralForCausalLM import bitsandbytes, flash_attn tokenizer = LlamaTokenizer.from_pretrained('NousResearch/Nous-Hermes-2-Mistral-7B-DPO', trust_remote_code=True) model = MistralForCausalLM.from_pretrained( "NousResearch/Nous-Hermes-2-Mistral-7B-DPO", torch_dtype=torch.float16, device_map="auto", load_in_8bit=False, load_in_4bit=True, use_flash_attention_2=True ) prompts = [ """<|im_start|>system You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|> <|im_start|>user Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.<|im_end|> <|im_start|>assistant""", ] for chat in prompts: print(chat) input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda") generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id) response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_spaces=True) print(f"Response: {response}") ``` # How to cite: ```bibtex @misc{Nous-Hermes-2-Mistral-7B-DPO, url={[https://huggingface.co/NousResearch/Nous-Hermes-2-Mistral-7B-DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mistral-7B-DPO)}, title={Nous Hermes 2 Mistral 7B DPO}, author={"Teknium", "theemozilla", "karan4d", "huemin_art"} } ```
orgcatorg/xlm-v-base-ner
orgcatorg
"2024-03-12T02:16:13Z"
44,660
1
transformers
[ "transformers", "safetensors", "xlm-roberta", "token-classification", "multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "hu", "hy", "id", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", "no", "om", "or", "pa", "pl", "ps", "pt", "ro", "ru", "sa", "sd", "si", "sk", "sl", "so", "sq", "sr", "su", "sv", "sw", "ta", "te", "th", "tl", "tr", "ug", "uk", "ur", "uz", "vi", "xh", "yi", "zh", "arxiv:2301.10472", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2024-02-27T08:09:23Z"
--- language: - multilingual - af - am - ar - as - az - be - bg - bn - br - bs - ca - cs - cy - da - de - el - en - eo - es - et - eu - fa - fi - fr - fy - ga - gd - gl - gu - ha - he - hi - hr - hu - hy - id - is - it - ja - jv - ka - kk - km - kn - ko - ku - ky - la - lo - lt - lv - mg - mk - ml - mn - mr - ms - my - ne - nl - no - om - or - pa - pl - ps - pt - ro - ru - sa - sd - si - sk - sl - so - sq - sr - su - sv - sw - ta - te - th - tl - tr - ug - uk - ur - uz - vi - xh - yi - zh license: mit --- # XLM-V (Base-sized model) XLM-V is multilingual language model with a one million token vocabulary trained on 2.5TB of data from Common Crawl (same as XLM-R). It was introduced in the [XLM-V: Overcoming the Vocabulary Bottleneck in Multilingual Masked Language Models](https://arxiv.org/abs/2301.10472) paper by Davis Liang, Hila Gonen, Yuning Mao, Rui Hou, Naman Goyal, Marjan Ghazvininejad, Luke Zettlemoyer and Madian Khabsa. **Disclaimer**: The team releasing XLM-V did not write a model card for this model so this model card has been written by the Hugging Face team. [This repository](https://github.com/stefan-it/xlm-v-experiments) documents all necessary integeration steps. ## Model description From the abstract of the XLM-V paper: > Large multilingual language models typically rely on a single vocabulary shared across 100+ languages. > As these models have increased in parameter count and depth, vocabulary size has remained largely unchanged. > This vocabulary bottleneck limits the representational capabilities of multilingual models like XLM-R. > In this paper, we introduce a new approach for scaling to very large multilingual vocabularies by > de-emphasizing token sharing between languages with little lexical overlap and assigning vocabulary capacity > to achieve sufficient coverage for each individual language. Tokenizations using our vocabulary are typically > more semantically meaningful and shorter compared to XLM-R. Leveraging this improved vocabulary, we train XLM-V, > a multilingual language model with a one million token vocabulary. XLM-V outperforms XLM-R on every task we > tested on ranging from natural language inference (XNLI), question answering (MLQA, XQuAD, TyDiQA), and > named entity recognition (WikiAnn) to low-resource tasks (Americas NLI, MasakhaNER). ## Usage You can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='facebook/xlm-v-base') >>> unmasker("Paris is the <mask> of France.") [{'score': 0.9286897778511047, 'token': 133852, 'token_str': 'capital', 'sequence': 'Paris is the capital of France.'}, {'score': 0.018073994666337967, 'token': 46562, 'token_str': 'Capital', 'sequence': 'Paris is the Capital of France.'}, {'score': 0.013238662853837013, 'token': 8696, 'token_str': 'centre', 'sequence': 'Paris is the centre of France.'}, {'score': 0.010450296103954315, 'token': 550136, 'token_str': 'heart', 'sequence': 'Paris is the heart of France.'}, {'score': 0.005028395913541317, 'token': 60041, 'token_str': 'center', 'sequence': 'Paris is the center of France.'}] ``` ## Bias, Risks, and Limitations Please refer to the model card of [XLM-R](https://huggingface.co/xlm-roberta-base), because XLM-V has a similar architecture and has been trained on similar training data. 
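Note that this particular checkpoint, orgcatorg/xlm-v-base-ner, is a token-classification fine-tune of XLM-V, while the snippet above demonstrates the base fill-mask model. A minimal NER sketch follows (not from the original card); the entity label set depends on the fine-tuning data, which the card does not document.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="orgcatorg/xlm-v-base-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("Angela Merkel visitó Buenos Aires en 2018."))
```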
### BibTeX entry and citation info ```bibtex @ARTICLE{2023arXiv230110472L, author = {{Liang}, Davis and {Gonen}, Hila and {Mao}, Yuning and {Hou}, Rui and {Goyal}, Naman and {Ghazvininejad}, Marjan and {Zettlemoyer}, Luke and {Khabsa}, Madian}, title = "{XLM-V: Overcoming the Vocabulary Bottleneck in Multilingual Masked Language Models}", journal = {arXiv e-prints}, keywords = {Computer Science - Computation and Language, Computer Science - Machine Learning}, year = 2023, month = jan, eid = {arXiv:2301.10472}, pages = {arXiv:2301.10472}, doi = {10.48550/arXiv.2301.10472}, archivePrefix = {arXiv}, eprint = {2301.10472}, primaryClass = {cs.CL}, adsurl = {https://ui.adsabs.harvard.edu/abs/2023arXiv230110472L}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} } ```
timm/vit_large_patch14_dinov2.lvd142m
timm
"2024-02-09T18:00:49Z"
44,656
5
timm
[ "timm", "pytorch", "safetensors", "image-feature-extraction", "arxiv:2304.07193", "arxiv:2010.11929", "license:apache-2.0", "region:us" ]
image-feature-extraction
"2023-05-09T21:05:24Z"
--- license: apache-2.0 library_name: timm tags: - image-feature-extraction - timm --- # Model card for vit_large_patch14_dinov2.lvd142m A Vision Transformer (ViT) image feature model. Pretrained on LVD-142M with self-supervised DINOv2 method. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 304.4 - GMACs: 507.1 - Activations (M): 1058.8 - Image size: 518 x 518 - **Papers:** - DINOv2: Learning Robust Visual Features without Supervision: https://arxiv.org/abs/2304.07193 - An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2 - **Original:** https://github.com/facebookresearch/dinov2 - **Pretrain Dataset:** LVD-142M ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm import torch img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('vit_large_patch14_dinov2.lvd142m', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'vit_large_patch14_dinov2.lvd142m', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 1370, 1024) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). ## Citation ```bibtex @misc{oquab2023dinov2, title={DINOv2: Learning Robust Visual Features without Supervision}, author={Oquab, Maxime and Darcet, Timothée and Moutakanni, Theo and Vo, Huy V.
and Szafraniec, Marc and Khalidov, Vasil and Fernandez, Pierre and Haziza, Daniel and Massa, Francisco and El-Nouby, Alaaeldin and Howes, Russell and Huang, Po-Yao and Xu, Hu and Sharma, Vasu and Li, Shang-Wen and Galuba, Wojciech and Rabbat, Mike and Assran, Mido and Ballas, Nicolas and Synnaeve, Gabriel and Misra, Ishan and Jegou, Herve and Mairal, Julien and Labatut, Patrick and Joulin, Armand and Bojanowski, Piotr}, journal={arXiv:2304.07193}, year={2023} } ``` ```bibtex @article{dosovitskiy2020vit, title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale}, author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil}, journal={ICLR}, year={2021} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
PixArt-alpha/pixart_sigma_sdxlvae_T5_diffusers
PixArt-alpha
"2024-04-07T05:35:07Z"
44,649
28
diffusers
[ "diffusers", "safetensors", "license:openrail", "diffusers:PixArtAlphaPipeline", "region:us" ]
text-to-image
"2024-04-01T09:19:10Z"
--- license: openrail ---
typeform/distilbert-base-uncased-mnli
typeform
"2023-03-22T08:49:00Z"
44,640
36
transformers
[ "transformers", "pytorch", "tf", "safetensors", "distilbert", "text-classification", "zero-shot-classification", "en", "dataset:multi_nli", "arxiv:1910.09700", "arxiv:2105.09680", "autotrain_compatible", "endpoints_compatible", "region:us" ]
zero-shot-classification
"2022-03-02T23:29:05Z"
--- language: en pipeline_tag: zero-shot-classification tags: - distilbert datasets: - multi_nli metrics: - accuracy --- # DistilBERT base model (uncased) ## Table of Contents - [Model Details](#model-details) - [How to Get Started With the Model](#how-to-get-started-with-the-model) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-biases) - [Training](#training) - [Evaluation](#evaluation) - [Environmental Impact](#environmental-impact) ## Model Details **Model Description:** This is the [uncased DistilBERT model](https://huggingface.co/distilbert-base-uncased) fine-tuned on [Multi-Genre Natural Language Inference](https://huggingface.co/datasets/multi_nli) (MNLI) dataset for the zero-shot classification task. - **Developed by:** The [Typeform](https://www.typeform.com/) team. - **Model Type:** Zero-Shot Classification - **Language(s):** English - **License:** Unknown - **Parent Model:** See the [distilbert base uncased model](https://huggingface.co/distilbert-base-uncased) for more information about the Distilled-BERT base model. ## How to Get Started with the Model ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("typeform/distilbert-base-uncased-mnli") model = AutoModelForSequenceClassification.from_pretrained("typeform/distilbert-base-uncased-mnli") ``` ## Uses This model can be used for text classification tasks. ## Risks, Limitations and Biases **CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.** Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). ## Training #### Training Data This model of DistilBERT-uncased is pretrained on the Multi-Genre Natural Language Inference [(MultiNLI)](https://huggingface.co/datasets/multi_nli) corpus. It is a crowd-sourced collection of 433k sentence pairs annotated with textual entailment information. The corpus covers a range of genres of spoken and written text, and supports a distinctive cross-genre generalization evaluation. This model is also **not** case-sensitive, i.e., it does not make a difference between "english" and "English". #### Training Procedure Training is done on a [p3.2xlarge](https://aws.amazon.com/ec2/instance-types/p3/) AWS EC2 with the following hyperparameters: ``` $ run_glue.py \ --model_name_or_path distilbert-base-uncased \ --task_name mnli \ --do_train \ --do_eval \ --max_seq_length 128 \ --per_device_train_batch_size 16 \ --learning_rate 2e-5 \ --num_train_epochs 5 \ --output_dir /tmp/distilbert-base-uncased_mnli/ ``` ## Evaluation #### Evaluation Results When fine-tuned on downstream tasks, this model achieves the following results: - **Epoch = ** 5.0 - **Evaluation Accuracy =** 0.8206875508543532 - **Evaluation Loss =** 0.8706700205802917 - ** Evaluation Runtime = ** 17.8278 - ** Evaluation Samples per second = ** 551.498 MNLI and MNLI-mm results: | Task | MNLI | MNLI-mm | |:----:|:----:|:----:| | | 82.0 | 82.0 | ## Environmental Impact Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). We present the hardware type based on the [associated paper](https://arxiv.org/pdf/2105.09680.pdf). 
**Hardware Type:** 1 NVIDIA Tesla V100 GPUs **Hours used:** Unknown **Cloud Provider:** AWS EC2 P3 **Compute Region:** Unknown **Carbon Emitted:** (Power consumption x Time x Carbon produced based on location of power grid): Unknown
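Because the card only shows how to load the raw tokenizer and model, a short example of the zero-shot usage implied by the pipeline tag may help. This is a minimal sketch; the input text and candidate labels are illustrative and not taken from the card:

```python
from transformers import pipeline

# Zero-shot classification reuses the MNLI entailment head: each candidate label is
# turned into a hypothesis ("This example is {label}.") and scored against the text.
classifier = pipeline(
    "zero-shot-classification",
    model="typeform/distilbert-base-uncased-mnli",
)

result = classifier(
    "I have a problem with my iPhone that needs to be resolved as soon as possible!",
    candidate_labels=["urgent", "not urgent", "phone", "tablet", "computer"],
)
print(result["labels"][0], result["scores"][0])  # highest-scoring label and its score
```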
mradermacher/dolphin-2.2-yi-34b-200k-GGUF
mradermacher
"2024-06-26T06:36:58Z"
44,615
0
transformers
[ "transformers", "gguf", "en", "dataset:ehartford/dolphin", "dataset:jondurbin/airoboros-2.2.1", "dataset:ehartford/samantha-data", "dataset:ehartford/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split", "base_model:cognitivecomputations/dolphin-2.2-yi-34b-200k", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-26T04:19:47Z"
--- base_model: cognitivecomputations/dolphin-2.2-yi-34b-200k datasets: - ehartford/dolphin - jondurbin/airoboros-2.2.1 - ehartford/samantha-data - ehartford/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/cognitivecomputations/dolphin-2.2-yi-34b-200k <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/dolphin-2.2-yi-34b-200k-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/dolphin-2.2-yi-34b-200k-GGUF/resolve/main/dolphin-2.2-yi-34b-200k.Q2_K.gguf) | Q2_K | 12.9 | | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.2-yi-34b-200k-GGUF/resolve/main/dolphin-2.2-yi-34b-200k.IQ3_XS.gguf) | IQ3_XS | 14.3 | | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.2-yi-34b-200k-GGUF/resolve/main/dolphin-2.2-yi-34b-200k.Q3_K_S.gguf) | Q3_K_S | 15.1 | | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.2-yi-34b-200k-GGUF/resolve/main/dolphin-2.2-yi-34b-200k.IQ3_S.gguf) | IQ3_S | 15.1 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.2-yi-34b-200k-GGUF/resolve/main/dolphin-2.2-yi-34b-200k.IQ3_M.gguf) | IQ3_M | 15.7 | | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.2-yi-34b-200k-GGUF/resolve/main/dolphin-2.2-yi-34b-200k.Q3_K_M.gguf) | Q3_K_M | 16.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.2-yi-34b-200k-GGUF/resolve/main/dolphin-2.2-yi-34b-200k.Q3_K_L.gguf) | Q3_K_L | 18.2 | | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.2-yi-34b-200k-GGUF/resolve/main/dolphin-2.2-yi-34b-200k.IQ4_XS.gguf) | IQ4_XS | 18.7 | | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.2-yi-34b-200k-GGUF/resolve/main/dolphin-2.2-yi-34b-200k.Q4_K_S.gguf) | Q4_K_S | 19.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.2-yi-34b-200k-GGUF/resolve/main/dolphin-2.2-yi-34b-200k.Q4_K_M.gguf) | Q4_K_M | 20.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.2-yi-34b-200k-GGUF/resolve/main/dolphin-2.2-yi-34b-200k.Q5_K_S.gguf) | Q5_K_S | 23.8 | | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.2-yi-34b-200k-GGUF/resolve/main/dolphin-2.2-yi-34b-200k.Q5_K_M.gguf) | Q5_K_M | 24.4 | | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.2-yi-34b-200k-GGUF/resolve/main/dolphin-2.2-yi-34b-200k.Q6_K.gguf) | Q6_K | 28.3 | very good quality | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.2-yi-34b-200k-GGUF/resolve/main/dolphin-2.2-yi-34b-200k.Q8_0.gguf) | Q8_0 | 36.6 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests 
for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
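The usage section above defers to external READMEs, so here is a minimal sketch of loading one of the listed quants with `llama-cpp-python`. The choice of that library, the context size, and the prompt are assumptions for illustration, not something this card prescribes; the 34B Q4_K_M file still needs roughly 21 GB of memory:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Download one of the quantized files from the table above (Q4_K_M is the
# "fast, recommended" choice) and load it with llama.cpp's Python bindings.
gguf_path = hf_hub_download(
    repo_id="mradermacher/dolphin-2.2-yi-34b-200k-GGUF",
    filename="dolphin-2.2-yi-34b-200k.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Write a one-sentence summary of what a GGUF file is.", max_tokens=64)
print(out["choices"][0]["text"])
```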
fluently/Fluently-XL-v2
fluently
"2024-05-24T21:22:25Z"
44,495
22
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "sdxl", "fluetnly-xl", "fluently", "trained", "text-to-image", "dataset:ehristoforu/midjourney-images", "dataset:ehristoforu/dalle-3-images", "dataset:ehristoforu/fav_images", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:other", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
"2024-03-07T16:37:20Z"
--- license: other license_name: fluently-license license_link: https://huggingface.co/spaces/fluently/License datasets: - ehristoforu/midjourney-images - ehristoforu/dalle-3-images - ehristoforu/fav_images library_name: diffusers pipeline_tag: text-to-image base_model: stabilityai/stable-diffusion-xl-base-1.0 tags: - safetensors - stable-diffusion - sdxl - fluetnly-xl - fluently - trained inference: parameters: num_inference_steps: 25 guidance_scale: 5 negative_prompt: "(deformed, distorted, disfigured:1.3), poorly drawn, bad anatomy, wrong anatomy, extra limb, missing limb, floating limbs, (mutated hands and fingers:1.4), disconnected limbs, mutation, mutated, ugly, disgusting, blurry, amputation" --- # **Fluently XL** V2 - the best XL-model ![preview](images/preview.png) [>>> Run in **RunDiffusion** <<<](https://app.rundiffusion.com/login?modelProvider=civitai&utm_source=civitai&via=civitai&redirect=%2F%3FmodelProvider%3Dcivitai%26utm_source%3Dcivitai%26via%3Dcivitai) Introducing Fluently XL, you are probably ready to argue with the name of the model: “The best XL-model”, but now I will prove to you why it is true. ## About this model The model was obtained through training on *expensive graphics accelerators*, a lot of work was done, now we will show why this XL model is better than others. ### Features - Correct anatomy - Art and realism in one - Controling contrast - Great nature - Great faces without AfterDetailer ### More info Our model is better than others because we do not mix but **train**, but at first it may seem that the model is not very good, but if you are a real professional you will like it. ## Using Optimal parameters in Automatic1111/ComfyUI: - Sampling steps: 20-35 - Sampler method: Euler a/Euler - CFG Scale: 4-6.5 ## End Let's remove models that copy each other from the top and put one that is actually developing, thank you)
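The recommended sampler settings above translate directly into `diffusers` parameters. Below is a minimal sketch assuming a CUDA GPU; the prompt is illustrative, and the negative prompt is copied from the card's inference defaults:

```python
import torch
from diffusers import StableDiffusionXLPipeline, EulerAncestralDiscreteScheduler

pipe = StableDiffusionXLPipeline.from_pretrained(
    "fluently/Fluently-XL-v2",
    torch_dtype=torch.float16,
).to("cuda")
# "Euler a" sampler, matching the recommended Automatic1111/ComfyUI setting.
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

negative_prompt = (
    "(deformed, distorted, disfigured:1.3), poorly drawn, bad anatomy, wrong anatomy, "
    "extra limb, missing limb, floating limbs, (mutated hands and fingers:1.4), "
    "disconnected limbs, mutation, mutated, ugly, disgusting, blurry, amputation"
)

# Steps and CFG follow the card: 20-35 steps, CFG scale 4-6.5.
image = pipe(
    prompt="portrait photo of a woman on a rainy street, natural light",
    negative_prompt=negative_prompt,
    num_inference_steps=25,
    guidance_scale=5.0,
).images[0]
image.save("fluently_xl_v2.png")
```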
Qdrant/bge-base-en-v1.5-onnx-Q
Qdrant
"2024-01-16T08:00:14Z"
44,487
0
transformers
[ "transformers", "onnx", "bert", "feature-extraction", "endpoints_compatible", "text-embeddings-inference", "region:us" ]
feature-extraction
"2024-01-16T07:59:44Z"
Entry not found
Falconsai/text_summarization
Falconsai
"2024-02-17T07:55:14Z"
44,469
162
transformers
[ "transformers", "pytorch", "coreml", "onnx", "safetensors", "t5", "text2text-generation", "summarization", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
summarization
"2023-10-21T00:53:53Z"
--- license: apache-2.0 language: - en pipeline_tag: summarization widget: - text: >- Hugging Face: Revolutionizing Natural Language Processing Introduction In the rapidly evolving field of Natural Language Processing (NLP), Hugging Face has emerged as a prominent and innovative force. This article will explore the story and significance of Hugging Face, a company that has made remarkable contributions to NLP and AI as a whole. From its inception to its role in democratizing AI, Hugging Face has left an indelible mark on the industry. The Birth of Hugging Face Hugging Face was founded in 2016 by Clément Delangue, Julien Chaumond, and Thomas Wolf. The name Hugging Face was chosen to reflect the company's mission of making AI models more accessible and friendly to humans, much like a comforting hug. Initially, they began as a chatbot company but later shifted their focus to NLP, driven by their belief in the transformative potential of this technology. Transformative Innovations Hugging Face is best known for its open-source contributions, particularly the Transformers library. This library has become the de facto standard for NLP and enables researchers, developers, and organizations to easily access and utilize state-of-the-art pre-trained language models, such as BERT, GPT-3, and more. These models have countless applications, from chatbots and virtual assistants to language translation and sentiment analysis. example_title: Summarization Example 1 --- # Model Card: Fine-Tuned T5 Small for Text Summarization ## Model Description The **Fine-Tuned T5 Small** is a variant of the T5 transformer model, designed for the task of text summarization. It is adapted and fine-tuned to generate concise and coherent summaries of input text. The model, named "t5-small," is pre-trained on a diverse corpus of text data, enabling it to capture essential information and generate meaningful summaries. Fine-tuning is conducted with careful attention to hyperparameter settings, including batch size and learning rate, to ensure optimal performance for text summarization. During the fine-tuning process, a batch size of 8 is chosen for efficient computation and learning. Additionally, a learning rate of 2e-5 is selected to balance convergence speed and model optimization. This approach guarantees not only rapid learning but also continuous refinement during training. The fine-tuning dataset consists of a variety of documents and their corresponding human-generated summaries. This diverse dataset allows the model to learn the art of creating summaries that capture the most important information while maintaining coherence and fluency. The goal of this meticulous training process is to equip the model with the ability to generate high-quality text summaries, making it valuable for a wide range of applications involving document summarization and content condensation. ## Intended Uses & Limitations ### Intended Uses - **Text Summarization**: The primary intended use of this model is to generate concise and coherent text summaries. It is well-suited for applications that involve summarizing lengthy documents, news articles, and textual content. 
### How to Use To use this model for text summarization, you can follow these steps: ```python from transformers import pipeline summarizer = pipeline("summarization", model="Falconsai/text_summarization") ARTICLE = """ Hugging Face: Revolutionizing Natural Language Processing Introduction In the rapidly evolving field of Natural Language Processing (NLP), Hugging Face has emerged as a prominent and innovative force. This article will explore the story and significance of Hugging Face, a company that has made remarkable contributions to NLP and AI as a whole. From its inception to its role in democratizing AI, Hugging Face has left an indelible mark on the industry. The Birth of Hugging Face Hugging Face was founded in 2016 by Clément Delangue, Julien Chaumond, and Thomas Wolf. The name "Hugging Face" was chosen to reflect the company's mission of making AI models more accessible and friendly to humans, much like a comforting hug. Initially, they began as a chatbot company but later shifted their focus to NLP, driven by their belief in the transformative potential of this technology. Transformative Innovations Hugging Face is best known for its open-source contributions, particularly the "Transformers" library. This library has become the de facto standard for NLP and enables researchers, developers, and organizations to easily access and utilize state-of-the-art pre-trained language models, such as BERT, GPT-3, and more. These models have countless applications, from chatbots and virtual assistants to language translation and sentiment analysis. Key Contributions: 1. **Transformers Library:** The Transformers library provides a unified interface for more than 50 pre-trained models, simplifying the development of NLP applications. It allows users to fine-tune these models for specific tasks, making it accessible to a wider audience. 2. **Model Hub:** Hugging Face's Model Hub is a treasure trove of pre-trained models, making it simple for anyone to access, experiment with, and fine-tune models. Researchers and developers around the world can collaborate and share their models through this platform. 3. **Hugging Face Transformers Community:** Hugging Face has fostered a vibrant online community where developers, researchers, and AI enthusiasts can share their knowledge, code, and insights. This collaborative spirit has accelerated the growth of NLP. Democratizing AI Hugging Face's most significant impact has been the democratization of AI and NLP. Their commitment to open-source development has made powerful AI models accessible to individuals, startups, and established organizations. This approach contrasts with the traditional proprietary AI model market, which often limits access to those with substantial resources. By providing open-source models and tools, Hugging Face has empowered a diverse array of users to innovate and create their own NLP applications. This shift has fostered inclusivity, allowing a broader range of voices to contribute to AI research and development. Industry Adoption The success and impact of Hugging Face are evident in its widespread adoption. Numerous companies and institutions, from startups to tech giants, leverage Hugging Face's technology for their AI applications. This includes industries as varied as healthcare, finance, and entertainment, showcasing the versatility of NLP and Hugging Face's contributions. Future Directions Hugging Face's journey is far from over. 
As of my last knowledge update in September 2021, the company was actively pursuing research into ethical AI, bias reduction in models, and more. Given their track record of innovation and commitment to the AI community, it is likely that they will continue to lead in ethical AI development and promote responsible use of NLP technologies. Conclusion Hugging Face's story is one of transformation, collaboration, and empowerment. Their open-source contributions have reshaped the NLP landscape and democratized access to AI. As they continue to push the boundaries of AI research, we can expect Hugging Face to remain at the forefront of innovation, contributing to a more inclusive and ethical AI future. Their journey reminds us that the power of open-source collaboration can lead to groundbreaking advancements in technology and bring AI within the reach of many. """ print(summarizer(ARTICLE, max_length=1000, min_length=30, do_sample=False)) >>> [{'summary_text': 'Hugging Face has emerged as a prominent and innovative force in NLP . From its inception to its role in democratizing AI, the company has left an indelible mark on the industry . The name "Hugging Face" was chosen to reflect the company\'s mission of making AI models more accessible and friendly to humans .'}] ``` Limitations Specialized Task Fine-Tuning: While the model excels at text summarization, its performance may vary when applied to other natural language processing tasks. Users interested in employing this model for different tasks should explore fine-tuned versions available in the model hub for optimal results. Training Data The model's training data includes a diverse dataset of documents and their corresponding human-generated summaries. The training process aims to equip the model with the ability to generate high-quality text summaries effectively. Training Stats - Evaluation Loss: 0.012345678901234567 - Evaluation Rouge Score: 0.95 (F1) - Evaluation Runtime: 2.3456 - Evaluation Samples per Second: 1234.56 - Evaluation Steps per Second: 45.678 Responsible Usage It is essential to use this model responsibly and ethically, adhering to content guidelines and applicable regulations when implementing it in real-world applications, particularly those involving potentially sensitive content. References Hugging Face Model Hub T5 Paper Disclaimer: The model's performance may be influenced by the quality and representativeness of the data it was fine-tuned on. Users are encouraged to assess the model's suitability for their specific applications and datasets.
sentence-transformers/gtr-t5-large
sentence-transformers
"2024-03-27T10:41:50Z"
44,458
32
sentence-transformers
[ "sentence-transformers", "pytorch", "safetensors", "t5", "feature-extraction", "sentence-similarity", "en", "arxiv:2112.07899", "license:apache-2.0", "endpoints_compatible", "region:us" ]
sentence-similarity
"2022-03-02T23:29:05Z"
---
language: en
license: apache-2.0
library_name: sentence-transformers
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
pipeline_tag: sentence-similarity
---

# sentence-transformers/gtr-t5-large

This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space. The model was specifically trained for the task of semantic search.

This model was converted from the TensorFlow model [gtr-large-1](https://tfhub.dev/google/gtr/gtr-large/1) to PyTorch. When using this model, have a look at the publication: [Large Dual Encoders Are Generalizable Retrievers](https://arxiv.org/abs/2112.07899). The tfhub model and this PyTorch model can produce slightly different embeddings; however, when run on the same benchmarks, they produce identical results.

The model uses only the encoder from a T5-large model. The weights are stored in FP16.

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('sentence-transformers/gtr-t5-large')
embeddings = model.encode(sentences)
print(embeddings)
```

The model requires sentence-transformers version 2.2.0 or newer.

## Evaluation Results

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/gtr-t5-large)

## Citing & Authors

If you find this model helpful, please cite the respective publication: [Large Dual Encoders Are Generalizable Retrievers](https://arxiv.org/abs/2112.07899)
google/bigbird-roberta-base
google
"2021-06-02T14:30:54Z"
44,414
44
transformers
[ "transformers", "pytorch", "jax", "big_bird", "pretraining", "en", "dataset:bookcorpus", "dataset:wikipedia", "dataset:cc_news", "arxiv:2007.14062", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2022-03-02T23:29:05Z"
---
language: en
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
- cc_news
---

# BigBird base model

BigBird is a sparse-attention based transformer which extends Transformer based models, such as BERT, to much longer sequences. Moreover, BigBird comes along with a theoretical understanding of the capabilities of a complete transformer that the sparse model can handle.

It is a pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this [paper](https://arxiv.org/abs/2007.14062) and first released in this [repository](https://github.com/google-research/bigbird).

Disclaimer: The team releasing BigBird did not write a model card for this model so this model card has been written by the Hugging Face team.

## Model description

BigBird relies on **block sparse attention** instead of normal attention (i.e. BERT's attention) and can handle sequences up to a length of 4096 at a much lower compute cost compared to BERT. It has achieved SOTA on various tasks involving very long sequences such as long document summarization and question-answering with long contexts.

## How to use

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import AutoTokenizer, BigBirdModel

tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-base")

# by default it is in `block_sparse` mode with num_random_blocks=3, block_size=64
model = BigBirdModel.from_pretrained("google/bigbird-roberta-base")

# you can change `attention_type` to full attention like this:
model = BigBirdModel.from_pretrained("google/bigbird-roberta-base", attention_type="original_full")

# you can change `block_size` & `num_random_blocks` like this:
model = BigBirdModel.from_pretrained("google/bigbird-roberta-base", block_size=16, num_random_blocks=2)

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

## Training Data

This model is pre-trained on four publicly available datasets: **Books**, **CC-News**, **Stories** and **Wikipedia**. It uses the same sentencepiece vocabulary as RoBERTa (which is in turn borrowed from GPT2).

## Training Procedure

Documents longer than 4096 tokens were split into multiple documents, and documents that were much shorter than 4096 tokens were joined. Following the original BERT training, 15% of tokens were masked and the model is trained to predict the masked tokens. The model is warm-started from RoBERTa's checkpoint.

## BibTeX entry and citation info

```tex
@misc{zaheer2021big,
      title={Big Bird: Transformers for Longer Sequences},
      author={Manzil Zaheer and Guru Guruganesh and Avinava Dubey and Joshua Ainslie and Chris Alberti and Santiago Ontanon and Philip Pham and Anirudh Ravula and Qifan Wang and Li Yang and Amr Ahmed},
      year={2021},
      eprint={2007.14062},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
```
naver/efficient-splade-V-large-doc
naver
"2022-07-08T11:37:17Z"
44,337
4
transformers
[ "transformers", "pytorch", "distilbert", "fill-mask", "splade", "query-expansion", "document-expansion", "bag-of-words", "passage-retrieval", "knowledge-distillation", "document encoder", "en", "dataset:ms_marco", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2022-07-05T09:45:02Z"
--- license: cc-by-nc-sa-4.0 language: "en" tags: - splade - query-expansion - document-expansion - bag-of-words - passage-retrieval - knowledge-distillation - document encoder datasets: - ms_marco --- ## Efficient SPLADE Efficient SPLADE model for passage retrieval. This architecture uses two distinct models for query and document inference. This is the **doc** one, please also download the **query** one (https://huggingface.co/naver/efficient-splade-V-large-query). For additional details, please visit: * paper: https://dl.acm.org/doi/10.1145/3477495.3531833 * code: https://github.com/naver/splade | | MRR@10 (MS MARCO dev) | R@1000 (MS MARCO dev) | Latency (PISA) ms | Latency (Inference) ms | --- | --- | --- | --- | --- | | `naver/efficient-splade-V-large` | 38.8 | 98.0 | 29.0 | 45.3 | `naver/efficient-splade-VI-BT-large` | 38.0 | 97.8 | 31.1 | 0.7 ## Citation If you use our checkpoint, please cite our work (need to update): ``` @inproceedings{10.1145/3477495.3531833, author = {Lassance, Carlos and Clinchant, St\'{e}phane}, title = {An Efficiency Study for SPLADE Models}, year = {2022}, isbn = {9781450387323}, publisher = {Association for Computing Machinery}, address = {New York, NY, USA}, url = {https://doi.org/10.1145/3477495.3531833}, doi = {10.1145/3477495.3531833}, abstract = {Latency and efficiency issues are often overlooked when evaluating IR models based on Pretrained Language Models (PLMs) in reason of multiple hardware and software testing scenarios. Nevertheless, efficiency is an important part of such systems and should not be overlooked. In this paper, we focus on improving the efficiency of the SPLADE model since it has achieved state-of-the-art zero-shot performance and competitive results on TREC collections. SPLADE efficiency can be controlled via a regularization factor, but solely controlling this regularization has been shown to not be efficient enough. In order to reduce the latency gap between SPLADE and traditional retrieval systems, we propose several techniques including L1 regularization for queries, a separation of document/query encoders, a FLOPS-regularized middle-training, and the use of faster query encoders. Our benchmark demonstrates that we can drastically improve the efficiency of these models while increasing the performance metrics on in-domain data. To our knowledge, we propose the first neural models that, under the same computing constraints, achieve similar latency (less than 4ms difference) as traditional BM25, while having similar performance (less than 10% MRR@10 reduction) as the state-of-the-art single-stage neural rankers on in-domain data.}, booktitle = {Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval}, pages = {2220–2226}, numpages = {7}, keywords = {splade, latency, information retrieval, sparse representations}, location = {Madrid, Spain}, series = {SIGIR '22} } ```
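The card describes the split query/document encoder setup but does not show how the document encoder's masked-LM logits become a sparse bag-of-words vector. Below is a minimal sketch of the usual SPLADE pooling (log-saturated ReLU, max-pooled over token positions); the pooling details follow the linked repository's approach and the example passage is illustrative, so treat both as assumptions rather than this card's prescription:

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_id = "naver/efficient-splade-V-large-doc"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

doc = "Thermal stress is created when one area of a glass pane gets hotter than an adjacent area."
tokens = tokenizer(doc, return_tensors="pt")

with torch.no_grad():
    logits = model(**tokens).logits  # (1, seq_len, vocab_size)

# SPLADE document representation: log(1 + ReLU(logits)), masked so padding does not
# contribute, then max-pooled over the sequence to get one weight per vocabulary term.
weights = torch.log1p(torch.relu(logits)) * tokens["attention_mask"].unsqueeze(-1)
doc_rep = weights.max(dim=1).values.squeeze(0)  # (vocab_size,) mostly-zero vector

# Inspect the highest-weighted terms (the expanded bag of words for this document).
top = torch.topk(doc_rep, k=10)
print([(tokenizer.decode([i]), round(v, 2)) for v, i in zip(top.values.tolist(), top.indices.tolist())])
```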
state-spaces/mamba-130m-hf
state-spaces
"2024-03-06T00:39:29Z"
44,231
32
transformers
[ "transformers", "safetensors", "mamba", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-03-06T00:07:35Z"
---
library_name: transformers
tags: []
---

# Mamba

<!-- Provide a quick summary of what the model is/does. -->
This repository contains the `transformers`-compatible `mamba-130m`. The checkpoints are untouched, but the full `config.json` and tokenizer are pushed to this repo.

# Usage

You need to install `transformers` from `main` until `transformers=4.39.0` is released.
```bash
pip install git+https://github.com/huggingface/transformers@main
```

We also recommend installing both `causal_conv_1d` and `mamba-ssm` using:

```bash
pip install causal-conv1d>=1.2.0
pip install mamba-ssm
```

If either of these two is not installed, the "eager" implementation will be used. Otherwise the more optimised `cuda` kernels will be used.

## Generation

You can use the classic `generate` API:
```python
>>> from transformers import MambaConfig, MambaForCausalLM, AutoTokenizer
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("state-spaces/mamba-130m-hf")
>>> model = MambaForCausalLM.from_pretrained("state-spaces/mamba-130m-hf")
>>> input_ids = tokenizer("Hey how are you doing?", return_tensors="pt")["input_ids"]

>>> out = model.generate(input_ids, max_new_tokens=10)
>>> print(tokenizer.batch_decode(out))
["Hey how are you doing?\n\nI'm so glad you're here."]
```

## PEFT finetuning example

In order to finetune using the `peft` library, we recommend keeping the model in float32!

```python
from datasets import load_dataset
from trl import SFTTrainer
from peft import LoraConfig
from transformers import AutoTokenizer, AutoModelForCausalLM, TrainingArguments

tokenizer = AutoTokenizer.from_pretrained("state-spaces/mamba-130m-hf")
model = AutoModelForCausalLM.from_pretrained("state-spaces/mamba-130m-hf")
dataset = load_dataset("Abirate/english_quotes", split="train")

training_args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=3,
    per_device_train_batch_size=4,
    logging_dir='./logs',
    logging_steps=10,
    learning_rate=2e-3
)

lora_config = LoraConfig(
    r=8,
    target_modules=["x_proj", "embeddings", "in_proj", "out_proj"],
    task_type="CAUSAL_LM",
    bias="none"
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    args=training_args,
    peft_config=lora_config,
    train_dataset=dataset,
    dataset_text_field="quote",
)

trainer.train()
```
touchtech/fashion-images-gender-age-vit-large-patch16-224-in21k-v3
touchtech
"2023-09-13T02:30:46Z"
44,204
1
transformers
[ "transformers", "pytorch", "vit", "image-classification", "vision", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-large-patch16-224-in21k", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2023-09-12T20:29:52Z"
--- license: apache-2.0 base_model: google/vit-large-patch16-224-in21k tags: - image-classification - vision - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: fashion-images-gender-age-vit-large-patch16-224-in21k-v3 results: - task: name: Image Classification type: image-classification dataset: name: touchtech/fashion-images-gender-age type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.9959630911188004 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # fashion-images-gender-age-vit-large-patch16-224-in21k-v3 This model is a fine-tuned version of [google/vit-large-patch16-224-in21k](https://huggingface.co/google/vit-large-patch16-224-in21k) on the touchtech/fashion-images-gender-age dataset. It achieves the following results on the evaluation set: - Loss: 0.0223 - Accuracy: 0.9960 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 1337 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.1868 | 1.0 | 2457 | 0.0547 | 0.9853 | | 0.1209 | 2.0 | 4914 | 0.0401 | 0.9888 | | 0.1027 | 3.0 | 7371 | 0.0262 | 0.9937 | | 0.0654 | 4.0 | 9828 | 0.0223 | 0.9960 | | 0.0542 | 5.0 | 12285 | 0.0273 | 0.9948 | ### Framework versions - Transformers 4.33.0.dev0 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
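Since the card documents the fine-tuning setup but no inference snippet, here is a minimal sketch using the standard image-classification pipeline; the image path is a placeholder, not a file shipped with the model:

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="touchtech/fashion-images-gender-age-vit-large-patch16-224-in21k-v3",
)

# Any local path or URL to a fashion product photo works here; this one is a placeholder.
preds = classifier("path/to/fashion_image.jpg")
print(preds)  # list of {"label": ..., "score": ...} entries sorted by score
```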
TitanML/tiny-mixtral
TitanML
"2024-04-24T13:35:00Z"
44,120
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-24T13:34:32Z"
Entry not found
AdamCodd/distilbert-base-uncased-finetuned-sentiment-amazon
AdamCodd
"2023-11-10T17:35:29Z"
44,101
5
transformers
[ "transformers", "pytorch", "onnx", "safetensors", "distilbert", "text-classification", "dataset:amazon_polarity", "base_model:distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2023-10-06T23:02:11Z"
--- license: apache-2.0 datasets: - amazon_polarity base_model: distilbert-base-uncased model-index: - name: distilbert-base-uncased-finetuned-sentiment-amazon results: - task: type: text-classification name: Text Classification dataset: name: amazon_polarity type: sentiment args: default metrics: - type: accuracy value: 0.961 name: Accuracy - type: loss value: 0.116 name: Loss - type: f1 value: 0.960 name: F1 - task: type: text-classification name: Text Classification dataset: name: amazon_polarity type: amazon_polarity config: amazon_polarity split: test metrics: - type: accuracy value: 0.94112 name: Accuracy verified: true verifyToken: >- eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzlmMzdhYjNmN2U0NDBkM2U5ZDgwNzc3YjE1OGE4MWUxMDY1N2U0ODc0YzllODE5ODIyMzdkOWFhNzVjYmI5MyIsInZlcnNpb24iOjF9.3nlcLa4IpPQtklp7_U9XzC__Q_JVf_cWs6JVVII8trhX5zg_q9HEyQOQs4sRf6O-lIJg8zb3mgobZDJShuSJAQ - type: precision value: 0.9321570625232675 name: Precision verified: true verifyToken: >- eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZjI2MDY4NGNlYjhjMGMxODBiNTc2ZjM5YzY1NjkxNTU4MDA2ZDIyY2QyZjUyZmE4YWY0N2Y1ODU5YTc2ZDM0NiIsInZlcnNpb24iOjF9.egEikTa2UyHV6SAGkHJKaa8FRwGHoZmJRCmqUQaJqeF5yxkz2V-WeCHoWDrCXsHCbXEs8UhLlyo7Lr83BPfkBg - type: recall value: 0.95149 name: Recall verified: true verifyToken: >- eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiM2E3M2Y3MDU4ZTM2YjdlZjQ0NTY3NGYwMmQ3NTk5ZmZkZWUwZWZiZDZjNjk2ZWE5MmY4MmZiM2FmN2U2M2QyNCIsInZlcnNpb24iOjF9.4VNbiWRmSee4cxuIZ5m7bN30i4BpK7xtHQ1BF8AuFIXkWQgzOmGdX35bLhLGWW8KL3ClA4RDPVBKYCIrw0YUBw - type: auc value: 0.9849019044624999 name: AUC verified: true verifyToken: >- eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYTkwODk2ZTUwOTViNjBhYTU0ODk1MDA3MDY1NDkyZDc2YmRlNTQzNDE3YmE3YTVkYjNhN2JmMDAxZWQ0NjUxZSIsInZlcnNpb24iOjF9.YEr6OhqOL7QnqYqjUTQFMdkgU_uS1-vVnkJtn_-1UwSoX754UV_bL9S9KSH3DX4m5QFoRXdZxfeOocm1JbzaCA - type: f1 value: 0.9417243188138998 name: F1 verified: true verifyToken: >- eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzIyMmViNTQ3ZGU0M2I5ZmRjOGI1OWMwZGEwYmE5OGU5YTZlZTkzZjdkOTQ4YzJmOTc2MDliMDY4NDQ1NGRlNyIsInZlcnNpb24iOjF9.p05MGHTfHTAzp4u-qfiIn6Zmh5c3TW_uwjXWgbb982pL_oCILQb6jFXqhPpWXL321fPye7qaUVbGhcTJd8sdCA - type: loss value: 0.16342754662036896 name: loss verified: true verifyToken: >- eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNzgxMDc4M2IxYjhkNjRhZmYyNzY1MTNkNzhmYjk2NmU1NjFiOTk1NDIzNzI1ZGU3MDYyYjQ2YmQ1NTI2N2NhMyIsInZlcnNpb24iOjF9.Zuf0nzn8XdvwRChKtE9CwJ0pgpc6Zey6oTR3jRiSkvNY2sNbo2bvAgFimGzgGYkDvRvYkTCXzCyxdb27l3QnAg --- # distilbert-sentiment This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on a subset of the [amazon-polarity dataset](https://huggingface.co/datasets/amazon_polarity). <b>[Update 10/10/23]</b> The model has been retrained on a larger part of the dataset with an improvement on the loss, f1 score and accuracy. It achieves the following results on the evaluation set: - Loss: 0.116 - Accuracy: 0.961 - F1_score: 0.960 ## Model description This sentiment classifier has been trained on 360_000 samples for the training set, 40_000 samples for the validation set and 40_000 samples for the test set. 
## Intended uses & limitations ```python from transformers import pipeline # Create the pipeline sentiment_classifier = pipeline('text-classification', model='AdamCodd/distilbert-base-uncased-finetuned-sentiment-amazon') # Now you can use the pipeline to get the sentiment result = sentiment_classifier("This product doesn't fit me at all.") print(result) #[{'label': 'negative', 'score': 0.9994848966598511}] ``` ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 1270 - optimizer: AdamW with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 150 - num_epochs: 2 - weight_decay: 0.01 ### Training results (Previous results before retraining from the model evaluator) | key | value | | --- | ----- | | eval_accuracy | 0.94112 | | eval_auc | 0.9849 | | eval_f1_score | 0.9417 | | eval_precision | 0.9321 | | eval_recall | 0.95149 | ### Framework versions - Transformers 4.34.0 - Pytorch lightning 2.0.9 - Tokenizers 0.14.0 If you want to support me, you can [here](https://ko-fi.com/adamcodd).
facebook/wav2vec2-large-960h
facebook
"2022-04-05T16:40:42Z"
43,954
20
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "speech", "en", "dataset:librispeech_asr", "arxiv:2006.11477", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2022-03-02T23:29:05Z"
---
language: en
datasets:
- librispeech_asr
tags:
- speech
license: apache-2.0
---

# Wav2Vec2-Large-960h

[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/)

The large model pretrained and fine-tuned on 960 hours of Librispeech on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16kHz.

[Paper](https://arxiv.org/abs/2006.11477)

Authors: Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli

**Abstract**

We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data.

The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.

# Usage

To transcribe audio files the model can be used as a standalone acoustic model as follows:

```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import torch

# load model and processor
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-large-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large-960h")

# load dummy dataset and read soundfiles
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")

# tokenize
input_values = processor(ds[0]["audio"]["array"], return_tensors="pt", padding="longest").input_values  # Batch size 1

# retrieve logits
logits = model(input_values).logits

# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```

## Evaluation

This code snippet shows how to evaluate **facebook/wav2vec2-large-960h** on LibriSpeech's "clean" and "other" test data.

```python
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import torch
from jiwer import wer

librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")

model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large-960h").to("cuda")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-large-960h")

def map_to_pred(batch):
    input_values = processor(batch["audio"]["array"], return_tensors="pt", padding="longest").input_values
    with torch.no_grad():
        logits = model(input_values.to("cuda")).logits

    predicted_ids = torch.argmax(logits, dim=-1)
    transcription = processor.batch_decode(predicted_ids)
    batch["transcription"] = transcription
    return batch

result = librispeech_eval.map(map_to_pred, batched=True, batch_size=1, remove_columns=["audio"])

print("WER:", wer(result["text"], result["transcription"]))
```

*Result (WER)*:

| "clean" | "other" |
|---|---|
| 2.8 | 6.3 |
google/siglip-large-patch16-256
google
"2024-01-19T23:33:55Z"
43,804
8
transformers
[ "transformers", "safetensors", "siglip", "zero-shot-image-classification", "vision", "arxiv:2303.15343", "arxiv:2209.06794", "license:apache-2.0", "endpoints_compatible", "region:us" ]
zero-shot-image-classification
"2024-01-08T12:48:50Z"
--- license: apache-2.0 tags: - vision widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-dog-music.png candidate_labels: playing music, playing sports example_title: Cat & Dog --- # SigLIP (large-sized model) SigLIP model pre-trained on WebLi at resolution 256x256. It was introduced in the paper [Sigmoid Loss for Language Image Pre-Training](https://arxiv.org/abs/2303.15343) by Zhai et al. and first released in [this repository](https://github.com/google-research/big_vision). Disclaimer: The team releasing SigLIP did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description SigLIP is [CLIP](https://huggingface.co/docs/transformers/model_doc/clip), a multimodal model, with a better loss function. The sigmoid loss operates solely on image-text pairs and does not require a global view of the pairwise similarities for normalization. This allows further scaling up the batch size, while also performing better at smaller batch sizes. A TLDR of SigLIP by one of the authors can be found [here](https://twitter.com/giffmana/status/1692641733459267713). ## Intended uses & limitations You can use the raw model for tasks like zero-shot image classification and image-text retrieval. See the [model hub](https://huggingface.co/models?search=google/siglip) to look for other versions on a task that interests you. ### How to use Here is how to use this model to perform zero-shot image classification: ```python from PIL import Image import requests from transformers import AutoProcessor, AutoModel import torch model = AutoModel.from_pretrained("google/siglip-base-patch16-256") processor = AutoProcessor.from_pretrained("google/siglip-base-patch16-256") url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) texts = ["a photo of 2 cats", "a photo of 2 dogs"] inputs = processor(text=texts, images=image, padding="max_length", return_tensors="pt") with torch.no_grad(): outputs = model(**inputs) logits_per_image = outputs.logits_per_image probs = torch.sigmoid(logits_per_image) # these are the probabilities print(f"{probs[0][0]:.1%} that image 0 is '{texts[0]}'") ``` Alternatively, one can leverage the pipeline API which abstracts away the complexity for the user: ```python from transformers import pipeline from PIL import Image import requests # load pipe image_classifier = pipeline(task="zero-shot-image-classification", model="google/siglip-base-patch16-256") # load image url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) # inference outputs = image_classifier(image, candidate_labels=["2 cats", "a plane", "a remote"]) outputs = [{"score": round(output["score"], 4), "label": output["label"] } for output in outputs] print(outputs) ``` For more code examples, we refer to the [documentation](https://huggingface.co/transformers/main/model_doc/siglip.html#). ## Training procedure ### Training data SigLIP is pre-trained on the English image-text pairs of the WebLI dataset [(Chen et al., 2023)](https://arxiv.org/abs/2209.06794). ### Preprocessing Images are resized/rescaled to the same resolution (256x256) and normalized across the RGB channels with mean (0.5, 0.5, 0.5) and standard deviation (0.5, 0.5, 0.5). Texts are tokenized and padded to the same length (64 tokens). ### Compute The model was trained on 16 TPU-v4 chips for three days. 
## Evaluation results Evaluation of SigLIP compared to CLIP is shown below (taken from the paper). <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/siglip_table.jpeg" alt="drawing" width="600"/> ### BibTeX entry and citation info ```bibtex @misc{zhai2023sigmoid, title={Sigmoid Loss for Language Image Pre-Training}, author={Xiaohua Zhai and Basil Mustafa and Alexander Kolesnikov and Lucas Beyer}, year={2023}, eprint={2303.15343}, archivePrefix={arXiv}, primaryClass={cs.CV} } ```
ALINEAR/albert-japanese-v2
ALINEAR
"2020-05-04T13:20:53Z"
43,783
2
transformers
[ "transformers", "pytorch", "albert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2022-03-02T23:29:04Z"
Entry not found
bartowski/Llama-3-SauerkrautLM-8b-Instruct-exl2
bartowski
"2024-04-22T21:06:00Z"
43,714
1
null
[ "two stage dpo", "dpo", "text-generation", "de", "en", "license:other", "region:us" ]
text-generation
"2024-04-22T21:05:59Z"
--- language: - de - en tags: - two stage dpo - dpo license: other license_name: llama3 license_link: LICENSE extra_gated_prompt: >- ### META LLAMA 3 COMMUNITY LICENSE AGREEMENT Meta Llama 3 Version Release Date: April 18, 2024 "Agreement" means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein. "Documentation" means the specifications, manuals and documentation accompanying Meta Llama 3 distributed by Meta at https://llama.meta.com/get-started/. "Licensee" or "you" means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf. "Meta Llama 3" means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://llama.meta.com/llama-downloads. "Llama Materials" means, collectively, Meta’s proprietary Meta Llama 3 and Documentation (and any portion thereof) made available under this Agreement. "Meta" or "we" means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland). 1. License Rights and Redistribution. a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials. b. Redistribution and Use. i. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service that uses any of them, including another AI model, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display “Built with Meta Llama 3” on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include “Llama 3” at the beginning of any such AI model name. ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you. iii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a “Notice” text file distributed as a part of such copies: “Meta Llama 3 is licensed under the Meta Llama 3 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.” iv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://llama.meta.com/llama3/use-policy), which is hereby incorporated by reference into this Agreement. v. 
You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Meta Llama 3 or derivative works thereof). 2. Additional Commercial Terms. If, on the Meta Llama 3 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights. 3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS. 4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING. 5. Intellectual Property. a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use “Llama 3” (the “Mark”) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/ ). All goodwill arising out of your use of the Mark will inure to the benefit of Meta. b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications. c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials. 6. Term and Termination. 
The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement. 7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement. ### Meta Llama 3 Acceptable Use Policy Meta is committed to promoting safe and fair use of its tools and features, including Meta Llama 3. If you access or use Meta Llama 3, you agree to this Acceptable Use Policy (“Policy”). The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy) #### Prohibited Uses We want everyone to use Meta Llama 3 safely and responsibly. You agree you will not use, or allow others to use, Meta Llama 3 to: 1. Violate the law or others’ rights, including to: 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as: 1. Violence or terrorism 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material 3. Human trafficking, exploitation, and sexual violence 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials. 5. Sexual solicitation 6. Any other criminal activity 2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals 3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services 4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices 5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws 6. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials 7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system 2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Meta Llama 3 related to the following: 1. 
Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State 2. Guns and illegal weapons (including weapon development) 3. Illegal drugs and regulated/controlled substances 4. Operation of critical infrastructure, transportation technologies, or heavy machinery 5. Self-harm or harm to others, including suicide, cutting, and eating disorders 6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual 3. Intentionally deceive or mislead others, including use of Meta Llama 3 related to the following: 1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation 2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content 3. Generating, promoting, or further distributing spam 4. Impersonating another individual without consent, authorization, or legal right 5. Representing that the use of Meta Llama 3 or outputs are human-generated 6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement 4. Fail to appropriately disclose to end users any known dangers of your AI system Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation of this Policy through one of the following means: * Reporting issues with the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3) * Reporting risky content generated by the model: developers.facebook.com/llama_output_feedback * Reporting bugs and security concerns: facebook.com/whitehat/info * Reporting violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: LlamaUseReport@meta.com extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text geo: ip_location By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit quantized_by: bartowski pipeline_tag: text-generation --- ## Exllama v2 Quantizations of Llama-3-SauerkrautLM-8b-Instruct Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.19">turboderp's ExLlamaV2 v0.0.19</a> for quantization. <b>The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)</b> Each branch contains an individual bits-per-weight setting, and the main branch contains only the measurement.json for further conversions. 
Original model: https://huggingface.co/VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct ## Prompt format ``` <|begin_of_text|><|start_header_id|>system<|end_header_id|> {system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|> {prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|> ``` ## Available sizes | Branch | Bits | lm_head bits | VRAM (4k) | VRAM (8K) | VRAM (16k) | VRAM (32k) | Description | | ----- | ---- | ------- | ------ | ------ | ------ | ------ | ------------ | | [8_0](https://huggingface.co/bartowski/Llama-3-SauerkrautLM-8b-Instruct-exl2/tree/8_0) | 8.0 | 8.0 | 10.1 GB | 10.5 GB | 11.5 GB | 13.6 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. | | [6_5](https://huggingface.co/bartowski/Llama-3-SauerkrautLM-8b-Instruct-exl2/tree/6_5) | 6.5 | 8.0 | 8.9 GB | 9.3 GB | 10.3 GB | 12.4 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. | | [5_0](https://huggingface.co/bartowski/Llama-3-SauerkrautLM-8b-Instruct-exl2/tree/5_0) | 5.0 | 6.0 | 7.7 GB | 8.1 GB | 9.1 GB | 11.2 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. | | [4_25](https://huggingface.co/bartowski/Llama-3-SauerkrautLM-8b-Instruct-exl2/tree/4_25) | 4.25 | 6.0 | 7.0 GB | 7.4 GB | 8.4 GB | 10.5 GB | GPTQ equivalent bits per weight, slightly higher quality. | | [3_5](https://huggingface.co/bartowski/Llama-3-SauerkrautLM-8b-Instruct-exl2/tree/3_5) | 3.5 | 6.0 | 6.4 GB | 6.8 GB | 7.8 GB | 9.9 GB | Lower quality, only use if you have to. | ## Download instructions With git: ```shell git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/Llama-3-SauerkrautLM-8b-Instruct-exl2 Llama-3-SauerkrautLM-8b-Instruct-exl2-6_5 ``` With huggingface hub (credit to TheBloke for instructions): ```shell pip3 install huggingface-hub ``` To download a specific branch, use the `--revision` parameter. For example, to download the 6.5 bpw branch: Linux: ```shell huggingface-cli download bartowski/Llama-3-SauerkrautLM-8b-Instruct-exl2 --revision 6_5 --local-dir Llama-3-SauerkrautLM-8b-Instruct-exl2-6_5 --local-dir-use-symlinks False ``` Windows (which apparently doesn't like _ in folders sometimes?): ```shell huggingface-cli download bartowski/Llama-3-SauerkrautLM-8b-Instruct-exl2 --revision 6_5 --local-dir Llama-3-SauerkrautLM-8b-Instruct-exl2-6.5 --local-dir-use-symlinks False ``` Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
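If you prefer scripting the download from Python rather than using git or the CLI commands above, a minimal sketch with `huggingface_hub.snapshot_download` follows; the local directory name is just an example:

```python
from huggingface_hub import snapshot_download

# Download only the 6.5 bpw branch (revision) of the exl2 repo.
snapshot_download(
    repo_id="bartowski/Llama-3-SauerkrautLM-8b-Instruct-exl2",
    revision="6_5",
    local_dir="Llama-3-SauerkrautLM-8b-Instruct-exl2-6_5",
)
```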
Lin-Chen/ShareCaptioner-Video
Lin-Chen
"2024-06-11T11:07:00Z"
43,699
9
transformers
[ "transformers", "pytorch", "internlmxcomposer2", "text-generation", "conversational", "custom_code", "dataset:ShareGPT4Video/ShareGPT4Video", "arxiv:2406.04325", "autotrain_compatible", "region:us" ]
text-generation
"2024-06-06T16:26:38Z"
--- inference: false datasets: - ShareGPT4Video/ShareGPT4Video --- <br> <br> # ShareCaptioner-Video Model Card ## Model details **Model type:** ShareCaptioner-Video is an open-source captioner fine-tuned on GPT4V-assisted [ShareGPT4Video](https://huggingface.co/datasets/Lin-Chen/ShareGPT4Video) detailed caption data, supporting videos of various durations, aspect ratios, and resolutions. ShareCaptioner-Video is based on the [InternLM-Xcomposer2-4KHD](https://github.com/InternLM/InternLM-XComposer) model. ShareCaptioner-Video features 4 roles: - **Fast Captioning:** The model employs an image-grid format for direct video captioning, providing rapid generation speeds that are ideal for short videos. In practice, we concatenate all the keyframes of a video into a vertically elongated image and train the model on a caption task (see the keyframe-grid sketch at the end of this card). - **Sliding Captioning:** The model supports streaming captioning in a differential sliding-window format, yielding high-quality captions that are suitable for long videos. We take the two adjacent keyframes alongside the previous differential caption as input, and train the model to describe the events occurring between them. - **Clip Summarizing:** The model can swiftly summarize any clip from ShareGPT4Video or videos that have undergone the differential sliding-window captioning process, eliminating the need to re-process frames. We use all the differential descriptions as input, and the output is the video caption. - **Prompt Re-Captioning:** The model can rephrase prompts provided by users who focus on specific video generation areas, ensuring that T2VMs trained on high-quality video-caption data maintain format alignment during inference with their training. In practice, we use GPT-4 to generate Sora-style prompts for our dense captions, and we train the re-captioning task in reverse, i.e., by using the generated prompt as input and the dense caption as the training target. **Model date:** ShareCaptioner-Video was trained in May 2024. **Paper or resources for more information:** [[Project](https://ShareGPT4Video.github.io/)] [[Paper](https://arxiv.org/abs/2406.04325)] [[Code](https://github.com/ShareGPT4Omni/ShareGPT4Video)] ## Intended use **Primary intended uses:** The primary use of ShareCaptioner-Video is to produce high-quality video captions. **Primary intended users:** The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence. ## Finetuning dataset - 40K GPT4V-generated video-caption pairs - 40K differential sliding-window captioning conversations - 40K prompt-to-caption textual data ## Paper arxiv.org/abs/2406.04325
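To make the fast-captioning mode above concrete, here is a minimal sketch (not from the authors' code) of building the vertically elongated keyframe grid with Pillow; frame extraction and the captioning call itself are left out, and the target width is an arbitrary choice:

```python
from PIL import Image

def build_keyframe_grid(frame_paths, width=448):
    """Stack sampled keyframes vertically into one elongated image,
    as described for the fast-captioning role (illustrative sketch only)."""
    frames = [Image.open(p).convert("RGB") for p in frame_paths]
    # Resize every frame to a common width, keeping its aspect ratio.
    resized = []
    for f in frames:
        h = round(f.height * width / f.width)
        resized.append(f.resize((width, h)))
    # Paste the resized frames top-to-bottom into one tall canvas.
    grid = Image.new("RGB", (width, sum(f.height for f in resized)))
    y = 0
    for f in resized:
        grid.paste(f, (0, y))
        y += f.height
    return grid

# Example: grid = build_keyframe_grid(["kf_00.jpg", "kf_01.jpg", "kf_02.jpg"])
```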
Yntec/photoMovieRealistic
Yntec
"2024-04-17T20:43:16Z"
43,664
18
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "MagicArt35", "Photorealistic", "cinestill", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2023-08-05T07:31:21Z"
--- license: creativeml-openrail-m library_name: diffusers pipeline_tag: text-to-image tags: - stable-diffusion - stable-diffusion-diffusers - diffusers - text-to-image - MagicArt35 - Photorealistic - cinestill --- # Photo Movie Realistic Original page: https://civitai.com/models/95413/photo-movie-realistic
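The card itself is only a pointer to the original page; for reference, a minimal text-to-image sketch using the standard diffusers `StableDiffusionPipeline` API (the prompt and settings are illustrative, not from the author):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the checkpoint; drop torch_dtype and .to("cuda") to run on CPU.
pipe = StableDiffusionPipeline.from_pretrained(
    "Yntec/photoMovieRealistic", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

image = pipe("cinestill photo of a rainy city street at night").images[0]
image.save("photo_movie_realistic.png")
```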
Helsinki-NLP/opus-mt-vi-en
Helsinki-NLP
"2023-08-16T12:08:32Z"
43,508
8
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "vi", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
"2022-03-02T23:29:04Z"
--- language: - vi - en tags: - translation license: apache-2.0 --- ### vie-eng * source group: Vietnamese * target group: English * OPUS readme: [vie-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/vie-eng/README.md) * model: transformer-align * source language(s): vie vie_Hani * target language(s): eng * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-eng/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-eng/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-eng/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.vie.eng | 42.8 | 0.608 | ### System Info: - hf_name: vie-eng - source_languages: vie - target_languages: eng - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/vie-eng/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['vi', 'en'] - src_constituents: {'vie', 'vie_Hani'} - tgt_constituents: {'eng'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/vie-eng/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/vie-eng/opus-2020-06-17.test.txt - src_alpha3: vie - tgt_alpha3: eng - short_pair: vi-en - chrF2_score: 0.608 - bleu: 42.8 - brevity_penalty: 0.955 - ref_len: 20241.0 - src_name: Vietnamese - tgt_name: English - train_date: 2020-06-17 - src_alpha2: vi - tgt_alpha2: en - prefer_old: False - long_pair: vie-eng - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
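For completeness, a minimal Vietnamese-to-English usage sketch with the standard transformers translation pipeline (the example sentence and output are illustrative):

```python
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-vi-en")
result = translator("Tôi thích học ngôn ngữ mới.")
print(result[0]["translation_text"])  # e.g. "I like learning new languages."
```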
beomi/Llama-3-Open-Ko-8B
beomi
"2024-05-20T05:31:06Z"
43,423
89
transformers
[ "transformers", "safetensors", "llama", "text-generation", "facebook", "meta", "pytorch", "llama-3", "llama-3-ko", "conversational", "en", "ko", "arxiv:2310.04799", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-22T09:53:13Z"
--- language: - en - ko license: other tags: - facebook - meta - pytorch - llama - llama-3 - llama-3-ko pipeline_tag: text-generation license_name: llama3 license_link: LICENSE --- > Update @ 2024.05.20: Re-Upload RoPE fixed model > Update @ 2024.05.01: Pre-Release [Llama-3-KoEn-8B](https://huggingface.co/beomi/Llama-3-KoEn-8B-preview) model & [Llama-3-KoEn-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-KoEn-8B-Instruct-preview) > Update @ 2024.04.24: Release Llama-3-Open-Ko-8B model & [Llama-3-Open-Ko-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-Open-Ko-8B-Instruct-preview) ## Model Details **Llama-3-Open-Ko-8B** The Llama-3-Open-Ko-8B model is a continually pretrained language model based on Llama-3-8B. This model is trained fully on publicly available resources, with 60GB+ of deduplicated texts. With the new Llama-3 tokenizer, pretraining was conducted on 17.7B+ tokens, slightly more than the same corpus yields with the Korean tokenizer (the Llama-2-Ko tokenizer). Training was done on a TPUv5e-256, with the warm support of Google's TRC program. **Note for [Llama-3-Open-Ko-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-Open-Ko-8B-Instruct-preview)** Applying the idea from the [Chat Vector paper](https://arxiv.org/abs/2310.04799), I released an instruction model named [Llama-3-Open-Ko-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-Open-Ko-8B-Instruct-preview). It is NOT finetuned with any Korean instruction set (hence `preview`), but it is a great starting point for creating new Chat/Instruct models. **Meta Llama-3** Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety. **Model developers** Junbum Lee (Beomi) **Variations** Llama-3-Open-Ko comes in one size: 8B. **Input** Models input text only. **Output** Models generate text and code only. **Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. <table> <tr> <td> </td> <td><strong>Training Data</strong> </td> <td><strong>Params</strong> </td> <td><strong>Context length</strong> </td> <td><strong>GQA</strong> </td> <td><strong>Token count</strong> </td> <td><strong>Knowledge cutoff</strong> </td> </tr> <tr> <td rowspan="2" >Llama-3-Open-Ko </td> <td rowspan="2" >Same as *Open-Solar-Ko Dataset </td> <td>8B </td> <td>8k </td> <td>Yes </td> <td rowspan="2" >17.7B+ </td> <td>Jun, 2023 </td> </tr> </table> *You can find the dataset list here: https://huggingface.co/beomi/OPEN-SOLAR-KO-10.7B/tree/main/corpus **Model Release Date** 2024.04.24. **Status** This is a static model trained on an offline dataset. **License** Llama3 License: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license) ## Intended Use **Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. **Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. 
Use in languages other than English**. **Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy. ## How to use TBD ### Responsibility & Safety We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community. Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications. Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience. As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started. #### Responsible release In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision. Misuse If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/). ## Ethical Considerations and Limitations The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without insertion of unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress. But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. 
Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/) which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety. Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide) ## Benchmark Scores - vllm (pretrained=beomi/Llama-3-Open-Ko-8B,revision=081e85a,tensor_parallel_size=1,dtype=bfloat16,data_parallel_size=2,gpu_memory_utilization=0.8), gen_kwargs: (None), limit: None, num_fewshot: 5, batch_size: auto | Tasks |Version|Filter|n-shot| Metric |Value | |Stderr| |----------------------------------------------------------|-------|------|-----:|-----------|-----:|---|------| |haerae |N/A |none | 5|acc |0.6801|± |0.0138| | | |none | 5|acc_norm |0.6801|± |0.0138| | - haerae_general_knowledge | 1|none | 5|acc |0.4375|± |0.0375| | | |none | 5|acc_norm |0.4375|± |0.0375| | - haerae_history | 1|none | 5|acc |0.7340|± |0.0323| | | |none | 5|acc_norm |0.7340|± |0.0323| | - haerae_loan_word | 1|none | 5|acc |0.7870|± |0.0316| | | |none | 5|acc_norm |0.7870|± |0.0316| | - haerae_rare_word | 1|none | 5|acc |0.7012|± |0.0228| | | |none | 5|acc_norm |0.7012|± |0.0228| | - haerae_standard_nomenclature | 1|none | 5|acc |0.7190|± |0.0365| | | |none | 5|acc_norm |0.7190|± |0.0365| |kmmlu_direct |N/A |none | 5|exact_match|0.4054|± |0.0026| | - kmmlu_direct_accounting | 2|none | 5|exact_match|0.3600|± |0.0482| | - kmmlu_direct_agricultural_sciences | 2|none | 5|exact_match|0.3130|± |0.0147| | - kmmlu_direct_aviation_engineering_and_maintenance | 2|none | 5|exact_match|0.3690|± |0.0153| | - kmmlu_direct_biology | 2|none | 5|exact_match|0.3330|± |0.0149| | - kmmlu_direct_chemical_engineering | 2|none | 5|exact_match|0.4190|± |0.0156| | - kmmlu_direct_chemistry | 2|none | 5|exact_match|0.3833|± |0.0199| | - kmmlu_direct_civil_engineering | 2|none | 5|exact_match|0.3870|± |0.0154| | - kmmlu_direct_computer_science | 2|none | 5|exact_match|0.6340|± |0.0152| | - kmmlu_direct_construction | 2|none | 5|exact_match|0.3340|± |0.0149| | - kmmlu_direct_criminal_law | 2|none | 5|exact_match|0.2850|± |0.0320| | - kmmlu_direct_ecology | 2|none | 5|exact_match|0.4210|± |0.0156| | - kmmlu_direct_economics | 2|none | 5|exact_match|0.4077|± |0.0433| | - kmmlu_direct_education | 2|none | 5|exact_match|0.5000|± |0.0503| | - kmmlu_direct_electrical_engineering | 2|none | 5|exact_match|0.3620|± |0.0152| | - kmmlu_direct_electronics_engineering | 2|none | 5|exact_match|0.4790|± |0.0158| | - kmmlu_direct_energy_management | 2|none | 5|exact_match|0.3110|± |0.0146| | - kmmlu_direct_environmental_science | 2|none | 5|exact_match|0.3210|± |0.0148| | - kmmlu_direct_fashion | 2|none | 5|exact_match|0.4190|± |0.0156| | - kmmlu_direct_food_processing | 2|none | 5|exact_match|0.3600|± |0.0152| | - kmmlu_direct_gas_technology_and_engineering | 2|none | 5|exact_match|0.3320|± |0.0149| | - kmmlu_direct_geomatics | 2|none | 5|exact_match|0.3640|± |0.0152| | - kmmlu_direct_health | 2|none | 5|exact_match|0.5100|± |0.0502| | - kmmlu_direct_industrial_engineer | 2|none | 
5|exact_match|0.3970|± |0.0155| | - kmmlu_direct_information_technology | 2|none | 5|exact_match|0.5720|± |0.0157| | - kmmlu_direct_interior_architecture_and_design | 2|none | 5|exact_match|0.4740|± |0.0158| | - kmmlu_direct_korean_history | 2|none | 5|exact_match|0.2700|± |0.0446| | - kmmlu_direct_law | 2|none | 5|exact_match|0.3990|± |0.0155| | - kmmlu_direct_machine_design_and_manufacturing | 2|none | 5|exact_match|0.4080|± |0.0155| | - kmmlu_direct_management | 2|none | 5|exact_match|0.4660|± |0.0158| | - kmmlu_direct_maritime_engineering | 2|none | 5|exact_match|0.4417|± |0.0203| | - kmmlu_direct_marketing | 2|none | 5|exact_match|0.6720|± |0.0149| | - kmmlu_direct_materials_engineering | 2|none | 5|exact_match|0.4130|± |0.0156| | - kmmlu_direct_math | 2|none | 5|exact_match|0.2567|± |0.0253| | - kmmlu_direct_mechanical_engineering | 2|none | 5|exact_match|0.3800|± |0.0154| | - kmmlu_direct_nondestructive_testing | 2|none | 5|exact_match|0.3890|± |0.0154| | - kmmlu_direct_patent | 2|none | 5|exact_match|0.2700|± |0.0446| | - kmmlu_direct_political_science_and_sociology | 2|none | 5|exact_match|0.4433|± |0.0287| | - kmmlu_direct_psychology | 2|none | 5|exact_match|0.3620|± |0.0152| | - kmmlu_direct_public_safety | 2|none | 5|exact_match|0.3200|± |0.0148| | - kmmlu_direct_railway_and_automotive_engineering | 2|none | 5|exact_match|0.3200|± |0.0148| | - kmmlu_direct_real_estate | 2|none | 5|exact_match|0.3650|± |0.0341| | - kmmlu_direct_refrigerating_machinery | 2|none | 5|exact_match|0.3210|± |0.0148| | - kmmlu_direct_social_welfare | 2|none | 5|exact_match|0.4500|± |0.0157| | - kmmlu_direct_taxation | 2|none | 5|exact_match|0.3550|± |0.0339| | - kmmlu_direct_telecommunications_and_wireless_technology| 2|none | 5|exact_match|0.5490|± |0.0157| |kobest_boolq | 1|none | 5|acc |0.7984|± |0.0107| | | |none | 5|f1 |0.7961|± |N/A | |kobest_copa | 1|none | 5|acc |0.8150|± |0.0123| | | |none | 5|f1 |0.8148|± |N/A | |kobest_hellaswag | 1|none | 5|acc |0.4800|± |0.0224| | | |none | 5|f1 |0.4771|± |N/A | | | |none | 5|acc_norm |0.6120|± |0.0218| |kobest_sentineg | 1|none | 5|acc |0.9597|± |0.0099| | | |none | 5|f1 |0.9597|± |N/A | |haerae |N/A |none | 5|acc |0.6801|± |0.0138| | | |none | 5|acc_norm |0.6801|± |0.0138| |kmmlu_direct|N/A |none | 5|exact_match|0.4054|± |0.0026| ## Citation instructions **Llama-3-Open-Ko** ``` @article{llama3openko, title={Llama-3-Open-Ko}, author={L, Junbum}, year={2024}, url={https://huggingface.co/beomi/Llama-3-Open-Ko-8B} } ``` **Original Llama-3** ``` @article{llama3modelcard, title={Llama 3 Model Card}, author={AI@Meta}, year={2024}, url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md} } ```
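Since the card leaves its usage section as TBD, here is a minimal, untested sketch assuming the standard transformers text-generation API; the prompt and generation settings are illustrative only:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "beomi/Llama-3-Open-Ko-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# This is the base (not instruction-tuned) model, so plain continuation works best.
inputs = tokenizer("대한민국의 수도는", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```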
bartowski/L3-Aethora-15B-V2-GGUF
bartowski
"2024-06-27T12:23:22Z"
43,204
3
transformers
[ "transformers", "gguf", "text-generation", "en", "dataset:TheSkullery/Aether-Lite-v1.8.1", "base_model:elinas/Llama-3-15B-Instruct-zeroed", "license:cc-by-sa-4.0", "endpoints_compatible", "region:us" ]
text-generation
"2024-06-27T11:41:33Z"
--- license: cc-by-sa-4.0 datasets: - TheSkullery/Aether-Lite-v1.8.1 language: - en base_model: - elinas/Llama-3-15B-Instruct-zeroed library_name: transformers quantized_by: bartowski pipeline_tag: text-generation --- ## Llamacpp imatrix Quantizations of L3-Aethora-15B-V2 Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3197">b3197</a> for quantization. Original model: https://huggingface.co/ZeusLabs/L3-Aethora-15B-V2 All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8) ## Prompt format ``` <|begin_of_text|><|start_header_id|>system<|end_header_id|> {system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|> {prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|> ``` ## Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Description | | -------- | ---------- | --------- | ----------- | | [L3-Aethora-15B-V2-Q8_0_L.gguf](https://huggingface.co/bartowski/L3-Aethora-15B-V2-GGUF/blob/main/L3-Aethora-15B-V2-Q8_1.gguf) | Q8_0_L | 16.94GB | *Experimental*, uses f16 for embed and output weights. Please provide any feedback of differences. Extremely high quality, generally unneeded but max available quant. | | [L3-Aethora-15B-V2-Q8_0.gguf](https://huggingface.co/bartowski/L3-Aethora-15B-V2-GGUF/blob/main/L3-Aethora-15B-V2-Q8_0.gguf) | Q8_0 | 15.95GB | Extremely high quality, generally unneeded but max available quant. | | [L3-Aethora-15B-V2-Q6_K_L.gguf](https://huggingface.co/bartowski/L3-Aethora-15B-V2-GGUF/blob/main/L3-Aethora-15B-V2-Q6_K_L.gguf) | Q6_K_L | 13.56GB | *Experimental*, uses f16 for embed and output weights. Please provide any feedback of differences. Very high quality, near perfect, *recommended*. | | [L3-Aethora-15B-V2-Q6_K.gguf](https://huggingface.co/bartowski/L3-Aethora-15B-V2-GGUF/blob/main/L3-Aethora-15B-V2-Q6_K.gguf) | Q6_K | 12.32GB | Very high quality, near perfect, *recommended*. | | [L3-Aethora-15B-V2-Q5_K_L.gguf](https://huggingface.co/bartowski/L3-Aethora-15B-V2-GGUF/blob/main/L3-Aethora-15B-V2-Q5_K_L.gguf) | Q5_K_L | 11.97GB | *Experimental*, uses f16 for embed and output weights. Please provide any feedback of differences. High quality, *recommended*. | | [L3-Aethora-15B-V2-Q5_K_M.gguf](https://huggingface.co/bartowski/L3-Aethora-15B-V2-GGUF/blob/main/L3-Aethora-15B-V2-Q5_K_M.gguf) | Q5_K_M | 10.66GB | High quality, *recommended*. | | [L3-Aethora-15B-V2-Q5_K_S.gguf](https://huggingface.co/bartowski/L3-Aethora-15B-V2-GGUF/blob/main/L3-Aethora-15B-V2-Q5_K_S.gguf) | Q5_K_S | 10.39GB | High quality, *recommended*. | | [L3-Aethora-15B-V2-Q4_K_L.gguf](https://huggingface.co/bartowski/L3-Aethora-15B-V2-GGUF/blob/main/L3-Aethora-15B-V2-Q4_K_L.gguf) | Q4_K_L | 10.48GB | *Experimental*, uses f16 for embed and output weights. Please provide any feedback of differences. Good quality, uses about 4.83 bits per weight, *recommended*. | | [L3-Aethora-15B-V2-Q4_K_M.gguf](https://huggingface.co/bartowski/L3-Aethora-15B-V2-GGUF/blob/main/L3-Aethora-15B-V2-Q4_K_M.gguf) | Q4_K_M | 9.10GB | Good quality, uses about 4.83 bits per weight, *recommended*. | | [L3-Aethora-15B-V2-Q4_K_S.gguf](https://huggingface.co/bartowski/L3-Aethora-15B-V2-GGUF/blob/main/L3-Aethora-15B-V2-Q4_K_S.gguf) | Q4_K_S | 8.64GB | Slightly lower quality with more space savings, *recommended*. 
| | [L3-Aethora-15B-V2-IQ4_XS.gguf](https://huggingface.co/bartowski/L3-Aethora-15B-V2-GGUF/blob/main/L3-Aethora-15B-V2-IQ4_XS.gguf) | IQ4_XS | 8.17GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. | | [L3-Aethora-15B-V2-Q3_K_XL.gguf](https://huggingface.co/bartowski/L3-Aethora-15B-V2-GGUF/blob/main/L3-Aethora-15B-V2-Q3_K_XL.gguf) | Q3_K_XL | | *Experimental*, uses f16 for embed and output weights. Please provide any feedback of differences. Lower quality but usable, good for low RAM availability. | | [L3-Aethora-15B-V2-Q3_K_L.gguf](https://huggingface.co/bartowski/L3-Aethora-15B-V2-GGUF/blob/main/L3-Aethora-15B-V2-Q3_K_L.gguf) | Q3_K_L | 7.97GB | Lower quality but usable, good for low RAM availability. | | [L3-Aethora-15B-V2-Q3_K_M.gguf](https://huggingface.co/bartowski/L3-Aethora-15B-V2-GGUF/blob/main/L3-Aethora-15B-V2-Q3_K_M.gguf) | Q3_K_M | 7.37GB | Even lower quality. | | [L3-Aethora-15B-V2-IQ3_M.gguf](https://huggingface.co/bartowski/L3-Aethora-15B-V2-GGUF/blob/main/L3-Aethora-15B-V2-IQ3_M.gguf) | IQ3_M | 6.90GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. | | [L3-Aethora-15B-V2-Q3_K_S.gguf](https://huggingface.co/bartowski/L3-Aethora-15B-V2-GGUF/blob/main/L3-Aethora-15B-V2-Q3_K_S.gguf) | Q3_K_S | 6.66GB | Low quality, not recommended. | | [L3-Aethora-15B-V2-IQ3_XS.gguf](https://huggingface.co/bartowski/L3-Aethora-15B-V2-GGUF/blob/main/L3-Aethora-15B-V2-IQ3_XS.gguf) | IQ3_XS | 6.37GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. | | [L3-Aethora-15B-V2-IQ3_XXS.gguf](https://huggingface.co/bartowski/L3-Aethora-15B-V2-GGUF/blob/main/L3-Aethora-15B-V2-IQ3_XXS.gguf) | IQ3_XXS | 5.95GB | Lower quality, new method with decent performance, comparable to Q3 quants. | | [L3-Aethora-15B-V2-Q2_K.gguf](https://huggingface.co/bartowski/L3-Aethora-15B-V2-GGUF/blob/main/L3-Aethora-15B-V2-Q2_K.gguf) | Q2_K | 5.74GB | Very low quality but surprisingly usable. | | [L3-Aethora-15B-V2-IQ2_M.gguf](https://huggingface.co/bartowski/L3-Aethora-15B-V2-GGUF/blob/main/L3-Aethora-15B-V2-IQ2_M.gguf) | IQ2_M | 5.30GB | Very low quality, uses SOTA techniques to also be surprisingly usable. | | [L3-Aethora-15B-V2-IQ2_S.gguf](https://huggingface.co/bartowski/L3-Aethora-15B-V2-GGUF/blob/main/L3-Aethora-15B-V2-IQ2_S.gguf) | IQ2_S | 4.92GB | Very low quality, uses SOTA techniques to be usable. | | [L3-Aethora-15B-V2-IQ2_XS.gguf](https://huggingface.co/bartowski/L3-Aethora-15B-V2-GGUF/blob/main/L3-Aethora-15B-V2-IQ2_XS.gguf) | IQ2_XS | 4.67GB | Very low quality, uses SOTA techniques to be usable. | ## Downloading using huggingface-cli First, make sure you have huggingface-cli installed: ``` pip install -U "huggingface_hub[cli]" ``` Then, you can target the specific file you want: ``` huggingface-cli download bartowski/L3-Aethora-15B-V2-GGUF --include "L3-Aethora-15B-V2-Q4_K_M.gguf" --local-dir ./ ``` If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run: ``` huggingface-cli download bartowski/L3-Aethora-15B-V2-GGUF --include "L3-Aethora-15B-V2-Q8_0.gguf/*" --local-dir L3-Aethora-15B-V2-Q8_0 ``` You can either specify a new local-dir (L3-Aethora-15B-V2-Q8_0) or download them all in place (./) ## Which file should I choose? A great write-up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) The first thing to figure out is how big a model you can run. 
To do this, you'll need to figure out how much RAM and/or VRAM you have. If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM. If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total. (For example, on a 12GB card the Q5_K_M file at 10.66GB is a comfortable fit.) Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'. If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M. If you want to get more into the weeds, you can check out this extremely useful feature chart: [llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix) But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size. These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide. The I-quants are *not* compatible with Vulkan, which also supports AMD cards, so if you have an AMD card double check if you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm. Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
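Once you have picked and downloaded a file, a minimal sketch for running it locally with the llama-cpp-python bindings follows; the file name, context size, and sampling settings are illustrative, and the prompt string simply follows the format documented above:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="L3-Aethora-15B-V2-Q4_K_M.gguf",
    n_ctx=8192,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU when possible
)

prompt = (
    "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
    "You are a helpful assistant.<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
    "Summarize what GGUF quantization is in two sentences.<|eot_id|>"
    "<|start_header_id|>assistant<|end_header_id|>\n\n"
)
out = llm(prompt, max_tokens=128, stop=["<|eot_id|>"])
print(out["choices"][0]["text"])
```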
avsolatorio/GIST-large-Embedding-v0
avsolatorio
"2024-02-28T00:34:23Z"
43,185
10
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "mteb", "sentence-similarity", "en", "arxiv:2402.16829", "arxiv:2212.09741", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "text-embeddings-inference", "region:us" ]
sentence-similarity
"2024-02-14T18:26:25Z"
--- language: - en library_name: sentence-transformers license: mit pipeline_tag: sentence-similarity tags: - feature-extraction - mteb - sentence-similarity - sentence-transformers model-index: - name: GIST-large-Embedding-v0 results: - task: type: Classification dataset: type: mteb/amazon_counterfactual name: MTEB AmazonCounterfactualClassification (en) config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 75.5820895522388 - type: ap value: 38.32190121241783 - type: f1 value: 69.44777155231054 - task: type: Classification dataset: type: mteb/amazon_polarity name: MTEB AmazonPolarityClassification config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 93.40514999999998 - type: ap value: 90.2011565132406 - type: f1 value: 93.39486246843605 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (en) config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 49.05999999999999 - type: f1 value: 48.58702718571088 - task: type: Retrieval dataset: type: arguana name: MTEB ArguAna config: default split: test revision: None metrics: - type: map_at_1 value: 38.407000000000004 - type: map_at_10 value: 54.822 - type: map_at_100 value: 55.387 - type: map_at_1000 value: 55.388999999999996 - type: map_at_3 value: 50.308 - type: map_at_5 value: 53.199 - type: mrr_at_1 value: 39.900000000000006 - type: mrr_at_10 value: 55.385 - type: mrr_at_100 value: 55.936 - type: mrr_at_1000 value: 55.93900000000001 - type: mrr_at_3 value: 50.853 - type: mrr_at_5 value: 53.738 - type: ndcg_at_1 value: 38.407000000000004 - type: ndcg_at_10 value: 63.38 - type: ndcg_at_100 value: 65.52900000000001 - type: ndcg_at_1000 value: 65.58800000000001 - type: ndcg_at_3 value: 54.26 - type: ndcg_at_5 value: 59.488 - type: precision_at_1 value: 38.407000000000004 - type: precision_at_10 value: 9.04 - type: precision_at_100 value: 0.992 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 21.906 - type: precision_at_5 value: 15.690000000000001 - type: recall_at_1 value: 38.407000000000004 - type: recall_at_10 value: 90.398 - type: recall_at_100 value: 99.21799999999999 - type: recall_at_1000 value: 99.644 - type: recall_at_3 value: 65.718 - type: recall_at_5 value: 78.45 - task: type: Clustering dataset: type: mteb/arxiv-clustering-p2p name: MTEB ArxivClusteringP2P config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 48.49766333679089 - task: type: Clustering dataset: type: mteb/arxiv-clustering-s2s name: MTEB ArxivClusteringS2S config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 42.57731111438094 - task: type: Reranking dataset: type: mteb/askubuntudupquestions-reranking name: MTEB AskUbuntuDupQuestions config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 64.70120072857361 - type: mrr value: 77.86714593501297 - task: type: STS dataset: type: mteb/biosses-sts name: MTEB BIOSSES config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 90.73821860690765 - type: cos_sim_spearman value: 89.17070651383446 - type: euclidean_pearson value: 88.28303958293029 - type: euclidean_spearman value: 88.81889126856979 - type: manhattan_pearson value: 88.09080621828731 - type: manhattan_spearman 
value: 88.55924679817751 - task: type: Classification dataset: type: mteb/banking77 name: MTEB Banking77Classification config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 88.10064935064933 - type: f1 value: 88.08460758973867 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-p2p name: MTEB BiorxivClusteringP2P config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 39.338228337929976 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-s2s name: MTEB BiorxivClusteringS2S config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 36.179156232378226 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackAndroidRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 33.440999999999995 - type: map_at_10 value: 45.495000000000005 - type: map_at_100 value: 47.132000000000005 - type: map_at_1000 value: 47.253 - type: map_at_3 value: 41.766 - type: map_at_5 value: 43.873 - type: mrr_at_1 value: 40.772999999999996 - type: mrr_at_10 value: 51.627 - type: mrr_at_100 value: 52.364 - type: mrr_at_1000 value: 52.397000000000006 - type: mrr_at_3 value: 48.951 - type: mrr_at_5 value: 50.746 - type: ndcg_at_1 value: 40.772999999999996 - type: ndcg_at_10 value: 52.306 - type: ndcg_at_100 value: 57.753 - type: ndcg_at_1000 value: 59.36900000000001 - type: ndcg_at_3 value: 47.177 - type: ndcg_at_5 value: 49.71 - type: precision_at_1 value: 40.772999999999996 - type: precision_at_10 value: 10.129000000000001 - type: precision_at_100 value: 1.617 - type: precision_at_1000 value: 0.208 - type: precision_at_3 value: 22.985 - type: precision_at_5 value: 16.652 - type: recall_at_1 value: 33.440999999999995 - type: recall_at_10 value: 65.121 - type: recall_at_100 value: 87.55199999999999 - type: recall_at_1000 value: 97.41300000000001 - type: recall_at_3 value: 49.958999999999996 - type: recall_at_5 value: 57.14900000000001 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackEnglishRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 32.126 - type: map_at_10 value: 42.856 - type: map_at_100 value: 44.134 - type: map_at_1000 value: 44.274 - type: map_at_3 value: 39.594 - type: map_at_5 value: 41.504999999999995 - type: mrr_at_1 value: 40.127 - type: mrr_at_10 value: 48.736000000000004 - type: mrr_at_100 value: 49.303999999999995 - type: mrr_at_1000 value: 49.356 - type: mrr_at_3 value: 46.263 - type: mrr_at_5 value: 47.878 - type: ndcg_at_1 value: 40.127 - type: ndcg_at_10 value: 48.695 - type: ndcg_at_100 value: 52.846000000000004 - type: ndcg_at_1000 value: 54.964 - type: ndcg_at_3 value: 44.275 - type: ndcg_at_5 value: 46.54 - type: precision_at_1 value: 40.127 - type: precision_at_10 value: 9.229 - type: precision_at_100 value: 1.473 - type: precision_at_1000 value: 0.19499999999999998 - type: precision_at_3 value: 21.444 - type: precision_at_5 value: 15.389 - type: recall_at_1 value: 32.126 - type: recall_at_10 value: 58.971 - type: recall_at_100 value: 76.115 - type: recall_at_1000 value: 89.556 - type: recall_at_3 value: 45.891 - type: recall_at_5 value: 52.242 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGamingRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 41.312 - type: map_at_10 value: 54.510000000000005 - type: map_at_100 value: 
55.544000000000004 - type: map_at_1000 value: 55.593 - type: map_at_3 value: 50.859 - type: map_at_5 value: 52.839999999999996 - type: mrr_at_1 value: 47.147 - type: mrr_at_10 value: 57.678 - type: mrr_at_100 value: 58.287 - type: mrr_at_1000 value: 58.312 - type: mrr_at_3 value: 55.025999999999996 - type: mrr_at_5 value: 56.55 - type: ndcg_at_1 value: 47.147 - type: ndcg_at_10 value: 60.672000000000004 - type: ndcg_at_100 value: 64.411 - type: ndcg_at_1000 value: 65.35499999999999 - type: ndcg_at_3 value: 54.643 - type: ndcg_at_5 value: 57.461 - type: precision_at_1 value: 47.147 - type: precision_at_10 value: 9.881 - type: precision_at_100 value: 1.27 - type: precision_at_1000 value: 0.13799999999999998 - type: precision_at_3 value: 24.556 - type: precision_at_5 value: 16.814999999999998 - type: recall_at_1 value: 41.312 - type: recall_at_10 value: 75.62299999999999 - type: recall_at_100 value: 91.388 - type: recall_at_1000 value: 98.08 - type: recall_at_3 value: 59.40299999999999 - type: recall_at_5 value: 66.43900000000001 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGisRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 27.609 - type: map_at_10 value: 37.614 - type: map_at_100 value: 38.584 - type: map_at_1000 value: 38.652 - type: map_at_3 value: 34.731 - type: map_at_5 value: 36.308 - type: mrr_at_1 value: 29.944 - type: mrr_at_10 value: 39.829 - type: mrr_at_100 value: 40.659 - type: mrr_at_1000 value: 40.709 - type: mrr_at_3 value: 37.269000000000005 - type: mrr_at_5 value: 38.625 - type: ndcg_at_1 value: 29.944 - type: ndcg_at_10 value: 43.082 - type: ndcg_at_100 value: 47.857 - type: ndcg_at_1000 value: 49.612 - type: ndcg_at_3 value: 37.578 - type: ndcg_at_5 value: 40.135 - type: precision_at_1 value: 29.944 - type: precision_at_10 value: 6.678000000000001 - type: precision_at_100 value: 0.951 - type: precision_at_1000 value: 0.11399999999999999 - type: precision_at_3 value: 16.045 - type: precision_at_5 value: 11.073 - type: recall_at_1 value: 27.609 - type: recall_at_10 value: 57.718 - type: recall_at_100 value: 79.768 - type: recall_at_1000 value: 92.868 - type: recall_at_3 value: 42.876 - type: recall_at_5 value: 49.104 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackMathematicaRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 18.071 - type: map_at_10 value: 27.471 - type: map_at_100 value: 28.71 - type: map_at_1000 value: 28.833 - type: map_at_3 value: 24.698 - type: map_at_5 value: 26.461000000000002 - type: mrr_at_1 value: 22.387999999999998 - type: mrr_at_10 value: 32.522 - type: mrr_at_100 value: 33.393 - type: mrr_at_1000 value: 33.455 - type: mrr_at_3 value: 29.830000000000002 - type: mrr_at_5 value: 31.472 - type: ndcg_at_1 value: 22.387999999999998 - type: ndcg_at_10 value: 33.278999999999996 - type: ndcg_at_100 value: 39.043 - type: ndcg_at_1000 value: 41.763 - type: ndcg_at_3 value: 28.310999999999996 - type: ndcg_at_5 value: 31.007 - type: precision_at_1 value: 22.387999999999998 - type: precision_at_10 value: 6.157 - type: precision_at_100 value: 1.042 - type: precision_at_1000 value: 0.14200000000000002 - type: precision_at_3 value: 13.972000000000001 - type: precision_at_5 value: 10.274 - type: recall_at_1 value: 18.071 - type: recall_at_10 value: 46.025 - type: recall_at_100 value: 71.153 - type: recall_at_1000 value: 90.232 - type: recall_at_3 value: 32.311 - type: recall_at_5 value: 39.296 - task: type: Retrieval dataset: 
type: BeIR/cqadupstack name: MTEB CQADupstackPhysicsRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 30.813000000000002 - type: map_at_10 value: 42.594 - type: map_at_100 value: 43.949 - type: map_at_1000 value: 44.052 - type: map_at_3 value: 39.1 - type: map_at_5 value: 41.111 - type: mrr_at_1 value: 37.824999999999996 - type: mrr_at_10 value: 48.06 - type: mrr_at_100 value: 48.91 - type: mrr_at_1000 value: 48.946 - type: mrr_at_3 value: 45.509 - type: mrr_at_5 value: 47.073 - type: ndcg_at_1 value: 37.824999999999996 - type: ndcg_at_10 value: 48.882 - type: ndcg_at_100 value: 54.330999999999996 - type: ndcg_at_1000 value: 56.120999999999995 - type: ndcg_at_3 value: 43.529 - type: ndcg_at_5 value: 46.217999999999996 - type: precision_at_1 value: 37.824999999999996 - type: precision_at_10 value: 8.845 - type: precision_at_100 value: 1.34 - type: precision_at_1000 value: 0.168 - type: precision_at_3 value: 20.757 - type: precision_at_5 value: 14.802999999999999 - type: recall_at_1 value: 30.813000000000002 - type: recall_at_10 value: 61.895999999999994 - type: recall_at_100 value: 84.513 - type: recall_at_1000 value: 95.817 - type: recall_at_3 value: 47.099000000000004 - type: recall_at_5 value: 54.031 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackProgrammersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 25.735999999999997 - type: map_at_10 value: 36.799 - type: map_at_100 value: 38.246 - type: map_at_1000 value: 38.353 - type: map_at_3 value: 33.133 - type: map_at_5 value: 34.954 - type: mrr_at_1 value: 31.849 - type: mrr_at_10 value: 41.928 - type: mrr_at_100 value: 42.846000000000004 - type: mrr_at_1000 value: 42.894 - type: mrr_at_3 value: 39.117000000000004 - type: mrr_at_5 value: 40.521 - type: ndcg_at_1 value: 31.849 - type: ndcg_at_10 value: 43.143 - type: ndcg_at_100 value: 48.963 - type: ndcg_at_1000 value: 51.041000000000004 - type: ndcg_at_3 value: 37.218 - type: ndcg_at_5 value: 39.542 - type: precision_at_1 value: 31.849 - type: precision_at_10 value: 8.231 - type: precision_at_100 value: 1.277 - type: precision_at_1000 value: 0.164 - type: precision_at_3 value: 18.037 - type: precision_at_5 value: 12.945 - type: recall_at_1 value: 25.735999999999997 - type: recall_at_10 value: 56.735 - type: recall_at_100 value: 81.04 - type: recall_at_1000 value: 94.845 - type: recall_at_3 value: 40.239999999999995 - type: recall_at_5 value: 46.378 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 27.580333333333336 - type: map_at_10 value: 37.70558333333334 - type: map_at_100 value: 38.94941666666667 - type: map_at_1000 value: 39.062083333333334 - type: map_at_3 value: 34.63333333333334 - type: map_at_5 value: 36.35241666666666 - type: mrr_at_1 value: 32.64866666666667 - type: mrr_at_10 value: 42.018499999999996 - type: mrr_at_100 value: 42.83391666666666 - type: mrr_at_1000 value: 42.884166666666665 - type: mrr_at_3 value: 39.476499999999994 - type: mrr_at_5 value: 40.96983333333334 - type: ndcg_at_1 value: 32.64866666666667 - type: ndcg_at_10 value: 43.43866666666667 - type: ndcg_at_100 value: 48.569833333333335 - type: ndcg_at_1000 value: 50.6495 - type: ndcg_at_3 value: 38.327166666666656 - type: ndcg_at_5 value: 40.76941666666667 - type: precision_at_1 value: 32.64866666666667 - type: precision_at_10 value: 7.652333333333332 - type: precision_at_100 value: 
1.2066666666666666 - type: precision_at_1000 value: 0.15841666666666668 - type: precision_at_3 value: 17.75108333333333 - type: precision_at_5 value: 12.641916666666669 - type: recall_at_1 value: 27.580333333333336 - type: recall_at_10 value: 56.02591666666667 - type: recall_at_100 value: 78.317 - type: recall_at_1000 value: 92.52608333333332 - type: recall_at_3 value: 41.84283333333333 - type: recall_at_5 value: 48.105666666666664 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackStatsRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 27.876 - type: map_at_10 value: 34.521 - type: map_at_100 value: 35.581 - type: map_at_1000 value: 35.674 - type: map_at_3 value: 32.501000000000005 - type: map_at_5 value: 33.602 - type: mrr_at_1 value: 31.441999999999997 - type: mrr_at_10 value: 37.669999999999995 - type: mrr_at_100 value: 38.523 - type: mrr_at_1000 value: 38.59 - type: mrr_at_3 value: 35.762 - type: mrr_at_5 value: 36.812 - type: ndcg_at_1 value: 31.441999999999997 - type: ndcg_at_10 value: 38.46 - type: ndcg_at_100 value: 43.479 - type: ndcg_at_1000 value: 45.858 - type: ndcg_at_3 value: 34.668 - type: ndcg_at_5 value: 36.416 - type: precision_at_1 value: 31.441999999999997 - type: precision_at_10 value: 5.782 - type: precision_at_100 value: 0.91 - type: precision_at_1000 value: 0.11900000000000001 - type: precision_at_3 value: 14.417 - type: precision_at_5 value: 9.876999999999999 - type: recall_at_1 value: 27.876 - type: recall_at_10 value: 47.556 - type: recall_at_100 value: 70.39699999999999 - type: recall_at_1000 value: 87.969 - type: recall_at_3 value: 37.226 - type: recall_at_5 value: 41.43 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackTexRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 18.854000000000003 - type: map_at_10 value: 26.632 - type: map_at_100 value: 27.849 - type: map_at_1000 value: 27.977 - type: map_at_3 value: 24.089 - type: map_at_5 value: 25.477 - type: mrr_at_1 value: 22.987 - type: mrr_at_10 value: 30.781999999999996 - type: mrr_at_100 value: 31.746000000000002 - type: mrr_at_1000 value: 31.818 - type: mrr_at_3 value: 28.43 - type: mrr_at_5 value: 29.791 - type: ndcg_at_1 value: 22.987 - type: ndcg_at_10 value: 31.585 - type: ndcg_at_100 value: 37.32 - type: ndcg_at_1000 value: 40.072 - type: ndcg_at_3 value: 27.058 - type: ndcg_at_5 value: 29.137999999999998 - type: precision_at_1 value: 22.987 - type: precision_at_10 value: 5.76 - type: precision_at_100 value: 1.018 - type: precision_at_1000 value: 0.14400000000000002 - type: precision_at_3 value: 12.767000000000001 - type: precision_at_5 value: 9.257 - type: recall_at_1 value: 18.854000000000003 - type: recall_at_10 value: 42.349 - type: recall_at_100 value: 68.15299999999999 - type: recall_at_1000 value: 87.44 - type: recall_at_3 value: 29.715999999999998 - type: recall_at_5 value: 35.085 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackUnixRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 28.094 - type: map_at_10 value: 38.22 - type: map_at_100 value: 39.352 - type: map_at_1000 value: 39.452 - type: map_at_3 value: 35.339 - type: map_at_5 value: 36.78 - type: mrr_at_1 value: 33.022 - type: mrr_at_10 value: 42.466 - type: mrr_at_100 value: 43.3 - type: mrr_at_1000 value: 43.356 - type: mrr_at_3 value: 40.159 - type: mrr_at_5 value: 41.272999999999996 - type: ndcg_at_1 value: 33.022 - type: ndcg_at_10 value: 43.976 
- type: ndcg_at_100 value: 49.008 - type: ndcg_at_1000 value: 51.154999999999994 - type: ndcg_at_3 value: 38.891 - type: ndcg_at_5 value: 40.897 - type: precision_at_1 value: 33.022 - type: precision_at_10 value: 7.396999999999999 - type: precision_at_100 value: 1.1199999999999999 - type: precision_at_1000 value: 0.14200000000000002 - type: precision_at_3 value: 17.724 - type: precision_at_5 value: 12.239 - type: recall_at_1 value: 28.094 - type: recall_at_10 value: 57.162 - type: recall_at_100 value: 78.636 - type: recall_at_1000 value: 93.376 - type: recall_at_3 value: 43.328 - type: recall_at_5 value: 48.252 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWebmastersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 24.937 - type: map_at_10 value: 34.82 - type: map_at_100 value: 36.405 - type: map_at_1000 value: 36.626 - type: map_at_3 value: 31.548 - type: map_at_5 value: 33.355000000000004 - type: mrr_at_1 value: 30.435000000000002 - type: mrr_at_10 value: 39.946 - type: mrr_at_100 value: 40.873 - type: mrr_at_1000 value: 40.910000000000004 - type: mrr_at_3 value: 37.088 - type: mrr_at_5 value: 38.808 - type: ndcg_at_1 value: 30.435000000000002 - type: ndcg_at_10 value: 41.25 - type: ndcg_at_100 value: 47.229 - type: ndcg_at_1000 value: 49.395 - type: ndcg_at_3 value: 35.801 - type: ndcg_at_5 value: 38.457 - type: precision_at_1 value: 30.435000000000002 - type: precision_at_10 value: 8.083 - type: precision_at_100 value: 1.601 - type: precision_at_1000 value: 0.247 - type: precision_at_3 value: 17.061999999999998 - type: precision_at_5 value: 12.767000000000001 - type: recall_at_1 value: 24.937 - type: recall_at_10 value: 53.905 - type: recall_at_100 value: 80.607 - type: recall_at_1000 value: 93.728 - type: recall_at_3 value: 38.446000000000005 - type: recall_at_5 value: 45.188 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWordpressRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 22.095000000000002 - type: map_at_10 value: 30.935000000000002 - type: map_at_100 value: 31.907000000000004 - type: map_at_1000 value: 32.006 - type: map_at_3 value: 28.242 - type: map_at_5 value: 29.963 - type: mrr_at_1 value: 23.845 - type: mrr_at_10 value: 32.978 - type: mrr_at_100 value: 33.802 - type: mrr_at_1000 value: 33.867000000000004 - type: mrr_at_3 value: 30.314000000000004 - type: mrr_at_5 value: 32.089 - type: ndcg_at_1 value: 23.845 - type: ndcg_at_10 value: 35.934 - type: ndcg_at_100 value: 40.598 - type: ndcg_at_1000 value: 43.089 - type: ndcg_at_3 value: 30.776999999999997 - type: ndcg_at_5 value: 33.711999999999996 - type: precision_at_1 value: 23.845 - type: precision_at_10 value: 5.656 - type: precision_at_100 value: 0.861 - type: precision_at_1000 value: 0.12 - type: precision_at_3 value: 13.247 - type: precision_at_5 value: 9.612 - type: recall_at_1 value: 22.095000000000002 - type: recall_at_10 value: 49.25 - type: recall_at_100 value: 70.482 - type: recall_at_1000 value: 88.98899999999999 - type: recall_at_3 value: 35.619 - type: recall_at_5 value: 42.674 - task: type: Retrieval dataset: type: climate-fever name: MTEB ClimateFEVER config: default split: test revision: None metrics: - type: map_at_1 value: 14.154 - type: map_at_10 value: 24.654999999999998 - type: map_at_100 value: 26.723999999999997 - type: map_at_1000 value: 26.912000000000003 - type: map_at_3 value: 20.4 - type: map_at_5 value: 22.477 - type: mrr_at_1 value: 32.117000000000004 - 
type: mrr_at_10 value: 44.590999999999994 - type: mrr_at_100 value: 45.425 - type: mrr_at_1000 value: 45.456 - type: mrr_at_3 value: 41.281 - type: mrr_at_5 value: 43.219 - type: ndcg_at_1 value: 32.117000000000004 - type: ndcg_at_10 value: 33.994 - type: ndcg_at_100 value: 41.438 - type: ndcg_at_1000 value: 44.611000000000004 - type: ndcg_at_3 value: 27.816000000000003 - type: ndcg_at_5 value: 29.816 - type: precision_at_1 value: 32.117000000000004 - type: precision_at_10 value: 10.756 - type: precision_at_100 value: 1.8679999999999999 - type: precision_at_1000 value: 0.246 - type: precision_at_3 value: 20.803 - type: precision_at_5 value: 15.987000000000002 - type: recall_at_1 value: 14.154 - type: recall_at_10 value: 40.489999999999995 - type: recall_at_100 value: 65.635 - type: recall_at_1000 value: 83.276 - type: recall_at_3 value: 25.241000000000003 - type: recall_at_5 value: 31.211 - task: type: Retrieval dataset: type: dbpedia-entity name: MTEB DBPedia config: default split: test revision: None metrics: - type: map_at_1 value: 9.332 - type: map_at_10 value: 20.462 - type: map_at_100 value: 29.473 - type: map_at_1000 value: 31.215 - type: map_at_3 value: 14.466999999999999 - type: map_at_5 value: 16.922 - type: mrr_at_1 value: 69.5 - type: mrr_at_10 value: 77.039 - type: mrr_at_100 value: 77.265 - type: mrr_at_1000 value: 77.271 - type: mrr_at_3 value: 75.5 - type: mrr_at_5 value: 76.4 - type: ndcg_at_1 value: 57.125 - type: ndcg_at_10 value: 42.958 - type: ndcg_at_100 value: 48.396 - type: ndcg_at_1000 value: 55.897 - type: ndcg_at_3 value: 47.188 - type: ndcg_at_5 value: 44.376 - type: precision_at_1 value: 69.5 - type: precision_at_10 value: 34.5 - type: precision_at_100 value: 11.18 - type: precision_at_1000 value: 2.13 - type: precision_at_3 value: 51.083 - type: precision_at_5 value: 43.1 - type: recall_at_1 value: 9.332 - type: recall_at_10 value: 26.422 - type: recall_at_100 value: 56.098000000000006 - type: recall_at_1000 value: 79.66 - type: recall_at_3 value: 15.703 - type: recall_at_5 value: 19.644000000000002 - task: type: Classification dataset: type: mteb/emotion name: MTEB EmotionClassification config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 54.72 - type: f1 value: 49.67819606587526 - task: type: Retrieval dataset: type: fever name: MTEB FEVER config: default split: test revision: None metrics: - type: map_at_1 value: 74.97 - type: map_at_10 value: 82.956 - type: map_at_100 value: 83.193 - type: map_at_1000 value: 83.208 - type: map_at_3 value: 81.837 - type: map_at_5 value: 82.57 - type: mrr_at_1 value: 80.783 - type: mrr_at_10 value: 87.546 - type: mrr_at_100 value: 87.627 - type: mrr_at_1000 value: 87.63 - type: mrr_at_3 value: 86.79400000000001 - type: mrr_at_5 value: 87.32799999999999 - type: ndcg_at_1 value: 80.783 - type: ndcg_at_10 value: 86.54899999999999 - type: ndcg_at_100 value: 87.355 - type: ndcg_at_1000 value: 87.629 - type: ndcg_at_3 value: 84.82 - type: ndcg_at_5 value: 85.83800000000001 - type: precision_at_1 value: 80.783 - type: precision_at_10 value: 10.327 - type: precision_at_100 value: 1.094 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 32.218 - type: precision_at_5 value: 20.012 - type: recall_at_1 value: 74.97 - type: recall_at_10 value: 93.072 - type: recall_at_100 value: 96.218 - type: recall_at_1000 value: 97.991 - type: recall_at_3 value: 88.357 - type: recall_at_5 value: 90.983 - task: type: Retrieval dataset: type: fiqa name: MTEB 
FiQA2018 config: default split: test revision: None metrics: - type: map_at_1 value: 21.12 - type: map_at_10 value: 35.908 - type: map_at_100 value: 37.895 - type: map_at_1000 value: 38.068000000000005 - type: map_at_3 value: 31.189 - type: map_at_5 value: 33.908 - type: mrr_at_1 value: 42.901 - type: mrr_at_10 value: 52.578 - type: mrr_at_100 value: 53.308 - type: mrr_at_1000 value: 53.342 - type: mrr_at_3 value: 50.385999999999996 - type: mrr_at_5 value: 51.62799999999999 - type: ndcg_at_1 value: 42.901 - type: ndcg_at_10 value: 44.302 - type: ndcg_at_100 value: 51.132999999999996 - type: ndcg_at_1000 value: 53.848 - type: ndcg_at_3 value: 40.464 - type: ndcg_at_5 value: 41.743 - type: precision_at_1 value: 42.901 - type: precision_at_10 value: 12.423 - type: precision_at_100 value: 1.968 - type: precision_at_1000 value: 0.246 - type: precision_at_3 value: 27.622999999999998 - type: precision_at_5 value: 20.278 - type: recall_at_1 value: 21.12 - type: recall_at_10 value: 52.091 - type: recall_at_100 value: 77.062 - type: recall_at_1000 value: 93.082 - type: recall_at_3 value: 37.223 - type: recall_at_5 value: 43.826 - task: type: Retrieval dataset: type: hotpotqa name: MTEB HotpotQA config: default split: test revision: None metrics: - type: map_at_1 value: 38.940000000000005 - type: map_at_10 value: 62.239999999999995 - type: map_at_100 value: 63.141000000000005 - type: map_at_1000 value: 63.205999999999996 - type: map_at_3 value: 58.738 - type: map_at_5 value: 60.924 - type: mrr_at_1 value: 77.88000000000001 - type: mrr_at_10 value: 83.7 - type: mrr_at_100 value: 83.882 - type: mrr_at_1000 value: 83.889 - type: mrr_at_3 value: 82.748 - type: mrr_at_5 value: 83.381 - type: ndcg_at_1 value: 77.88000000000001 - type: ndcg_at_10 value: 70.462 - type: ndcg_at_100 value: 73.564 - type: ndcg_at_1000 value: 74.78099999999999 - type: ndcg_at_3 value: 65.524 - type: ndcg_at_5 value: 68.282 - type: precision_at_1 value: 77.88000000000001 - type: precision_at_10 value: 14.81 - type: precision_at_100 value: 1.7229999999999999 - type: precision_at_1000 value: 0.188 - type: precision_at_3 value: 42.083999999999996 - type: precision_at_5 value: 27.43 - type: recall_at_1 value: 38.940000000000005 - type: recall_at_10 value: 74.051 - type: recall_at_100 value: 86.158 - type: recall_at_1000 value: 94.146 - type: recall_at_3 value: 63.126000000000005 - type: recall_at_5 value: 68.575 - task: type: Classification dataset: type: mteb/imdb name: MTEB ImdbClassification config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 91.23440000000001 - type: ap value: 87.33490392265892 - type: f1 value: 91.21374626021836 - task: type: Retrieval dataset: type: msmarco name: MTEB MSMARCO config: default split: dev revision: None metrics: - type: map_at_1 value: 22.137999999999998 - type: map_at_10 value: 34.471000000000004 - type: map_at_100 value: 35.634 - type: map_at_1000 value: 35.685 - type: map_at_3 value: 30.587999999999997 - type: map_at_5 value: 32.812999999999995 - type: mrr_at_1 value: 22.736 - type: mrr_at_10 value: 35.092 - type: mrr_at_100 value: 36.193999999999996 - type: mrr_at_1000 value: 36.238 - type: mrr_at_3 value: 31.28 - type: mrr_at_5 value: 33.498 - type: ndcg_at_1 value: 22.736 - type: ndcg_at_10 value: 41.388999999999996 - type: ndcg_at_100 value: 46.967999999999996 - type: ndcg_at_1000 value: 48.178 - type: ndcg_at_3 value: 33.503 - type: ndcg_at_5 value: 37.484 - type: precision_at_1 value: 22.736 - type: precision_at_10 value: 6.54 - 
type: precision_at_100 value: 0.9339999999999999 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 14.249999999999998 - type: precision_at_5 value: 10.562000000000001 - type: recall_at_1 value: 22.137999999999998 - type: recall_at_10 value: 62.629999999999995 - type: recall_at_100 value: 88.375 - type: recall_at_1000 value: 97.529 - type: recall_at_3 value: 41.245 - type: recall_at_5 value: 50.808 - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (en) config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 95.25079799361606 - type: f1 value: 95.00726023695032 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (en) config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 78.23757409940721 - type: f1 value: 58.534958803195714 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (en) config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 76.20040349697378 - type: f1 value: 74.31261149784696 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (en) config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 79.35104236718227 - type: f1 value: 79.7373049864316 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-p2p name: MTEB MedrxivClusteringP2P config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 34.478828180753126 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-s2s name: MTEB MedrxivClusteringS2S config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 32.25696147904426 - task: type: Reranking dataset: type: mteb/mind_small name: MTEB MindSmallReranking config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 32.82488548405117 - type: mrr value: 34.066706809031096 - task: type: Retrieval dataset: type: nfcorpus name: MTEB NFCorpus config: default split: test revision: None metrics: - type: map_at_1 value: 6.557 - type: map_at_10 value: 15.055 - type: map_at_100 value: 19.575 - type: map_at_1000 value: 21.267 - type: map_at_3 value: 10.86 - type: map_at_5 value: 12.83 - type: mrr_at_1 value: 50.464 - type: mrr_at_10 value: 59.050999999999995 - type: mrr_at_100 value: 59.436 - type: mrr_at_1000 value: 59.476 - type: mrr_at_3 value: 56.811 - type: mrr_at_5 value: 58.08 - type: ndcg_at_1 value: 47.988 - type: ndcg_at_10 value: 38.645 - type: ndcg_at_100 value: 36.339 - type: ndcg_at_1000 value: 45.279 - type: ndcg_at_3 value: 43.35 - type: ndcg_at_5 value: 41.564 - type: precision_at_1 value: 49.845 - type: precision_at_10 value: 28.544999999999998 - type: precision_at_100 value: 9.322 - type: precision_at_1000 value: 2.258 - type: precision_at_3 value: 40.144000000000005 - type: precision_at_5 value: 35.913000000000004 - type: recall_at_1 value: 6.557 - type: recall_at_10 value: 19.5 - type: recall_at_100 value: 37.153999999999996 - type: recall_at_1000 value: 69.581 - type: recall_at_3 value: 12.133 - type: recall_at_5 value: 15.43 - task: type: Retrieval dataset: type: nq name: MTEB NQ config: default split: test revision: None metrics: - type: map_at_1 value: 31.740000000000002 - 
type: map_at_10 value: 48.150999999999996 - type: map_at_100 value: 49.125 - type: map_at_1000 value: 49.149 - type: map_at_3 value: 43.645 - type: map_at_5 value: 46.417 - type: mrr_at_1 value: 35.892 - type: mrr_at_10 value: 50.524 - type: mrr_at_100 value: 51.232 - type: mrr_at_1000 value: 51.24999999999999 - type: mrr_at_3 value: 46.852 - type: mrr_at_5 value: 49.146 - type: ndcg_at_1 value: 35.892 - type: ndcg_at_10 value: 56.08800000000001 - type: ndcg_at_100 value: 60.077000000000005 - type: ndcg_at_1000 value: 60.632 - type: ndcg_at_3 value: 47.765 - type: ndcg_at_5 value: 52.322 - type: precision_at_1 value: 35.892 - type: precision_at_10 value: 9.296 - type: precision_at_100 value: 1.154 - type: precision_at_1000 value: 0.12 - type: precision_at_3 value: 21.92 - type: precision_at_5 value: 15.781999999999998 - type: recall_at_1 value: 31.740000000000002 - type: recall_at_10 value: 77.725 - type: recall_at_100 value: 94.841 - type: recall_at_1000 value: 99.003 - type: recall_at_3 value: 56.407 - type: recall_at_5 value: 66.848 - task: type: Retrieval dataset: type: quora name: MTEB QuoraRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 71.429 - type: map_at_10 value: 85.42699999999999 - type: map_at_100 value: 86.063 - type: map_at_1000 value: 86.077 - type: map_at_3 value: 82.573 - type: map_at_5 value: 84.371 - type: mrr_at_1 value: 82.34 - type: mrr_at_10 value: 88.247 - type: mrr_at_100 value: 88.357 - type: mrr_at_1000 value: 88.357 - type: mrr_at_3 value: 87.38 - type: mrr_at_5 value: 87.981 - type: ndcg_at_1 value: 82.34 - type: ndcg_at_10 value: 88.979 - type: ndcg_at_100 value: 90.18599999999999 - type: ndcg_at_1000 value: 90.254 - type: ndcg_at_3 value: 86.378 - type: ndcg_at_5 value: 87.821 - type: precision_at_1 value: 82.34 - type: precision_at_10 value: 13.482 - type: precision_at_100 value: 1.537 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 37.852999999999994 - type: precision_at_5 value: 24.798000000000002 - type: recall_at_1 value: 71.429 - type: recall_at_10 value: 95.64099999999999 - type: recall_at_100 value: 99.723 - type: recall_at_1000 value: 99.98 - type: recall_at_3 value: 88.011 - type: recall_at_5 value: 92.246 - task: type: Clustering dataset: type: mteb/reddit-clustering name: MTEB RedditClustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 60.62148584103299 - task: type: Clustering dataset: type: mteb/reddit-clustering-p2p name: MTEB RedditClusteringP2P config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 63.2923987272903 - task: type: Retrieval dataset: type: scidocs name: MTEB SCIDOCS config: default split: test revision: None metrics: - type: map_at_1 value: 5.128 - type: map_at_10 value: 14.63 - type: map_at_100 value: 17.285 - type: map_at_1000 value: 17.676 - type: map_at_3 value: 9.993 - type: map_at_5 value: 12.286999999999999 - type: mrr_at_1 value: 25.4 - type: mrr_at_10 value: 38.423 - type: mrr_at_100 value: 39.497 - type: mrr_at_1000 value: 39.531 - type: mrr_at_3 value: 34.9 - type: mrr_at_5 value: 37.01 - type: ndcg_at_1 value: 25.4 - type: ndcg_at_10 value: 24.062 - type: ndcg_at_100 value: 33.823 - type: ndcg_at_1000 value: 39.663 - type: ndcg_at_3 value: 22.246 - type: ndcg_at_5 value: 19.761 - type: precision_at_1 value: 25.4 - type: precision_at_10 value: 12.85 - type: precision_at_100 value: 2.71 - type: precision_at_1000 value: 
0.41000000000000003 - type: precision_at_3 value: 21.4 - type: precision_at_5 value: 17.86 - type: recall_at_1 value: 5.128 - type: recall_at_10 value: 26.06 - type: recall_at_100 value: 54.993 - type: recall_at_1000 value: 83.165 - type: recall_at_3 value: 13.003 - type: recall_at_5 value: 18.117 - task: type: STS dataset: type: mteb/sickr-sts name: MTEB SICK-R config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 87.5466779326323 - type: cos_sim_spearman value: 82.79782085421951 - type: euclidean_pearson value: 84.76929982677339 - type: euclidean_spearman value: 82.51802536005597 - type: manhattan_pearson value: 84.76736312526177 - type: manhattan_spearman value: 82.50799656335593 - task: type: STS dataset: type: mteb/sts12-sts name: MTEB STS12 config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 86.40486308108694 - type: cos_sim_spearman value: 77.12670500926937 - type: euclidean_pearson value: 85.23836845503847 - type: euclidean_spearman value: 78.41475117006176 - type: manhattan_pearson value: 85.24302039610805 - type: manhattan_spearman value: 78.4053162562707 - task: type: STS dataset: type: mteb/sts13-sts name: MTEB STS13 config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 88.83570289087565 - type: cos_sim_spearman value: 89.28563503553643 - type: euclidean_pearson value: 87.77516003996445 - type: euclidean_spearman value: 88.8656149534085 - type: manhattan_pearson value: 87.75568872417946 - type: manhattan_spearman value: 88.80445489340585 - task: type: STS dataset: type: mteb/sts14-sts name: MTEB STS14 config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 86.776406555485 - type: cos_sim_spearman value: 83.8288465070091 - type: euclidean_pearson value: 85.37827999808123 - type: euclidean_spearman value: 84.11079529992739 - type: manhattan_pearson value: 85.35336495689121 - type: manhattan_spearman value: 84.08618492649347 - task: type: STS dataset: type: mteb/sts15-sts name: MTEB STS15 config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 88.57644404820684 - type: cos_sim_spearman value: 89.69728364350713 - type: euclidean_pearson value: 88.28202320389443 - type: euclidean_spearman value: 88.9560567319321 - type: manhattan_pearson value: 88.29461100044172 - type: manhattan_spearman value: 88.96030920678558 - task: type: STS dataset: type: mteb/sts16-sts name: MTEB STS16 config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 85.05211938460621 - type: cos_sim_spearman value: 86.43413865667489 - type: euclidean_pearson value: 85.62760689259562 - type: euclidean_spearman value: 86.28867831982394 - type: manhattan_pearson value: 85.60828879163458 - type: manhattan_spearman value: 86.27823731462473 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (en-en) config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 90.00254140466377 - type: cos_sim_spearman value: 89.66118745178284 - type: euclidean_pearson value: 89.46985446236553 - type: euclidean_spearman value: 88.92649032371526 - type: manhattan_pearson value: 89.49600028180247 - type: manhattan_spearman value: 88.86948431519099 - task: type: STS dataset: type: 
mteb/sts22-crosslingual-sts name: MTEB STS22 (en) config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 68.93578321067938 - type: cos_sim_spearman value: 69.60639595839257 - type: euclidean_pearson value: 70.33485090574897 - type: euclidean_spearman value: 69.03380379185452 - type: manhattan_pearson value: 70.42097254943839 - type: manhattan_spearman value: 69.25296348304255 - task: type: STS dataset: type: mteb/stsbenchmark-sts name: MTEB STSBenchmark config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 87.29588700755069 - type: cos_sim_spearman value: 88.30389489193672 - type: euclidean_pearson value: 87.60349838180346 - type: euclidean_spearman value: 87.91041868311692 - type: manhattan_pearson value: 87.59373630607907 - type: manhattan_spearman value: 87.88690174001724 - task: type: Reranking dataset: type: mteb/scidocs-reranking name: MTEB SciDocsRR config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 87.8030655700857 - type: mrr value: 96.3950637234951 - task: type: Retrieval dataset: type: scifact name: MTEB SciFact config: default split: test revision: None metrics: - type: map_at_1 value: 60.028000000000006 - type: map_at_10 value: 69.855 - type: map_at_100 value: 70.257 - type: map_at_1000 value: 70.283 - type: map_at_3 value: 66.769 - type: map_at_5 value: 68.679 - type: mrr_at_1 value: 62.666999999999994 - type: mrr_at_10 value: 70.717 - type: mrr_at_100 value: 71.00800000000001 - type: mrr_at_1000 value: 71.033 - type: mrr_at_3 value: 68.389 - type: mrr_at_5 value: 69.939 - type: ndcg_at_1 value: 62.666999999999994 - type: ndcg_at_10 value: 74.715 - type: ndcg_at_100 value: 76.364 - type: ndcg_at_1000 value: 76.89399999999999 - type: ndcg_at_3 value: 69.383 - type: ndcg_at_5 value: 72.322 - type: precision_at_1 value: 62.666999999999994 - type: precision_at_10 value: 10.067 - type: precision_at_100 value: 1.09 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 27.111 - type: precision_at_5 value: 18.267 - type: recall_at_1 value: 60.028000000000006 - type: recall_at_10 value: 88.822 - type: recall_at_100 value: 96.167 - type: recall_at_1000 value: 100.0 - type: recall_at_3 value: 74.367 - type: recall_at_5 value: 81.661 - task: type: PairClassification dataset: type: mteb/sprintduplicatequestions-pairclassification name: MTEB SprintDuplicateQuestions config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.84554455445544 - type: cos_sim_ap value: 96.54482863244152 - type: cos_sim_f1 value: 92.13709677419355 - type: cos_sim_precision value: 92.88617886178862 - type: cos_sim_recall value: 91.4 - type: dot_accuracy value: 99.76039603960396 - type: dot_ap value: 93.20115278887057 - type: dot_f1 value: 87.92079207920793 - type: dot_precision value: 87.05882352941177 - type: dot_recall value: 88.8 - type: euclidean_accuracy value: 99.84950495049505 - type: euclidean_ap value: 96.53268343961348 - type: euclidean_f1 value: 92.23697650663942 - type: euclidean_precision value: 94.258872651357 - type: euclidean_recall value: 90.3 - type: manhattan_accuracy value: 99.85346534653465 - type: manhattan_ap value: 96.54495433438355 - type: manhattan_f1 value: 92.51012145748987 - type: manhattan_precision value: 93.64754098360656 - type: manhattan_recall value: 91.4 - type: max_accuracy value: 99.85346534653465 - type: 
max_ap value: 96.54495433438355 - type: max_f1 value: 92.51012145748987 - task: type: Clustering dataset: type: mteb/stackexchange-clustering name: MTEB StackExchangeClustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 66.46940443952006 - task: type: Clustering dataset: type: mteb/stackexchange-clustering-p2p name: MTEB StackExchangeClusteringP2P config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 36.396194493841584 - task: type: Reranking dataset: type: mteb/stackoverflowdupquestions-reranking name: MTEB StackOverflowDupQuestions config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 54.881717673695555 - type: mrr value: 55.73439224174519 - task: type: Summarization dataset: type: mteb/summeval name: MTEB SummEval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 31.438177268254087 - type: cos_sim_spearman value: 30.96177698848688 - type: dot_pearson value: 30.513850376431435 - type: dot_spearman value: 29.932421046509706 - task: type: Retrieval dataset: type: trec-covid name: MTEB TRECCOVID config: default split: test revision: None metrics: - type: map_at_1 value: 0.21 - type: map_at_10 value: 1.727 - type: map_at_100 value: 9.881 - type: map_at_1000 value: 24.245 - type: map_at_3 value: 0.615 - type: map_at_5 value: 0.966 - type: mrr_at_1 value: 78.0 - type: mrr_at_10 value: 87.333 - type: mrr_at_100 value: 87.333 - type: mrr_at_1000 value: 87.333 - type: mrr_at_3 value: 86.333 - type: mrr_at_5 value: 87.333 - type: ndcg_at_1 value: 74.0 - type: ndcg_at_10 value: 69.12700000000001 - type: ndcg_at_100 value: 53.893 - type: ndcg_at_1000 value: 49.639 - type: ndcg_at_3 value: 74.654 - type: ndcg_at_5 value: 73.232 - type: precision_at_1 value: 78.0 - type: precision_at_10 value: 72.8 - type: precision_at_100 value: 55.42 - type: precision_at_1000 value: 21.73 - type: precision_at_3 value: 79.333 - type: precision_at_5 value: 77.2 - type: recall_at_1 value: 0.21 - type: recall_at_10 value: 1.9709999999999999 - type: recall_at_100 value: 13.555 - type: recall_at_1000 value: 46.961999999999996 - type: recall_at_3 value: 0.66 - type: recall_at_5 value: 1.052 - task: type: Retrieval dataset: type: webis-touche2020 name: MTEB Touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 2.456 - type: map_at_10 value: 9.426 - type: map_at_100 value: 16.066 - type: map_at_1000 value: 17.652 - type: map_at_3 value: 5.2459999999999996 - type: map_at_5 value: 6.5360000000000005 - type: mrr_at_1 value: 34.694 - type: mrr_at_10 value: 47.666 - type: mrr_at_100 value: 48.681999999999995 - type: mrr_at_1000 value: 48.681999999999995 - type: mrr_at_3 value: 43.878 - type: mrr_at_5 value: 46.224 - type: ndcg_at_1 value: 31.633 - type: ndcg_at_10 value: 23.454 - type: ndcg_at_100 value: 36.616 - type: ndcg_at_1000 value: 48.596000000000004 - type: ndcg_at_3 value: 28.267999999999997 - type: ndcg_at_5 value: 25.630999999999997 - type: precision_at_1 value: 34.694 - type: precision_at_10 value: 20.204 - type: precision_at_100 value: 7.754999999999999 - type: precision_at_1000 value: 1.5709999999999997 - type: precision_at_3 value: 29.252 - type: precision_at_5 value: 24.898 - type: recall_at_1 value: 2.456 - type: recall_at_10 value: 14.951 - type: recall_at_100 value: 48.399 - type: recall_at_1000 value: 85.077 - type: recall_at_3 
value: 6.1370000000000005 - type: recall_at_5 value: 8.671 - task: type: Classification dataset: type: mteb/toxic_conversations_50k name: MTEB ToxicConversationsClassification config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 71.86240000000001 - type: ap value: 14.678570078747494 - type: f1 value: 55.295967793934445 - task: type: Classification dataset: type: mteb/tweet_sentiment_extraction name: MTEB TweetSentimentExtractionClassification config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 59.17374080362195 - type: f1 value: 59.54410874861454 - task: type: Clustering dataset: type: mteb/twentynewsgroups-clustering name: MTEB TwentyNewsgroupsClustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 51.91227822485289 - task: type: PairClassification dataset: type: mteb/twittersemeval2015-pairclassification name: MTEB TwitterSemEval2015 config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 87.12523097097217 - type: cos_sim_ap value: 77.59606075943269 - type: cos_sim_f1 value: 71.11395646606915 - type: cos_sim_precision value: 69.07960199004975 - type: cos_sim_recall value: 73.27176781002639 - type: dot_accuracy value: 84.68736961316088 - type: dot_ap value: 68.47167450741459 - type: dot_f1 value: 64.42152354914874 - type: dot_precision value: 60.887949260042284 - type: dot_recall value: 68.3905013192612 - type: euclidean_accuracy value: 86.88084878106932 - type: euclidean_ap value: 77.27351204978599 - type: euclidean_f1 value: 70.99179716629381 - type: euclidean_precision value: 67.10526315789474 - type: euclidean_recall value: 75.35620052770449 - type: manhattan_accuracy value: 86.83316445133218 - type: manhattan_ap value: 77.21835357308716 - type: manhattan_f1 value: 71.05587004676349 - type: manhattan_precision value: 66.58210332103322 - type: manhattan_recall value: 76.17414248021109 - type: max_accuracy value: 87.12523097097217 - type: max_ap value: 77.59606075943269 - type: max_f1 value: 71.11395646606915 - task: type: PairClassification dataset: type: mteb/twitterurlcorpus-pairclassification name: MTEB TwitterURLCorpus config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 88.97232894787906 - type: cos_sim_ap value: 85.9613736469497 - type: cos_sim_f1 value: 78.40216655382532 - type: cos_sim_precision value: 72.97512437810946 - type: cos_sim_recall value: 84.70126270403449 - type: dot_accuracy value: 88.04866689952264 - type: dot_ap value: 83.15465089499936 - type: dot_f1 value: 76.32698287879329 - type: dot_precision value: 71.23223697378077 - type: dot_recall value: 82.20665229442562 - type: euclidean_accuracy value: 88.67543757519307 - type: euclidean_ap value: 85.4524355531532 - type: euclidean_f1 value: 77.78729106950081 - type: euclidean_precision value: 75.3009009009009 - type: euclidean_recall value: 80.44348629504158 - type: manhattan_accuracy value: 88.65991384328792 - type: manhattan_ap value: 85.43109069046837 - type: manhattan_f1 value: 77.72639551396425 - type: manhattan_precision value: 73.73402417962004 - type: manhattan_recall value: 82.17585463504774 - type: max_accuracy value: 88.97232894787906 - type: max_ap value: 85.9613736469497 - type: max_f1 value: 78.40216655382532 --- <h1 align="center">GIST Large Embedding v0</h1> *GISTEmbed: Guided In-sample 
Selection of Training Negatives for Text Embedding Fine-tuning* The model is fine-tuned on top of the [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) using the [MEDI dataset](https://github.com/xlang-ai/instructor-embedding.git) augmented with mined triplets from the [MTEB Classification](https://huggingface.co/mteb) training dataset (excluding data from the Amazon Polarity Classification task). The model does not require any instruction for generating embeddings. This means that queries for retrieval tasks can be directly encoded without crafting instructions. Technical paper: [GISTEmbed: Guided In-sample Selection of Training Negatives for Text Embedding Fine-tuning](https://arxiv.org/abs/2402.16829) # Data The dataset used is a compilation of the MEDI and MTEB Classification training datasets. Third-party datasets may be subject to additional terms and conditions under their associated licenses. A HuggingFace Dataset version of the compiled dataset, and the specific revision used to train the model, is available: - Dataset: [avsolatorio/medi-data-mteb_avs_triplets](https://huggingface.co/datasets/avsolatorio/medi-data-mteb_avs_triplets) - Revision: 238a0499b6e6b690cc64ea56fde8461daa8341bb The dataset contains a `task_type` key, which can be used to select only the mteb classification tasks (prefixed with `mteb_`). The **MEDI Dataset** is published in the following paper: [One Embedder, Any Task: Instruction-Finetuned Text Embeddings](https://arxiv.org/abs/2212.09741). The MTEB Benchmark results of the GIST embedding model, compared with the base model, suggest that the fine-tuning dataset has perturbed the model considerably, which resulted in significant improvements in certain tasks while adversely degrading performance in some. The retrieval performance for the TRECCOVID task is of note. The fine-tuning dataset does not contain significant knowledge about COVID-19, which could have caused the observed performance degradation. We found some evidence, detailed in the paper, that thematic coverage of the fine-tuning data can affect downstream performance. # Usage The model can be easily loaded using the Sentence Transformers library. ```Python import torch.nn.functional as F from sentence_transformers import SentenceTransformer revision = None # Replace with the specific revision to ensure reproducibility if the model is updated. model = SentenceTransformer("avsolatorio/GIST-large-Embedding-v0", revision=revision) texts = [ "Illustration of the REaLTabFormer model. The left block shows the non-relational tabular data model using GPT-2 with a causal LM head. In contrast, the right block shows how a relational dataset's child table is modeled using a sequence-to-sequence (Seq2Seq) model. The Seq2Seq model uses the observations in the parent table to condition the generation of the observations in the child table. The trained GPT-2 model on the parent table, with weights frozen, is also used as the encoder in the Seq2Seq model.", "Predicting human mobility holds significant practical value, with applications ranging from enhancing disaster risk planning to simulating epidemic spread. In this paper, we present the GeoFormer, a decoder-only transformer model adapted from the GPT architecture to forecast human mobility.", "As the economies of Southeast Asia continue adopting digital technologies, policy makers increasingly ask how to prepare the workforce for emerging labor demands. 
However, little is known about the skills that workers need to adapt to these changes" ] # Compute embeddings embeddings = model.encode(texts, convert_to_tensor=True) # Compute cosine-similarity for each pair of sentences scores = F.cosine_similarity(embeddings.unsqueeze(1), embeddings.unsqueeze(0), dim=-1) print(scores.cpu().numpy()) ``` # Training Parameters Below are the training parameters used to fine-tune the model: ``` Epochs = 40 Warmup ratio = 0.1 Learning rate = 5e-6 Batch size = 16 Checkpoint step = 171000 Contrastive loss temperature = 0.01 ``` # Evaluation The model was evaluated using the [MTEB Evaluation](https://huggingface.co/mteb) suite. # Citation Please cite our work if you use GISTEmbed or the datasets we published in your projects or research. 🤗 ``` @article{solatorio2024gistembed, title={GISTEmbed: Guided In-sample Selection of Training Negatives for Text Embedding Fine-tuning}, author={Aivin V. Solatorio}, journal={arXiv preprint arXiv:2402.16829}, year={2024}, URL={https://arxiv.org/abs/2402.16829}, eprint={2402.16829}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` # Acknowledgements This work is supported by the "KCP IV - Exploring Data Use in the Development Economics Literature using Large Language Models (AI and LLMs)" project funded by the [Knowledge for Change Program (KCP)](https://www.worldbank.org/en/programs/knowledge-for-change) of the World Bank - RA-P503405-RESE-TF0C3444. The findings, interpretations, and conclusions expressed in this material are entirely those of the authors. They do not necessarily represent the views of the International Bank for Reconstruction and Development/World Bank and its affiliated organizations, or those of the Executive Directors of the World Bank or the governments they represent.
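The MTEB scores reported in the metadata above can be reproduced with the `mteb` package. The snippet below is a minimal sketch rather than an official script from the GIST authors; the single STSBenchmark task is only an illustrative choice, and the task-selection API may differ slightly across `mteb` versions.

```python
from mteb import MTEB
from sentence_transformers import SentenceTransformer

# Load the fine-tuned GIST embedding model.
model = SentenceTransformer("avsolatorio/GIST-large-Embedding-v0")

# Run a single STS task as a quick check; pass the full MTEB task list
# to reproduce the complete benchmark reported in the model card.
evaluation = MTEB(tasks=["STSBenchmark"])
evaluation.run(model, output_folder="results/GIST-large-Embedding-v0")
```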
openai-community/roberta-large-openai-detector
openai-community
"2024-04-10T09:56:29Z"
43,168
18
transformers
[ "transformers", "pytorch", "jax", "safetensors", "roberta", "text-classification", "exbert", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1904.09751", "arxiv:1910.09700", "arxiv:1908.09203", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2022-03-02T23:29:04Z"
--- language: en license: mit tags: - exbert datasets: - bookcorpus - wikipedia --- # RoBERTa Large OpenAI Detector ## Table of Contents - [Model Details](#model-details) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-biases) - [Training](#training) - [Evaluation](#evaluation) - [Environmental Impact](#environmental-impact) - [Technical Specifications](#technical-specifications) - [Citation Information](#citation-information) - [Model Card Authors](#model-card-authors) - [How To Get Started With the Model](#how-to-get-started-with-the-model) ## Model Details **Model Description:** RoBERTa large OpenAI Detector is the GPT-2 output detector model, obtained by fine-tuning a RoBERTa large model with the outputs of the 1.5B-parameter GPT-2 model. The model can be used to predict if text was generated by a GPT-2 model. This model was released by OpenAI at the same time as OpenAI released the weights of the [largest GPT-2 model](https://huggingface.co/gpt2-xl), the 1.5B parameter version. - **Developed by:** OpenAI, see [GitHub Repo](https://github.com/openai/gpt-2-output-dataset/tree/master/detector) and [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf) for full author list - **Model Type:** Fine-tuned transformer-based language model - **Language(s):** English - **License:** MIT - **Related Models:** [RoBERTa large](https://huggingface.co/roberta-large), [GPT-XL (1.5B parameter version)](https://huggingface.co/gpt2-xl), [GPT-Large (the 774M parameter version)](https://huggingface.co/gpt2-large), [GPT-Medium (the 355M parameter version)](https://huggingface.co/gpt2-medium) and [GPT-2 (the 124M parameter version)](https://huggingface.co/gpt2) - **Resources for more information:** - [Research Paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf) (see, in particular, the section beginning on page 12 about Automated ML-based detection). - [GitHub Repo](https://github.com/openai/gpt-2-output-dataset/tree/master/detector) - [OpenAI Blog Post](https://openai.com/blog/gpt-2-1-5b-release/) - [Explore the detector model here](https://huggingface.co/openai-detector ) ## Uses #### Direct Use The model is a classifier that can be used to detect text generated by GPT-2 models. #### Downstream Use The model's developers have stated that they developed and released the model to help with research related to synthetic text generation, so the model could potentially be used for downstream tasks related to synthetic text generation. See the [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf) for further discussion. #### Misuse and Out-of-scope Use The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model developers discuss the risk of adversaries using the model to better evade detection in their [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf), suggesting that using the model for evading detection or for supporting efforts to evade detection would be a misuse of the model. ## Risks, Limitations and Biases **CONTENT WARNING: Readers should be aware this section may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.** Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
#### Risks and Limitations In their [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf), the model developers discuss the risk that the model may be used by bad actors to develop capabilities for evading detection, though one purpose of releasing the model is to help improve detection research. In a related [blog post](https://openai.com/blog/gpt-2-1-5b-release/), the model developers also discuss the limitations of automated methods for detecting synthetic text and the need to pair automated detection tools with other, non-automated approaches. They write: > We conducted in-house detection research and developed a detection model that has detection rates of ~95% for detecting 1.5B GPT-2-generated text. We believe this is not high enough accuracy for standalone detection and needs to be paired with metadata-based approaches, human judgment, and public education to be more effective. The model developers also [report](https://openai.com/blog/gpt-2-1-5b-release/) finding that classifying content from larger models is more difficult, suggesting that detection with automated tools like this model will be increasingly difficult as model sizes increase. The authors find that training detector models on the outputs of larger models can improve accuracy and robustness. #### Bias Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by RoBERTa large and GPT-2 1.5B (which this model is built/fine-tuned on) can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups (see the [RoBERTa large](https://huggingface.co/roberta-large) and [GPT-2 XL](https://huggingface.co/gpt2-xl) model cards for more information). The developers of this model discuss these issues further in their [paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf). ## Training #### Training Data The model is a sequence classifier based on RoBERTa large (see the [RoBERTa large model card](https://huggingface.co/roberta-large) for more details on the RoBERTa large training data) and then fine-tuned using the outputs of the 1.5B GPT-2 model (available [here](https://github.com/openai/gpt-2-output-dataset)). #### Training Procedure The model developers write that: > We based a sequence classifier on RoBERTaLARGE (355 million parameters) and fine-tuned it to classify the outputs from the 1.5B GPT-2 model versus WebText, the dataset we used to train the GPT-2 model. They later state: > To develop a robust detector model that can accurately classify generated texts regardless of the sampling method, we performed an analysis of the model’s transfer performance. See the [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf) for further details on the training procedure. ## Evaluation The following evaluation information is extracted from the [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf). #### Testing Data, Factors and Metrics The model is intended to be used for detecting text generated by GPT-2 models, so the model developers test the model on text datasets, measuring accuracy by: > testing 510-token test examples comprised of 5,000 samples from the WebText dataset and 5,000 samples generated by a GPT-2 model, which were not used during the training. 
#### Results The model developers [find](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf): > Our classifier is able to detect 1.5 billion parameter GPT-2-generated text with approximately 95% accuracy...The model’s accuracy depends on sampling methods used when generating outputs, like temperature, Top-K, and nucleus sampling ([Holtzman et al., 2019](https://arxiv.org/abs/1904.09751)). Nucleus sampling outputs proved most difficult to correctly classify, but a detector trained using nucleus sampling transfers well across other sampling methods. As seen in Figure 1 [in the paper], we found consistently high accuracy when trained on nucleus sampling. See the [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf), Figure 1 (on page 14) and Figure 2 (on page 16) for full results. ## Environmental Impact Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** Unknown - **Hours used:** Unknown - **Cloud Provider:** Unknown - **Compute Region:** Unknown - **Carbon Emitted:** Unknown ## Technical Specifications See the [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf) for further details on the modeling architecture and training details. ## Citation Information ```bibtex @article{solaiman2019release, title={Release strategies and the social impacts of language models}, author={Solaiman, Irene and Brundage, Miles and Clark, Jack and Askell, Amanda and Herbert-Voss, Ariel and Wu, Jeff and Radford, Alec and Krueger, Gretchen and Kim, Jong Wook and Kreps, Sarah and others}, journal={arXiv preprint arXiv:1908.09203}, year={2019} } ``` APA: - Solaiman, I., Brundage, M., Clark, J., Askell, A., Herbert-Voss, A., Wu, J., ... & Wang, J. (2019). Release strategies and the social impacts of language models. arXiv preprint arXiv:1908.09203. https://huggingface.co/papers/1908.09203 ## Model Card Authors This model card was written by the team at Hugging Face. ## How to Get Started with the Model More information needed
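In lieu of an official snippet, the checkpoint can typically be loaded as a standard `text-classification` pipeline in the Transformers library. The sketch below is illustrative only; the exact label names returned depend on the checkpoint's `id2label` mapping.

```python
from transformers import pipeline

# Load the detector as an ordinary sequence-classification pipeline.
detector = pipeline(
    "text-classification",
    model="openai-community/roberta-large-openai-detector",
)

# Score a passage; labels follow the checkpoint's id2label mapping.
result = detector("The quick brown fox jumps over the lazy dog.")
print(result)
```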
Yntec/HyperRealism
Yntec
"2024-04-13T02:44:45Z"
43,140
4
diffusers
[ "diffusers", "safetensors", "Photorealistic", "Analog", "Female", "alexds9", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2024-04-13T02:00:55Z"
--- license: creativeml-openrail-m library_name: diffusers pipeline_tag: text-to-image tags: - Photorealistic - Analog - Female - alexds9 - stable-diffusion - stable-diffusion-diffusers - diffusers - text-to-image --- # Hyper Realism 1.2 Original page: https://civitai.com/models/158959?modelVersionId=178706 ![Free AI image generator Hyper Realism Samples](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/81542ke1L3FQ4TzRgHhKr.png) (Click for larger) Top left: iphone, Professional fine details photo of pretty cute little girl from kazan, tatarstan kid in the postsoviet suburbia, tatar, detailed photo, beautiful eyes. instagram, portrait Top right: analog style 70s color photograph of young jean claude van damme in Double Impact, star wars behind the scenes Bottom left: Hyperrealistic 1990 movie screenshot Santa Claus with wife and daughter enjoying wine with candles. sitting with a pretty cute little girl, Closeup Faces, Gift Birthday Theme by Gil_Elvgren and Haddon_Sundblom Bottom right: analog style 70s color movie still of beautiful face, young pretty Audrey Hepburn voluptuous at a neon convenience storefront
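The repository is tagged as loadable with the standard `StableDiffusionPipeline`, so a minimal Diffusers sketch along the following lines should work; the prompt and sampler settings are illustrative and not settings recommended by the author.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the checkpoint with the standard Stable Diffusion pipeline.
pipe = StableDiffusionPipeline.from_pretrained(
    "Yntec/HyperRealism",
    torch_dtype=torch.float16,
).to("cuda")

# Prompt adapted from the sample prompts listed above.
prompt = "analog style 70s color photograph of a neon convenience storefront"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.0).images[0]
image.save("hyper_realism_sample.png")
```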
openchat/openchat-3.5-1210
openchat
"2024-05-18T18:10:44Z"
42,986
276
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "openchat", "C-RLFT", "conversational", "dataset:openchat/openchat_sharegpt4_dataset", "dataset:kaist-ai/Feedback-Collection", "dataset:imone/OpenOrca_FLAN", "dataset:LDJnr/Capybara", "dataset:tiedong/goat", "dataset:glaiveai/glaive-code-assistant", "dataset:meta-math/MetaMathQA", "dataset:OpenAssistant/oasst_top1_2023-08-25", "dataset:TIGER-Lab/MathInstruct", "arxiv:2309.11235", "arxiv:2303.08774", "base_model:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-12T07:40:14Z"
--- license: apache-2.0 base_model: mistralai/Mistral-7B-v0.1 tags: - openchat - mistral - C-RLFT datasets: - openchat/openchat_sharegpt4_dataset - kaist-ai/Feedback-Collection - imone/OpenOrca_FLAN - LDJnr/Capybara - tiedong/goat - glaiveai/glaive-code-assistant - meta-math/MetaMathQA - OpenAssistant/oasst_top1_2023-08-25 - TIGER-Lab/MathInstruct library_name: transformers pipeline_tag: text-generation --- <div align="center"> <img src="https://raw.githubusercontent.com/imoneoi/openchat/master/assets/logo_new.png" style="width: 65%"> <h1>Advancing Open-source Language Models with Mixed-Quality Data</h1> </div> <p align="center" style="margin-top: 0px;"> <a href="https://openchat.team"> <img src="https://github.com/alpayariyak/openchat/blob/master/assets/logo_nobg.png?raw=true" alt="OpenChat Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 10px; margin-top: 0px; margin-bottom: 0px;"/> <span class="link-text" style=" margin-right: 5px;">Online Demo</span> </a> | <a href="https://github.com/imoneoi/openchat"> <img src="https://camo.githubusercontent.com/4133dc1cd4511d4a292b84ce10e52e4ed92569fb2a8165381c9c47be5edc2796/68747470733a2f2f6564656e742e6769746875622e696f2f537570657254696e7949636f6e732f696d616765732f706e672f6769746875622e706e67" alt="GitHub Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/> <span class="link-text" style=" margin-right: 5px;">GitHub</span> </a> | <a href="https://arxiv.org/pdf/2309.11235.pdf"> <img src="https://github.com/alpayariyak/openchat/blob/master/assets/arxiv-logomark-small-square-border.png?raw=true" alt="ArXiv Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/> <span class="link-text" style="margin-right: 5px;">Paper</span> </a> | <a href="https://discord.gg/pQjnXvNKHY"> <img src="https://cloud.githubusercontent.com/assets/6291467/26705903/96c2d66e-477c-11e7-9f4e-f3c0efe96c9a.png" alt="Discord Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/> <span class="link-text">Discord</span> </a> </p> <p align="center" style="margin-top: 0px;"> <span class="link-text" style=" margin-right: 0px; font-size: 0.8em">Sponsored by RunPod</span> <img src="https://styles.redditmedia.com/t5_6075m3/styles/profileIcon_71syco7c5lt81.png?width=256&height=256&frame=1&auto=webp&crop=256:256,smart&s=24bd3c71dc11edc5d4f88d0cbc1da72ed7ae1969" alt="RunPod Logo" style="width:30px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/> </p> <div style="background-color: white; padding: 0.7em; border-radius: 0.5em; color: black; display: flex; flex-direction: column; justify-content: center; text-align: center; ont-size: 0.5em; border: 0.8em solid #3c72db;"> <a href="https://huggingface.co/openchat/openchat_3.5" style="text-decoration: none; color: black;"> <span style="font-size: 1.7em; font-family: 'Helvetica'; letter-spacing: 0.1em; font-weight: bold; color: black;">OPENCHAT</span><span style="font-size: 1.8em; font-family: 'Helvetica'; color: #3c72db; ">3.5</span> <span style="font-size: 0.7em; font-family: 'Helvetica'; color: white; vertical-align: top; background-color:red; border-radius: 6em; padding: 0.066em 0.4em; letter-spacing: 0.1em; font-weight: bold;">1210</span> 
<span style="font-size: 0.85em; font-family: 'Helvetica'; color: black;"> <br> 🏆 The Overall Best Performing Open Source 7B Model 🏆 <br> 🤖 Outperforms <span style="font-weight: bold;">ChatGPT</span> (March) and <span style="font-weight: bold;">Grok-1</span> 🤖 <br> 🚀<span style="font-size: 1em; font-family: 'Helvetica'; color: black; font-weight: bold;">15</span>-point improvement in Coding over <span style="font-size: 0.9em; font-family: 'Helvetica'; color: black; font-weight: bold;">OpenChat-3.5🚀</span> <br><br><span style="font-size: 1em; font-family: 'Helvetica'; color: #3c72db; font-weight: bold;">New Features</span> <br> 💡 2 Modes: Coding + Generalist, Mathematical Reasoning 💡 <br> 🧑‍⚖️ Experimental support for Evaluator and Feedback capabilities 🧑‍⚖️ </span> </a> </div> <div style="display: flex; justify-content: center; align-items: center"> <img src="https://github.com/alpayariyak/openchat/blob/master/assets/1210bench.png?raw=true" style="width: 100%; border-radius: 1em"> </div> <div> <h3> Table of Contents</h3> </div> 1. [Usage](#usage) 2. [Benchmarks](#benchmarks) 3. [Limitations](#limitations) 4. [License](#license) 5. [Dataset Details](#dataset-details) 6. [Citation](#citation) 7. [Acknowledgements](#acknowledgements) <div align="center"> <h2> Usage </h2> </div> To use this model, we highly recommend installing the OpenChat package by following the [installation guide](https://github.com/imoneoi/openchat#installation) in our repository and using the OpenChat OpenAI-compatible API server by running the serving command from the table below. The server is optimized for high-throughput deployment using [vLLM](https://github.com/vllm-project/vllm) and can run on a consumer GPU with 24GB RAM. To enable tensor parallelism, append `--tensor-parallel-size N` to the serving command. Once started, the server listens at `localhost:18888` for requests and is compatible with the [OpenAI ChatCompletion API specifications](https://platform.openai.com/docs/api-reference/chat). Please refer to the example request below for reference. Additionally, you can use the [OpenChat Web UI](https://github.com/imoneoi/openchat#web-ui) for a user-friendly experience. If you want to deploy the server as an online service, you can use `--api-keys sk-KEY1 sk-KEY2 ...` to specify allowed API keys and `--disable-log-requests --disable-log-stats --log-file openchat.log` for logging only to a file. For security purposes, we recommend using an [HTTPS gateway](https://fastapi.tiangolo.com/es/deployment/concepts/#security-https) in front of the server. | Model | Size | Context | Weights | Serving | |-------------------|------|---------|------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------| | OpenChat 3.5 1210 | 7B | 8192 | [Huggingface](https://huggingface.co/openchat/openchat-3.5-1210) | `python -m ochat.serving.openai_api_server --model openchat/openchat-3.5-1210 --engine-use-ray --worker-use-ray` | <details> <summary>Example request (click to expand)</summary> 💡 **Default Mode (GPT4 Correct)**: Best for coding, chat and general tasks ```bash curl http://localhost:18888/v1/chat/completions \ -H "Content-Type: application/json" \ -d '{ "model": "openchat_3.5", "messages": [{"role": "user", "content": "You are a large language model named OpenChat. 
Write a poem to describe yourself"}] }' ``` 🧮 **Mathematical Reasoning Mode**: Tailored for solving math problems ```bash curl http://localhost:18888/v1/chat/completions \ -H "Content-Type: application/json" \ -d '{ "model": "openchat_3.5", "condition": "Math Correct", "messages": [{"role": "user", "content": "10.3 − 7988.8133 = "}] }' ``` </details> ### Conversation templates 💡 **Default Mode (GPT4 Correct)**: Best for coding, chat and general tasks ``` GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant: Hi<|end_of_turn|>GPT4 Correct User: How are you today?<|end_of_turn|>GPT4 Correct Assistant: ``` 🧮 **Mathematical Reasoning Mode**: Tailored for solving math problems ``` Math Correct User: 10.3 − 7988.8133=<|end_of_turn|>Math Correct Assistant: ``` ⚠️ **Notice:** Remember to set `<|end_of_turn|>` as end of generation token. The default (GPT4 Correct) template is also available as the integrated `tokenizer.chat_template`, which can be used instead of manually specifying the template: ```python messages = [ {"role": "user", "content": "Hello"}, {"role": "assistant", "content": "Hi"}, {"role": "user", "content": "How are you today?"} ] tokens = tokenizer.apply_chat_template(messages, add_generation_prompt=True) assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747, 15359, 32000, 420, 6316, 28781, 3198, 3123, 1247, 28747, 1602, 460, 368, 3154, 28804, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747] ``` <div align="center"> <h2> (Experimental) Evaluator / Feedback Capabilities </h2> </div> We've included evaluator capabilities in this release to advance open-source models as evaluators. You can use `Default Mode (GPT4 Correct)` with the following prompt (same as [Prometheus](https://huggingface.co/datasets/kaist-ai/Feedback-Collection)) to evaluate a response. ``` ###Task Description: An instruction (might include an Input inside it), a response to evaluate, a reference answer that gets a score of 5, and a score rubric representing a evaluation criteria are given. 1. Write a detailed feedback that assess the quality of the response strictly based on the given score rubric, not evaluating in general. 2. After writing a feedback, write a score that is an integer between 1 and 5. You should refer to the score rubric. 3. The output format should look as follows: "Feedback: (write a feedback for criteria) [RESULT] (an integer number between 1 and 5)" 4. Please do not generate any other opening, closing, and explanations. ###The instruction to evaluate: {orig_instruction} ###Response to evaluate: {orig_response} ###Reference Answer (Score 5): {orig_reference_answer} ###Score Rubrics: [{orig_criteria}] Score 1: {orig_score1_description} Score 2: {orig_score2_description} Score 3: {orig_score3_description} Score 4: {orig_score4_description} Score 5: {orig_score5_description} ###Feedback: ``` <div align="center"> <h2> Benchmarks </h2> </div> | Model | # Params | Average | MT-Bench | HumanEval | BBH MC | AGIEval | TruthfulQA | MMLU | GSM8K | BBH CoT | |--------------------|----------|----------|--------------|-----------------|----------|----------|---------------|--------------|--------------|-------------| | OpenChat-3.5-1210 | **7B** | **63.8** | 7.76 | **68.9** | **49.5** | **48.0** | **61.8** | 65.3 | **77.3** | 61.8 | | OpenChat-3.5 | **7B** | 61.6 | 7.81 | 55.5 | 47.6 | 47.4 | 59.1 | 64.3 | **77.3** | 63.5 | | ChatGPT (March)* | ? 
| 61.5 | **7.94** | 48.1 | 47.6 | 47.1 | 57.7 | **67.3** | 74.9 | **70.1** | | | | | | | | | | | | | | OpenHermes 2.5 | 7B | 59.3 | 7.54 | 48.2 | 49.4 | 46.5 | 57.5 | 63.8 | 73.5 | 59.9 | | OpenOrca Mistral | 7B | 52.7 | 6.86 | 38.4 | 49.4 | 42.9 | 45.9 | 59.3 | 59.1 | 58.1 | | Zephyr-β^ | 7B | 34.6 | 7.34 | 22.0 | 40.6 | 39.0 | 40.8 | 39.8 | 5.1 | 16.0 | | Mistral | 7B | - | 6.84 | 30.5 | 39.0 | 38.0 | - | 60.1 | 52.2 | - | <details> <summary>Evaluation Details(click to expand)</summary> *: ChatGPT (March) results are from [GPT-4 Technical Report](https://arxiv.org/abs/2303.08774), [Chain-of-Thought Hub](https://github.com/FranxYao/chain-of-thought-hub), and our evaluation. Please note that ChatGPT is not a fixed baseline and evolves rapidly over time. ^: Zephyr-β often fails to follow few-shot CoT instructions, likely because it was aligned with only chat data but not trained on few-shot data. **: Mistral and Open-source SOTA results are taken from reported results in instruction-tuned model papers and official repositories. All models are evaluated in chat mode (e.g. with the respective conversation template applied). All zero-shot benchmarks follow the same setting as in the AGIEval paper and Orca paper. CoT tasks use the same configuration as Chain-of-Thought Hub, HumanEval is evaluated with EvalPlus, and MT-bench is run using FastChat. To reproduce our results, follow the instructions in [our repository](https://github.com/imoneoi/openchat/#benchmarks). </details> <div> <h3>HumanEval+</h3> </div> | Model | Size | HumanEval+ pass@1 | |-----------------------------|----------|------------| | ChatGPT (December 12, 2023) | - | 64.6 | | WizardCoder-Python-34B-V1.0 | 34B | 64.6 | | **OpenChat 3.5 (Dec 10)** | **7B** | **63.4** | | OpenHermes 2.5 | 7B | 41.5 | <div> <h3>OpenChat-3.5-1210 vs. Grok</h3> </div> | | License | # Param | Average | MMLU | HumanEval | MATH | GSM8k | |-------------------|-------------|---------|----------|------|-----------|----------|----------| | OpenChat 3.5 1210 | Apache-2.0 | **7B** | **60.1** | 65.3 | **68.9** | **28.9** | **77.3** | | OpenChat 3.5 | Apache-2.0 | **7B** | 56.4 | 64.3 | 55.5 | 28.6 | **77.3** | | Grok-0 | Proprietary | 33B | 44.5 | 65.7 | 39.7 | 15.7 | 56.8 | | Grok-1 | Proprietary | ???B | 55.8 | 73 | 63.2 | 23.9 | 62.9 | *: Grok results are reported by [X.AI](https://x.ai/). <div align="center"> <h2> 中文评估结果 / Chinese Evaluations </h2> </div> ⚠️ Note that this model was not explicitly trained in Chinese (only < 0.1% of the data is in Chinese). 请注意本模型没有针对性训练中文(中文数据占比小于0.1%)。 <div> <h3>Multi-Level Multi-Discipline Chinese Evaluation Suite (CEVAL)</h3> <div> | Model | Avg | STEM | Social Science | Humanities | Others | |----------|-------|-------|----------------|------------|--------| | ChatGPT | 54.4 | 52.9 | 61.8 | 50.9 | 53.6 | | OpenChat | 47.29 | 45.22 | 52.49 | 48.52 | 45.08 | <div> <h3>Massive Multitask Language Understanding in Chinese (CMMLU, 5-shot)</h3> </div> | Models | STEM | Humanities | SocialSciences | Other | ChinaSpecific | Avg | |----------|-------|------------|----------------|-------|---------------|-------| | ChatGPT | 47.81 | 55.68 | 56.5 | 62.66 | 50.69 | 55.51 | | OpenChat | 38.7 | 45.99 | 48.32 | 50.23 | 43.27 | 45.85 | <div align="center"> <h2> Limitations </h2> </div> **Foundation Model Limitations** Despite its advanced capabilities, OpenChat is still bound by the limitations inherent in its foundation models. 
These limitations may impact the model's performance in areas such as: - Complex reasoning - Mathematical and arithmetic tasks - Programming and coding challenges **Hallucination of Non-existent Information** OpenChat may sometimes generate information that does not exist or is not accurate, also known as "hallucination". Users should be aware of this possibility and verify any critical information obtained from the model. **Safety** OpenChat may sometimes generate harmful content, hate speech, or biased responses, or it may answer unsafe questions. It's crucial to apply additional AI safety measures in use cases that require safe and moderated responses. <div align="center"> <h2> License </h2> </div> Our OpenChat 3.5 code and models are distributed under the Apache License 2.0. <div align="center"> <h2> Dataset Details </h2> </div> OpenChat 3.5 was trained with C-RLFT on a collection of publicly available high-quality instruction data, with a custom processing pipeline. We detail some notable subsets included here: - [OpenChat ShareGPT](https://huggingface.co/datasets/openchat/openchat_sharegpt4_dataset) - [Open-Orca with FLAN answers](https://huggingface.co/datasets/imone/OpenOrca_FLAN) - [Feedback-Collection](https://huggingface.co/datasets/kaist-ai/Feedback-Collection) - [Capybara](https://huggingface.co/datasets/LDJnr/Capybara) (de-contaminated against MT-bench) - [GOAT](https://huggingface.co/datasets/tiedong/goat) - [Glaive](https://huggingface.co/datasets/glaiveai/glaive-code-assistant) - [MetaMathQA](https://huggingface.co/datasets/meta-math/MetaMathQA) - [MathInstruct](https://huggingface.co/datasets/TIGER-Lab/MathInstruct) - [OpenAssistant](https://huggingface.co/datasets/OpenAssistant/oasst_top1_2023-08-25) <div align="center"> <h2> Citation </h2> </div> ``` @article{wang2023openchat, title={OpenChat: Advancing Open-source Language Models with Mixed-Quality Data}, author={Wang, Guan and Cheng, Sijie and Zhan, Xianyuan and Li, Xiangang and Song, Sen and Liu, Yang}, journal={arXiv preprint arXiv:2309.11235}, year={2023} } ``` <div align="center"> <h2> 💌 Contact </h2> </div> We look forward to hearing from you and collaborating on this exciting project! **Project Lead:** - Guan Wang [imonenext at gmail dot com] - [Alpay Ariyak](https://github.com/alpayariyak) [aariyak at wpi dot edu]
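Because the OpenChat API server described above follows the OpenAI ChatCompletion specification, it can also be queried from the official `openai` Python client. This is not shown in the card itself; the snippet below is a sketch under that assumption, complementing the curl examples.

```python
from openai import OpenAI

# Point the client at the locally running OpenChat server (default port 18888).
# The api_key is a placeholder; it only matters if the server was started with --api-keys.
client = OpenAI(base_url="http://localhost:18888/v1", api_key="sk-dummy")

response = client.chat.completions.create(
    model="openchat_3.5",
    messages=[{"role": "user", "content": "Summarize the OpenChat project in one sentence."}],
)
print(response.choices[0].message.content)
```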
TheBloke/Mistral-7B-OpenOrca-GPTQ
TheBloke
"2023-10-16T08:48:47Z"
42,917
100
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "en", "dataset:Open-Orca/OpenOrca", "arxiv:2306.02707", "arxiv:2301.13688", "base_model:Open-Orca/Mistral-7B-OpenOrca", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "4-bit", "gptq", "region:us" ]
text-generation
"2023-10-02T14:28:09Z"
--- base_model: Open-Orca/Mistral-7B-OpenOrca datasets: - Open-Orca/OpenOrca inference: false language: - en library_name: transformers license: apache-2.0 model_creator: OpenOrca model_name: Mistral 7B OpenOrca model_type: mistral pipeline_tag: text-generation prompt_template: '<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Mistral 7B OpenOrca - GPTQ - Model creator: [OpenOrca](https://huggingface.co/Open-Orca) - Original model: [Mistral 7B OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca) <!-- description start --> ## Description This repo contains GPTQ model files for [OpenOrca's Mistral 7B OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca). Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them. <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Mistral-7B-OpenOrca-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Mistral-7B-OpenOrca-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Mistral-7B-OpenOrca-GGUF) * [OpenOrca's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: ChatML ``` <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` <!-- prompt-template end --> <!-- README_GPTQ.md-provided-files start --> ## Provided files, and GPTQ parameters Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements. Each separate quant is in a different branch. See below for instructions on fetching from different branches. Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers. <details> <summary>Explanation of GPTQ parameters</summary> - Bits: The bit size of the quantised model. - GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value. - Act Order: True or False. 
Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now. - Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy. - GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s). - Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences. - ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit. </details> | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc | | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- | | [main](https://huggingface.co/TheBloke/Mistral-7B-OpenOrca-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 32768 | 4.16 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. | | [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Mistral-7B-OpenOrca-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 32768 | 4.57 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. | | [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Mistral-7B-OpenOrca-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 32768 | 7.52 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. | | [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Mistral-7B-OpenOrca-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 32768 | 7.68 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. | | [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/Mistral-7B-OpenOrca-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 32768 | 8.17 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. | | [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/Mistral-7B-OpenOrca-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 32768 | 4.30 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. 
| <!-- README_GPTQ.md-provided-files end --> <!-- README_GPTQ.md-download-from-branches start --> ## How to download, including from branches ### In text-generation-webui To download from the `main` branch, enter `TheBloke/Mistral-7B-OpenOrca-GPTQ` in the "Download model" box. To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/Mistral-7B-OpenOrca-GPTQ:gptq-4bit-32g-actorder_True` ### From the command line I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` To download the `main` branch to a folder called `Mistral-7B-OpenOrca-GPTQ`: ```shell mkdir Mistral-7B-OpenOrca-GPTQ huggingface-cli download TheBloke/Mistral-7B-OpenOrca-GPTQ --local-dir Mistral-7B-OpenOrca-GPTQ --local-dir-use-symlinks False ``` To download from a different branch, add the `--revision` parameter: ```shell mkdir Mistral-7B-OpenOrca-GPTQ huggingface-cli download TheBloke/Mistral-7B-OpenOrca-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir Mistral-7B-OpenOrca-GPTQ --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Huggingface cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a download model. The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`. For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell mkdir Mistral-7B-OpenOrca-GPTQ HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Mistral-7B-OpenOrca-GPTQ --local-dir Mistral-7B-OpenOrca-GPTQ --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> ### With `git` (**not** recommended) To clone a specific branch with `git`, use a command like this: ```shell git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/Mistral-7B-OpenOrca-GPTQ ``` Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.) <!-- README_GPTQ.md-download-from-branches end --> <!-- README_GPTQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui). 
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/Mistral-7B-OpenOrca-GPTQ`. - To download from a specific branch, enter for example `TheBloke/Mistral-7B-OpenOrca-GPTQ:gptq-4bit-32g-actorder_True` - see Provided Files above for the list of branches for each option. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `Mistral-7B-OpenOrca-GPTQ` 7. The model will automatically load, and is now ready for use! 8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. * Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`. 9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started! <!-- README_GPTQ.md-text-generation-webui end --> <!-- README_GPTQ.md-use-from-tgi start --> ## Serving this model from Text Generation Inference (TGI) It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0` Example Docker parameters: ```shell --model-id TheBloke/Mistral-7B-OpenOrca-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096 ``` Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later): ```shell pip3 install huggingface-hub ``` ```python from huggingface_hub import InferenceClient endpoint_url = "https://your-endpoint-url-here" prompt = "Tell me about AI" system_message = "You are a helpful assistant." prompt_template=f'''<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ''' client = InferenceClient(endpoint_url) response = client.text_generation(prompt_template, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1) print(f"Model output: {response}") ``` <!-- README_GPTQ.md-use-from-tgi end --> <!-- README_GPTQ.md-use-from-python start --> ## How to use this GPTQ model from Python code ### Install the necessary packages Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later. ```shell pip3 install transformers optimum pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7 ``` If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead: ```shell pip3 uninstall -y auto-gptq git clone https://github.com/PanQiWei/AutoGPTQ cd AutoGPTQ git checkout v0.4.2 pip3 install . 
``` ### You can then use the following code ```python from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline model_name_or_path = "TheBloke/Mistral-7B-OpenOrca-GPTQ" # To use a different branch, change revision # For example: revision="gptq-4bit-32g-actorder_True" model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto", trust_remote_code=False, revision="main") tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) prompt = "Tell me about AI" system_message = "You are a helpful assistant." prompt_template=f'''<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1 ) print(pipe(prompt_template)[0]['generated_text']) ``` <!-- README_GPTQ.md-use-from-python end --> <!-- README_GPTQ.md-compatibility start --> ## Compatibility The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI). [ExLlama](https://github.com/turboderp/exllama) is compatible with Llama and Mistral models in 4-bit. Please see the Provided Files table above for per-file compatibility. [Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models. <!-- README_GPTQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donators will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. 
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: OpenOrca's Mistral 7B OpenOrca <p><h1>🐋 TBD 🐋</h1></p> ![OpenOrca Logo](https://huggingface.co/datasets/Open-Orca/OpenOrca/resolve/main/OpenOrcaLogo.png "OpenOrca Logo") [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) # OpenOrca - Mistral - 7B - 8k We have used our own [OpenOrca dataset](https://huggingface.co/datasets/Open-Orca/OpenOrca) to fine-tune on top of [Mistral 7B](https://huggingface.co/mistralai/Mistral-7B-v0.1). This dataset is our attempt to reproduce the dataset generated for Microsoft Research's [Orca Paper](https://arxiv.org/abs/2306.02707). We use [OpenChat](https://huggingface.co/openchat) packing, trained with [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl). This release is trained on a curated filtered subset of most of our GPT-4 augmented data. It is the same subset of our data as was used in our [OpenOrcaxOpenChat-Preview2-13B model](https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B). HF Leaderboard evals place this model as #2 for all models smaller than 30B at release time, outperforming all but one 13B model. TBD Want to visualize our full (pre-filtering) dataset? Check out our [Nomic Atlas Map](https://atlas.nomic.ai/map/c1b88b47-2d9b-47e0-9002-b80766792582/2560fd25-52fe-42f1-a58f-ff5eccc890d2). 
[<img src="https://huggingface.co/Open-Orca/OpenOrca-Preview1-13B/resolve/main/OpenOrca%20Nomic%20Atlas.png" alt="Atlas Nomic Dataset Map" width="400" height="400" />](https://atlas.nomic.ai/map/c1b88b47-2d9b-47e0-9002-b80766792582/2560fd25-52fe-42f1-a58f-ff5eccc890d2) We are in-process with training more models, so keep a look out on our org for releases coming soon with exciting partners. We will also give sneak-peak announcements on our Discord, which you can find here: https://AlignmentLab.ai or on the OpenAccess AI Collective Discord for more information about Axolotl trainer here: https://discord.gg/5y8STgB3P3 # Prompt Template We used [OpenAI's Chat Markup Language (ChatML)](https://github.com/openai/openai-python/blob/main/chatml.md) format, with `<|im_start|>` and `<|im_end|>` tokens added to support this. ## Example Prompt Exchange TBD # Evaluation We have evaluated using the methodology and tools for the HuggingFace Leaderboard, and find that we have significantly improved upon the base model. TBD ## HuggingFaceH4 Open LLM Leaderboard Performance TBD ## GPT4ALL Leaderboard Performance TBD # Dataset We used a curated, filtered selection of most of the GPT-4 augmented data from our OpenOrca dataset, which aims to reproduce the Orca Research Paper dataset. # Training We trained with 8x A6000 GPUs for 62 hours, completing 4 epochs of full fine tuning on our dataset in one training run. Commodity cost was ~$400. # Citation ```bibtex @misc{mukherjee2023orca, title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4}, author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah}, year={2023}, eprint={2306.02707}, archivePrefix={arXiv}, primaryClass={cs.CL} } @misc{longpre2023flan, title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning}, author={Shayne Longpre and Le Hou and Tu Vu and Albert Webson and Hyung Won Chung and Yi Tay and Denny Zhou and Quoc V. Le and Barret Zoph and Jason Wei and Adam Roberts}, year={2023}, eprint={2301.13688}, archivePrefix={arXiv}, primaryClass={cs.AI} } ```
RichardErkhov/elinas_-_Llama-3-15B-Instruct-zeroed-ft-gguf
RichardErkhov
"2024-06-27T12:09:22Z"
42,913
0
null
[ "gguf", "region:us" ]
null
"2024-06-27T08:36:45Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Llama-3-15B-Instruct-zeroed-ft - GGUF - Model creator: https://huggingface.co/elinas/ - Original model: https://huggingface.co/elinas/Llama-3-15B-Instruct-zeroed-ft/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Llama-3-15B-Instruct-zeroed-ft.Q2_K.gguf](https://huggingface.co/RichardErkhov/elinas_-_Llama-3-15B-Instruct-zeroed-ft-gguf/blob/main/Llama-3-15B-Instruct-zeroed-ft.Q2_K.gguf) | Q2_K | 5.35GB | | [Llama-3-15B-Instruct-zeroed-ft.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/elinas_-_Llama-3-15B-Instruct-zeroed-ft-gguf/blob/main/Llama-3-15B-Instruct-zeroed-ft.IQ3_XS.gguf) | IQ3_XS | 5.94GB | | [Llama-3-15B-Instruct-zeroed-ft.IQ3_S.gguf](https://huggingface.co/RichardErkhov/elinas_-_Llama-3-15B-Instruct-zeroed-ft-gguf/blob/main/Llama-3-15B-Instruct-zeroed-ft.IQ3_S.gguf) | IQ3_S | 6.24GB | | [Llama-3-15B-Instruct-zeroed-ft.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/elinas_-_Llama-3-15B-Instruct-zeroed-ft-gguf/blob/main/Llama-3-15B-Instruct-zeroed-ft.Q3_K_S.gguf) | Q3_K_S | 6.21GB | | [Llama-3-15B-Instruct-zeroed-ft.IQ3_M.gguf](https://huggingface.co/RichardErkhov/elinas_-_Llama-3-15B-Instruct-zeroed-ft-gguf/blob/main/Llama-3-15B-Instruct-zeroed-ft.IQ3_M.gguf) | IQ3_M | 6.43GB | | [Llama-3-15B-Instruct-zeroed-ft.Q3_K.gguf](https://huggingface.co/RichardErkhov/elinas_-_Llama-3-15B-Instruct-zeroed-ft-gguf/blob/main/Llama-3-15B-Instruct-zeroed-ft.Q3_K.gguf) | Q3_K | 6.87GB | | [Llama-3-15B-Instruct-zeroed-ft.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/elinas_-_Llama-3-15B-Instruct-zeroed-ft-gguf/blob/main/Llama-3-15B-Instruct-zeroed-ft.Q3_K_M.gguf) | Q3_K_M | 6.87GB | | [Llama-3-15B-Instruct-zeroed-ft.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/elinas_-_Llama-3-15B-Instruct-zeroed-ft-gguf/blob/main/Llama-3-15B-Instruct-zeroed-ft.Q3_K_L.gguf) | Q3_K_L | 7.43GB | | [Llama-3-15B-Instruct-zeroed-ft.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/elinas_-_Llama-3-15B-Instruct-zeroed-ft-gguf/blob/main/Llama-3-15B-Instruct-zeroed-ft.IQ4_XS.gguf) | IQ4_XS | 7.14GB | | [Llama-3-15B-Instruct-zeroed-ft.Q4_0.gguf](https://huggingface.co/RichardErkhov/elinas_-_Llama-3-15B-Instruct-zeroed-ft-gguf/blob/main/Llama-3-15B-Instruct-zeroed-ft.Q4_0.gguf) | Q4_0 | 8.0GB | | [Llama-3-15B-Instruct-zeroed-ft.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/elinas_-_Llama-3-15B-Instruct-zeroed-ft-gguf/blob/main/Llama-3-15B-Instruct-zeroed-ft.IQ4_NL.gguf) | IQ4_NL | 8.08GB | | [Llama-3-15B-Instruct-zeroed-ft.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/elinas_-_Llama-3-15B-Instruct-zeroed-ft-gguf/blob/main/Llama-3-15B-Instruct-zeroed-ft.Q4_K_S.gguf) | Q4_K_S | 8.05GB | | [Llama-3-15B-Instruct-zeroed-ft.Q4_K.gguf](https://huggingface.co/RichardErkhov/elinas_-_Llama-3-15B-Instruct-zeroed-ft-gguf/blob/main/Llama-3-15B-Instruct-zeroed-ft.Q4_K.gguf) | Q4_K | 8.48GB | | [Llama-3-15B-Instruct-zeroed-ft.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/elinas_-_Llama-3-15B-Instruct-zeroed-ft-gguf/blob/main/Llama-3-15B-Instruct-zeroed-ft.Q4_K_M.gguf) | Q4_K_M | 8.48GB | | [Llama-3-15B-Instruct-zeroed-ft.Q4_1.gguf](https://huggingface.co/RichardErkhov/elinas_-_Llama-3-15B-Instruct-zeroed-ft-gguf/blob/main/Llama-3-15B-Instruct-zeroed-ft.Q4_1.gguf) | Q4_1 | 8.84GB | | 
[Llama-3-15B-Instruct-zeroed-ft.Q5_0.gguf](https://huggingface.co/RichardErkhov/elinas_-_Llama-3-15B-Instruct-zeroed-ft-gguf/blob/main/Llama-3-15B-Instruct-zeroed-ft.Q5_0.gguf) | Q5_0 | 9.68GB | | [Llama-3-15B-Instruct-zeroed-ft.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/elinas_-_Llama-3-15B-Instruct-zeroed-ft-gguf/blob/main/Llama-3-15B-Instruct-zeroed-ft.Q5_K_S.gguf) | Q5_K_S | 9.68GB | | [Llama-3-15B-Instruct-zeroed-ft.Q5_K.gguf](https://huggingface.co/RichardErkhov/elinas_-_Llama-3-15B-Instruct-zeroed-ft-gguf/blob/main/Llama-3-15B-Instruct-zeroed-ft.Q5_K.gguf) | Q5_K | 9.93GB | | [Llama-3-15B-Instruct-zeroed-ft.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/elinas_-_Llama-3-15B-Instruct-zeroed-ft-gguf/blob/main/Llama-3-15B-Instruct-zeroed-ft.Q5_K_M.gguf) | Q5_K_M | 9.93GB | | [Llama-3-15B-Instruct-zeroed-ft.Q5_1.gguf](https://huggingface.co/RichardErkhov/elinas_-_Llama-3-15B-Instruct-zeroed-ft-gguf/blob/main/Llama-3-15B-Instruct-zeroed-ft.Q5_1.gguf) | Q5_1 | 10.53GB | | [Llama-3-15B-Instruct-zeroed-ft.Q6_K.gguf](https://huggingface.co/RichardErkhov/elinas_-_Llama-3-15B-Instruct-zeroed-ft-gguf/blob/main/Llama-3-15B-Instruct-zeroed-ft.Q6_K.gguf) | Q6_K | 11.48GB | | [Llama-3-15B-Instruct-zeroed-ft.Q8_0.gguf](https://huggingface.co/RichardErkhov/elinas_-_Llama-3-15B-Instruct-zeroed-ft-gguf/blob/main/Llama-3-15B-Instruct-zeroed-ft.Q8_0.gguf) | Q8_0 | 14.86GB | Original model description: --- base_model: - elinas/Llama-3-15B-Instruct-zeroed library_name: transformers tags: - mergekit - merge datasets: - Chat-Error/Pure-dove-sharegpt license: llama3 --- # Llama-3-15B-Instruct-zeroed-ft This is a QLoRA **finetune** of a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). The model is based on a "zeroed" passthrough merge of [Llama-3-15B-Instruct-zeroed](https://huggingface.co/elinas/Llama-3-15B-Instruct-zeroed) This was primarily an experiment to see how a passthrough merge will respond to further finetuning, though this was done on a small dataset. The model was finetuned on **8192 context length** and is likely reliable using RoPE up to 32k. Further finetuning this model or finetuning the [base model](https://huggingface.co/elinas/Llama-3-15B-Instruct-zeroed) on more samples is encouraged. ## Datasets * [Chat-Error/Pure-dove-sharegpt](https://huggingface.co/datasets/Chat-Error/Pure-dove-sharegpt) A small, high quality, dataset was used as a PoC / validation on stabilizing the model after finetuning. ## Finetuning details This is a QLoRA model and the following modules were targeted. ```yaml lora_target_modules: - down_proj - o_proj ``` The model is coherent even with training the "zeroed" layers and can write well. In the next experiment, all layers will be finetuned as this was the recommendation from [Charles Goddard](https://huggingface.co/chargoddard) - thank you for sharing the method of merging as well as Toasty Pigeon for bringing it to my attention! ```yaml The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - distributed_type: multi-GPU - num_devices: 3 - total_train_batch_size: 6 - total_eval_batch_size: 6 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 25 - num_epochs: 1 ``` Optimizer `paged_adamw_8bit` and Deepspeed ZeRO 3 was used at a LR of `1e-5` using the cosine scheduler for 1 epoch on 3x3090s taking 2h 30m total. 
Sample packing and padding was disabled to reduce VRAM consumption significantly at the cost of speed. W&B Run Summary ``` wandb: Run summary: wandb: eval/loss 0.94497 wandb: eval/runtime 276.2864 wandb: eval/samples_per_second 1.397 wandb: eval/steps_per_second 0.235 wandb: total_flos 12246605365248.0 wandb: train/epoch 1.0 wandb: train/global_step 579 wandb: train/grad_norm 0.80411 wandb: train/learning_rate 0.0 wandb: train/loss 1.085 wandb: train_loss 0.8834 wandb: train_runtime 9893.1688 wandb: train_samples_per_second 0.351 wandb: train_steps_per_second 0.059 ``` ### Framework versions - PEFT 0.10.0 - Transformers 4.40.0.dev0 - Pytorch 2.3.0+cu121 - Datasets 2.15.0 - Tokenizers 0.15.0 ## Model Evaluation TBD If you have any questions or comments on the model, feel free to open a discussion in the community tab. [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
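A quick way to try one of the GGUF files listed in the table above is `llama-cpp-python`. The sketch below is illustrative only: the chosen quant file, context length, GPU offload setting, and prompt are assumptions, not recommendations from the quantiser or the original model author.

```python
# Minimal llama-cpp-python sketch for a GGUF quant from the table above.
# Assumes the Q4_K_M file has already been downloaded locally; adjust the path,
# context length, and n_gpu_layers for your hardware (all values are illustrative).
from llama_cpp import Llama

llm = Llama(
    model_path="Llama-3-15B-Instruct-zeroed-ft.Q4_K_M.gguf",
    n_ctx=8192,       # the original card states the model was finetuned at 8192 context
    n_gpu_layers=-1,  # offload all layers to GPU if VRAM allows; set 0 for CPU-only
)

output = llm.create_chat_completion(
    messages=[{"role": "user", "content": "In one paragraph, what is a passthrough merge?"}],
    max_tokens=256,
    temperature=0.7,
)
print(output["choices"][0]["message"]["content"])
```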
fluently/Fluently-XL-Final
fluently
"2024-06-06T16:03:36Z"
42,891
68
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "sdxl", "fluetnly-xl", "fluently", "trained", "text-to-image", "dataset:ehristoforu/midjourney-images", "dataset:ehristoforu/dalle-3-images", "dataset:ehristoforu/fav_images", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
"2024-06-06T10:59:35Z"
--- license: other license_name: fluently-license license_link: https://huggingface.co/spaces/fluently/License extra_gated_prompt: >- By clicking "Agree", you agree to the [License Agreement](https://huggingface.co/spaces/fluently/License/blob/main/LICENSE.md) extra_gated_fields: Name: text Email: text Country: country Who you are?: type: select options: - 'Researcher' - 'Student' - 'Teacher' - 'Model creator' - 'Non-profit company' - 'Commercial company' datasets: - ehristoforu/midjourney-images - ehristoforu/dalle-3-images - ehristoforu/fav_images library_name: diffusers pipeline_tag: text-to-image base_model: stabilityai/stable-diffusion-xl-base-1.0 tags: - safetensors - stable-diffusion - sdxl - fluetnly-xl - fluently - trained inference: parameters: num_inference_steps: 25 guidance_scale: 5 negative_prompt: "(deformed, distorted, disfigured:1.3), poorly drawn, bad anatomy, wrong anatomy, extra limb, missing limb, floating limbs, (mutated hands and fingers:1.4), disconnected limbs, mutation, mutated, ugly, disgusting, blurry, amputation" --- # **Fluently XL** FINAL - the best XL-model ![preview](images/preview.png) *This is the **final release**. Improved overall aesthetics, improved lighting and more.* Introducing Fluently XL, you are probably ready to argue with the name of the model: “The best XL-model”, but now I will prove to you why it is true. ## About this model The model was obtained through training on *expensive graphics accelerators*, a lot of work was done, now we will show why this XL model is better than others. ### Features - Correct anatomy - Art and realism in one - Controling contrast - Great nature - Great faces without AfterDetailer ### More info Our model is better than others because we do not mix but **train**, but at first it may seem that the model is not very good, but if you are a real professional you will like it. ## Using Optimal parameters in Automatic1111/ComfyUI: - Sampling steps: 20-35 - Sampler method: Euler a/Euler - CFG Scale: 4-6.5 ## End Let's remove models that copy each other from the top and put one that is actually developing, thank you)
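For diffusers users, a minimal text-to-image sketch is shown below. It uses the inference settings from the card metadata above (25 steps, guidance scale 5, and the listed negative prompt); the positive prompt is placeholder text, and you may need to accept the repository's license gate and authenticate before the weights will download.

```python
# Minimal diffusers sketch for Fluently-XL Final (SDXL-based).
# Settings follow the card metadata; the prompt itself is an illustrative placeholder.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "fluently/Fluently-XL-Final",
    torch_dtype=torch.float16,
).to("cuda")

negative_prompt = (
    "(deformed, distorted, disfigured:1.3), poorly drawn, bad anatomy, wrong anatomy, "
    "extra limb, missing limb, floating limbs, (mutated hands and fingers:1.4), "
    "disconnected limbs, mutation, mutated, ugly, disgusting, blurry, amputation"
)

image = pipe(
    prompt="a cozy wooden cabin in a snowy forest at dusk, warm window light, photo",
    negative_prompt=negative_prompt,
    num_inference_steps=25,
    guidance_scale=5,
).images[0]
image.save("fluently_xl_result.png")
```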
google/byt5-base
google
"2023-01-24T16:36:53Z"
42,881
19
transformers
[ "transformers", "pytorch", "tf", "jax", "t5", "text2text-generation", "multilingual", "af", "am", "ar", "az", "be", "bg", "bn", "ca", "ceb", "co", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fil", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "haw", "hi", "hmn", "ht", "hu", "hy", "ig", "is", "it", "iw", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lb", "lo", "lt", "lv", "mg", "mi", "mk", "ml", "mn", "mr", "ms", "mt", "my", "ne", "nl", "no", "ny", "pa", "pl", "ps", "pt", "ro", "ru", "sd", "si", "sk", "sl", "sm", "sn", "so", "sq", "sr", "st", "su", "sv", "sw", "ta", "te", "tg", "th", "tr", "uk", "und", "ur", "uz", "vi", "xh", "yi", "yo", "zh", "zu", "dataset:mc4", "arxiv:1907.06292", "arxiv:2105.13626", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text2text-generation
"2022-03-02T23:29:05Z"
--- language: - multilingual - af - am - ar - az - be - bg - bn - ca - ceb - co - cs - cy - da - de - el - en - eo - es - et - eu - fa - fi - fil - fr - fy - ga - gd - gl - gu - ha - haw - hi - hmn - ht - hu - hy - ig - is - it - iw - ja - jv - ka - kk - km - kn - ko - ku - ky - la - lb - lo - lt - lv - mg - mi - mk - ml - mn - mr - ms - mt - my - ne - nl - no - ny - pa - pl - ps - pt - ro - ru - sd - si - sk - sl - sm - sn - so - sq - sr - st - su - sv - sw - ta - te - tg - th - tr - uk - und - ur - uz - vi - xh - yi - yo - zh - zu datasets: - mc4 license: apache-2.0 --- # ByT5 - Base ByT5 is a tokenizer-free version of [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) and generally follows the architecture of [MT5](https://huggingface.co/google/mt5-base). ByT5 was only pre-trained on [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) excluding any supervised training with an average span-mask of 20 UTF-8 characters. Therefore, this model has to be fine-tuned before it is useable on a downstream task. ByT5 works especially well on noisy text data,*e.g.*, `google/byt5-base` significantly outperforms [mt5-base](https://huggingface.co/google/mt5-base) on [TweetQA](https://arxiv.org/abs/1907.06292). Paper: [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) Authors: *Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel* ## Example Inference ByT5 works on raw UTF-8 bytes and can be used without a tokenizer: ```python from transformers import T5ForConditionalGeneration import torch model = T5ForConditionalGeneration.from_pretrained('google/byt5-base') input_ids = torch.tensor([list("Life is like a box of chocolates.".encode("utf-8"))]) + 3 # add 3 for special tokens labels = torch.tensor([list("La vie est comme une boîte de chocolat.".encode("utf-8"))]) + 3 # add 3 for special tokens loss = model(input_ids, labels=labels).loss # forward pass ``` For batched inference & training it is however recommended using a tokenizer class for padding: ```python from transformers import T5ForConditionalGeneration, AutoTokenizer model = T5ForConditionalGeneration.from_pretrained('google/byt5-base') tokenizer = AutoTokenizer.from_pretrained('google/byt5-base') model_inputs = tokenizer(["Life is like a box of chocolates.", "Today is Monday."], padding="longest", return_tensors="pt") labels = tokenizer(["La vie est comme une boîte de chocolat.", "Aujourd'hui c'est lundi."], padding="longest", return_tensors="pt").input_ids loss = model(**model_inputs, labels=labels).loss # forward pass ``` ## Abstract Most widely-used pre-trained language models operate on sequences of tokens corresponding to word or subword units. Encoding text as a sequence of tokens requires a tokenizer, which is typically created as an independent artifact from the model. Token-free models that instead operate directly on raw text (bytes or characters) have many benefits: they can process text in any language out of the box, they are more robust to noise, and they minimize technical debt by removing complex and error-prone text preprocessing pipelines. Since byte or character sequences are longer than token sequences, past work on token-free models has often introduced new model architectures designed to amortize the cost of operating directly on raw text. 
In this paper, we show that a standard Transformer architecture can be used with minimal modifications to process byte sequences. We carefully characterize the trade-offs in terms of parameter count, training FLOPs, and inference speed, and show that byte-level models are competitive with their token-level counterparts. We also demonstrate that byte-level models are significantly more robust to noise and perform better on tasks that are sensitive to spelling and pronunciation. As part of our contribution, we release a new set of pre-trained byte-level Transformer models based on the T5 architecture, as well as all code and data used in our experiments. ![model image](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/ByT5.png)
Helsinki-NLP/opus-mt-da-en
Helsinki-NLP
"2023-08-16T11:27:21Z"
42,853
4
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "da", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
"2022-03-02T23:29:04Z"
---
tags:
- translation
license: apache-2.0
---

### opus-mt-da-en

* source languages: da
* target languages: en
* OPUS readme: [da-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/da-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/da-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/da-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/da-en/opus-2019-12-18.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.da.en | 63.6 | 0.769 |
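The card above gives benchmark scores but no usage snippet; a minimal translation sketch with the transformers pipeline is shown below (the Danish input sentence is just an illustrative example, not taken from the card):

```python
# Minimal sketch: Danish -> English translation with the transformers pipeline.
# The example sentence is illustrative ("I live in Copenhagen and work with machine learning.").
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-da-en")
result = translator("Jeg bor i København og arbejder med maskinlæring.")
print(result[0]["translation_text"])
```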
huawei-noah/TinyBERT_General_4L_312D
huawei-noah
"2021-05-19T20:03:32Z"
42,853
26
transformers
[ "transformers", "pytorch", "jax", "bert", "arxiv:1909.10351", "endpoints_compatible", "region:us" ]
null
"2022-03-02T23:29:05Z"
TinyBERT: Distilling BERT for Natural Language Understanding
========

TinyBERT is 7.5x smaller and 9.4x faster on inference than BERT-base and achieves competitive performance on natural language understanding tasks. It performs a novel Transformer distillation at both the pre-training and task-specific learning stages. In general distillation, we use the original BERT-base without fine-tuning as the teacher and a large-scale text corpus as the learning data. By performing the Transformer distillation on text from the general domain, we obtain a general TinyBERT that provides a good initialization for the task-specific distillation. Here we provide the general TinyBERT for your tasks at hand.

For more details about the techniques of TinyBERT, refer to our paper:

[TinyBERT: Distilling BERT for Natural Language Understanding](https://arxiv.org/abs/1909.10351)

Citation
========

If you find TinyBERT useful in your research, please cite the following paper:

```
@article{jiao2019tinybert,
  title={Tinybert: Distilling bert for natural language understanding},
  author={Jiao, Xiaoqi and Yin, Yichun and Shang, Lifeng and Jiang, Xin and Chen, Xiao and Li, Linlin and Wang, Fang and Liu, Qun},
  journal={arXiv preprint arXiv:1909.10351},
  year={2019}
}
```
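The card describes the distillation procedure but does not show how to load the released checkpoint. Below is a minimal feature-extraction sketch with transformers; it assumes the repository's tokenizer files load via AutoTokenizer, and the input sentence is illustrative. Keep in mind that this general model is intended as an initialization for task-specific distillation or fine-tuning rather than for direct use.

```python
# Minimal sketch: run the general TinyBERT (4 layers, hidden size 312) as an encoder.
# The model is meant to be further distilled/fine-tuned on a downstream task;
# this only demonstrates loading it and extracting hidden states.
import torch
from transformers import AutoModel, AutoTokenizer

name = "huawei-noah/TinyBERT_General_4L_312D"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

inputs = tokenizer("TinyBERT is a compact distilled version of BERT.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

print(outputs.last_hidden_state.shape)  # (batch, sequence_length, 312)
```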
stablediffusionapi/pyros-nsfw-sdxl
stablediffusionapi
"2023-12-19T00:33:49Z"
42,848
10
diffusers
[ "diffusers", "stablediffusionapi.com", "stable-diffusion-api", "text-to-image", "ultra-realistic", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
"2023-12-19T00:28:26Z"
Invalid username or password.
failspy/Meta-Llama-3-70B-Instruct-abliterated-v3.5-GGUF
failspy
"2024-05-30T13:38:37Z"
42,792
16
transformers
[ "transformers", "gguf", "license:llama3", "endpoints_compatible", "region:us" ]
null
"2024-05-28T18:26:02Z"
--- library_name: transformers license: llama3 --- # Llama-3-70B-Instruct-abliterated-v3.5 Model Card [My original Jupyter "cookbook" to replicate the methodology can be found here](https://huggingface.co/failspy/llama-3-70B-Instruct-abliterated/blob/main/ortho_cookbook.ipynb) [My personal library o' code used](https://github.com/FailSpy/abliterator) (WIP, looking to improve and generalize) This is [meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) with orthogonalized bfloat16 safetensor weights, generated with a refined methodology based on that which was described in the preview paper/blog post: '[Refusal in LLMs is mediated by a single direction](https://www.alignmentforum.org/posts/jGuXSZgv6qfdhMCuJ/refusal-in-llms-is-mediated-by-a-single-direction)' which I encourage you to read to understand more. ## V3.5? Second try. I felt that the V3 methodology of 70B wasn't well applied, and u/Nexesenex on reddit kinda confirmed my suspicions. So go blame them. :P This one has only a single layer modified(!) and that seems to have greatly reduced moralizing disclaimers. I hope you'll find this model better than 70B-V3! As well, this also fixes the tokenizer. ## Hang on, "abliteration"? Orthogonalization? Ablation? What is this? TL;DR: This model has had certain weights manipulated to "inhibit" the model's ability to express refusal. It is not in anyway _guaranteed_ that it won't refuse you, understand your request, it may still lecture you about ethics/safety, etc. It is tuned in all other respects the same as the original 70B instruct model was, just with the strongest refusal directions orthogonalized out. **TL;TL;DR;DR: It's uncensored in the purest form I can manage -- no new or changed behaviour in any other respect from the original model.** As far as "abliteration": it's just a fun play-on-words using the original "ablation" term used in the original paper to refer to removing features, which I made up particularly to differentiate the model from "uncensored" fine-tunes. Ablate + obliterated = Abliterated Anyways, orthogonalization/ablation are both aspects to refer to the same thing here, the technique in which the refusal feature was "ablated" from the model was via orthogonalization. ## A little more on the methodology, and why this is interesting To me, ablation (or applying the methodology for the inverse, "augmentation") seems to be good for inducing/removing very specific features that you'd have to spend way too many tokens on encouraging or discouraging in your system prompt. Instead, you just apply your system prompt in the ablation script against a blank system prompt on the same dataset and orthogonalize for the desired behaviour in the final model weights. > Why this over fine-tuning? Ablation is much more surgical in nature whilst also being effectively executed with a _lot_ less data than fine-tuning, which I think is its main advantage. As well, and its most valuable aspect is it keeps as much of the original model's knowledge and training intact, whilst removing its tendency to behave in one very specific undesireable manner. (In this case, refusing user requests.) Fine tuning is still exceptionally useful and the go-to for broad behaviour changes; however, you may be able to get close to your desired behaviour with very few samples using the ablation/augmentation techniques. It may also be a useful step to add to your model refinement: orthogonalize -> fine-tune or vice-versa. 
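To make the orthogonalization step described above concrete, here is a rough, self-contained sketch of projecting a single "refusal direction" out of a weight matrix. This illustrates the general idea only; it is not the author's abliterator code, and the tensors are random stand-ins rather than real Llama-3 weights or a measured refusal direction.

```python
# Illustrative sketch of directional ablation ("abliteration"): remove the rank-1
# component of a weight matrix whose outputs lie along a chosen direction.
# All tensors here are random stand-ins, using the y = W @ x convention.
import torch

hidden_size = 4096
W = torch.randn(hidden_size, hidden_size)      # stand-in for e.g. an output projection

# In practice the refusal direction is estimated from activation differences between
# harmful and harmless prompts; here it is just a random unit vector.
refusal_dir = torch.randn(hidden_size)
refusal_dir = refusal_dir / refusal_dir.norm()

# Orthogonalize: W_ablated = (I - r r^T) W, so this layer can no longer write
# anything along the refusal direction.
W_ablated = W - torch.outer(refusal_dir, refusal_dir @ W)

print((refusal_dir @ W_ablated).abs().max())   # ~0 up to floating-point error
```

In the full method this projection is applied to selected weights that write into the residual stream, with the direction measured from contrastive prompts; per the card, only a single layer was modified for this v3.5 release, which is why it stresses that no other behaviour was changed.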
I haven't really gotten around to exploring this model stacked with fine-tuning, I encourage others to give it a shot if they've got the capacity. > Okay, fine, but why V3? There's no V2 70B? Well, I released a V2 a while back for 8B under Cognitive Computations. It ended up being not worth it to try V2 with 70B, I wanted to refine the model before wasting compute cycles on what might not even be a better model. I am however quite pleased about this latest methodology, it seems to have induced fewer hallucinations. So to show that it's a new fancy methodology from even that of the 8B V2, I decided to do a Microsoft and double up on my version jump because it's *such* an advancement (or so the excuse went, when in actuality it was because too many legacy but actively used Microsoft libraries checked for 'Windows 9' in the OS name to detect Windows 95/98 as one.) ## Quirkiness awareness notice This model may come with interesting quirks, with the methodology being so new. I encourage you to play with the model, and post any quirks you notice in the community tab, as that'll help us further understand what this orthogonalization has in the way of side effects. If you manage to develop further improvements, please share! This is really the most basic way to use ablation, but there are other possibilities that I believe are as-yet unexplored. Additionally, feel free to reach out in any way about this. I'm on the Cognitive Computations Discord, I'm watching the Community tab, reach out! I'd love to see this methodology used in other ways, and so would gladly support whoever whenever I can.
dreamlike-art/dreamlike-diffusion-1.0
dreamlike-art
"2023-01-27T14:44:44Z"
42,782
1,016
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "art", "artistic", "en", "license:other", "autotrain_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2022-12-11T04:16:04Z"
--- language: - en license: other tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - art - artistic - diffusers inference: false --- # Dreamlike Diffusion 1.0 is SD 1.5 fine tuned on high quality art, made by [dreamlike.art](https://dreamlike.art/). # If you want to use dreamlike models on your website/app/etc., check the license at the bottom first! Use the same prompts as you would for SD 1.5. Add **dreamlikeart** if the artstyle is too weak. Non-square aspect ratios work better for some prompts. If you want a portrait photo, try using a 2:3 or a 9:16 aspect ratio. If you want a landscape photo, try using a 3:2 or a 16:9 aspect ratio. Use slightly higher resolution for better results: 640x640px, 512x768px, 768x512px, etc. # We've just released Dreamlike Photoreal 2.0, check it out! [https://huggingface.co/dreamlike-art/dreamlike-photoreal-2.0](https://huggingface.co/dreamlike-art/dreamlike-photoreal-2.0) <img src="https://huggingface.co/dreamlike-art/dreamlike-photoreal-2.0/resolve/main/preview1.jpg" style="max-width: 400px;" width="100%"/> ### Examples <img src="https://huggingface.co/dreamlike-art/dreamlike-diffusion-1.0/resolve/main/preview.jpg" style="max-width: 800px;" width="100%"/> <img src="https://huggingface.co/dreamlike-art/dreamlike-diffusion-1.0/resolve/main/1.jpg" style="max-width: 800px;" width="100%"/> <img src="https://huggingface.co/dreamlike-art/dreamlike-diffusion-1.0/resolve/main/2.jpg" style="max-width: 800px;" width="100%"/> ### dreamlike.art You can use this model for free on [dreamlike.art](https://dreamlike.art/)! <img src="https://huggingface.co/dreamlike-art/dreamlike-photoreal-1.0/resolve/main/dreamlike.jpg" style="max-width: 1000px;" width="100%"/> ### Gradio We support a [Gradio](https://github.com/gradio-app/gradio) Web UI to run dreamlike-diffusion-1.0: [![Open In Spaces](https://camo.githubusercontent.com/00380c35e60d6b04be65d3d94a58332be5cc93779f630bcdfc18ab9a3a7d3388/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f25463025394625413425393725323048756767696e67253230466163652d5370616365732d626c7565)](https://huggingface.co/spaces/akhaliq/dreamlike-diffusion-1.0) ### CompVis [Download dreamlike-diffusion-1.0.ckpt (2.13GB)](https://huggingface.co/dreamlike-art/dreamlike-diffusion-1.0/resolve/main/dreamlike-diffusion-1.0.ckpt) ### 🧨 Diffusers This model can be used just like any other Stable Diffusion model. For more information, please have a look at the [Stable Diffusion Pipeline](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion). ```python from diffusers import StableDiffusionPipeline import torch model_id = "dreamlike-art/dreamlike-diffusion-1.0" pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16) pipe = pipe.to("cuda") prompt = "dreamlikeart, a grungy woman with rainbow hair, travelling between dimensions, dynamic pose, happy, soft eyes and narrow chin, extreme bokeh, dainty figure, long hair straight down, torn kawaii shirt and baggy jeans, In style of by Jordan Grimmer and greg rutkowski, crisp lines and color, complex background, particles, lines, wind, concept art, sharp focus, vivid colors" image = pipe(prompt).images[0] image.save("./result.jpg") ``` # License This model is licesed under a **modified** CreativeML OpenRAIL-M license. - **You can't host or use the model or its derivatives on websites/apps/etc., from which you earn, will earn, or plan to earn revenue or donations. 
If you want to, please email us at contact@dreamlike.art** - **You are free to host the model card and files (Without any actual inference or finetuning) on both commercial and non-commercial websites/apps/etc. Please state the full model name (Dreamlike Diffusion 1.0) and include a link to the model card (https://huggingface.co/dreamlike-art/dreamlike-diffusion-1.0)** - **You are free to host the model or its derivatives on completely non-commercial websites/apps/etc (Meaning you are not getting ANY revenue or donations). Please state the full model name (Dreamlike Diffusion 1.0) and include a link to the model card (https://huggingface.co/dreamlike-art/dreamlike-diffusion-1.0)** - **You are free to use the outputs of the model or the outputs of the model's derivatives for commercial purposes in teams of 10 or less** - You can't use the model to deliberately produce nor share illegal or harmful outputs or content - The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license - You may re-distribute the weights. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the **modified** CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) Please read the full license here: https://huggingface.co/dreamlike-art/dreamlike-diffusion-1.0/blob/main/LICENSE.md
openai-community/openai-gpt
openai-community
"2024-02-19T12:39:20Z"
42,706
230
transformers
[ "transformers", "pytorch", "tf", "rust", "safetensors", "openai-gpt", "text-generation", "en", "arxiv:1705.11168", "arxiv:1803.02324", "arxiv:1910.09700", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2022-03-02T23:29:04Z"
--- language: en license: mit --- # OpenAI GPT 1 ## Table of Contents - [Model Details](#model-details) - [How To Get Started With the Model](#how-to-get-started-with-the-model) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-biases) - [Training](#training) - [Evaluation](#evaluation) - [Environmental Impact](#environmental-impact) - [Technical Specifications](#technical-specifications) - [Citation Information](#citation-information) - [Model Card Authors](#model-card-authors) ## Model Details **Model Description:** `openai-gpt` (a.k.a. "GPT-1") is the first transformer-based language model created and released by OpenAI. The model is a causal (unidirectional) transformer pre-trained using language modeling on a large corpus with long range dependencies. - **Developed by:** Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever. See [associated research paper](https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf) and [GitHub repo](https://github.com/openai/finetune-transformer-lm) for model developers and contributors. - **Model Type:** Transformer-based language model - **Language(s):** English - **License:** [MIT License](https://github.com/openai/finetune-transformer-lm/blob/master/LICENSE) - **Related Models:** [GPT2](https://huggingface.co/gpt2), [GPT2-Medium](https://huggingface.co/gpt2-medium), [GPT2-Large](https://huggingface.co/gpt2-large) and [GPT2-XL](https://huggingface.co/gpt2-xl) - **Resources for more information:** - [Research Paper](https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf) - [OpenAI Blog Post](https://openai.com/blog/language-unsupervised/) - [GitHub Repo](https://github.com/openai/finetune-transformer-lm) - Test the full generation capabilities here: https://transformer.huggingface.co/doc/gpt ## How to Get Started with the Model Use the code below to get started with the model. You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility: ```python >>> from transformers import pipeline, set_seed >>> generator = pipeline('text-generation', model='openai-gpt') >>> set_seed(42) >>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5) [{'generated_text': "Hello, I'm a language model,'he said, when i was finished.'ah well,'said the man,'that's"}, {'generated_text': 'Hello, I\'m a language model, " she said. \n she reached the bottom of the shaft and leaned a little further out. it was'}, {'generated_text': 'Hello, I\'m a language model, " she laughed. " we call that a\'white girl.\'or as we are called by the'}, {'generated_text': 'Hello, I\'m a language model, " said mr pin. " an\'the ones with the funny hats don\'t. 
" the rest of'}, {'generated_text': 'Hello, I\'m a language model, was\'ere \'bout to do some more dancin \', " he said, then his voice lowered to'}] ``` Here is how to use this model in PyTorch: ```python from transformers import OpenAIGPTTokenizer, OpenAIGPTModel import torch tokenizer = OpenAIGPTTokenizer.from_pretrained("openai-gpt") model = OpenAIGPTModel.from_pretrained("openai-gpt") inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") outputs = model(**inputs) last_hidden_states = outputs.last_hidden_state ``` and in TensorFlow: ```python from transformers import OpenAIGPTTokenizer, TFOpenAIGPTModel tokenizer = OpenAIGPTTokenizer.from_pretrained("openai-gpt") model = TFOpenAIGPTModel.from_pretrained("openai-gpt") inputs = tokenizer("Hello, my dog is cute", return_tensors="tf") outputs = model(inputs) last_hidden_states = outputs.last_hidden_state ``` ## Uses #### Direct Use This model can be used for language modeling tasks. #### Downstream Use Potential downstream uses of this model include tasks that leverage language models. In the [associated paper](https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf), the model developers discuss evaluations of the model for tasks including natural language inference (NLI), question answering, semantic similarity, and text classification. #### Misuse and Out-of-scope Use The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model. ## Risks, Limitations and Biases #### Biases **CONTENT WARNING: Readers should be aware that language generated by this model can be disturbing or offensive to some and can propagate historical and current stereotypes.** Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by this model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example: ```python >>> from transformers import pipeline, set_seed >>> generator = pipeline('text-generation', model='openai-gpt') >>> set_seed(42) >>> generator("The man worked as a", max_length=10, num_return_sequences=5) [{'generated_text': 'The man worked as a teacher for the college he'}, {'generated_text': 'The man worked as a janitor at the club.'}, {'generated_text': 'The man worked as a bodyguard in america. the'}, {'generated_text': 'The man worked as a clerk for one of the'}, {'generated_text': 'The man worked as a nurse, but there was'}] >>> set_seed(42) >>> generator("The woman worked as a", max_length=10, num_return_sequences=5) [{'generated_text': 'The woman worked as a medical intern but is a'}, {'generated_text': 'The woman worked as a midwife, i know that'}, {'generated_text': 'The woman worked as a prostitute in a sex club'}, {'generated_text': 'The woman worked as a secretary for one of the'}, {'generated_text': 'The woman worked as a nurse, but she had'}] ``` This bias may also affect fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
#### Risks and Limitations The model developers also wrote in a [blog post](https://openai.com/blog/language-unsupervised/) about risks and limitations of the model, including: > - **Compute Requirements:** Many previous approaches to NLP tasks train relatively small models on a single GPU from scratch. Our approach requires an expensive pre-training step - 1 month on 8 GPUs. Luckily, this only has to be done once and we’re releasing our model so others can avoid it. It is also a large model (in comparison to prior work) and consequently uses more compute and memory — we used a 37-layer (12 block) Transformer architecture, and we train on sequences of up to 512 tokens. Most experiments were conducted on 4 and 8 GPU systems. The model does fine-tune to new tasks very quickly which helps mitigate the additional resource requirements. > - **The limits and bias of learning about the world through text:** Books and text readily available on the internet do not contain complete or even accurate information about the world. Recent work ([Lucy and Gauthier, 2017](https://arxiv.org/abs/1705.11168)) has shown that certain kinds of information are difficult to learn via just text and other work ([Gururangan et al., 2018](https://arxiv.org/abs/1803.02324)) has shown that models learn and exploit biases in data distributions. > - **Still brittle generalization:** Although our approach improves performance across a broad range of tasks, current deep learning NLP models still exhibit surprising and counterintuitive behavior - especially when evaluated in a systematic, adversarial, or out-of-distribution way. Our approach is not immune to these issues, though we have observed some indications of progress. Our approach shows improved lexical robustness over previous purely neural approaches to textual entailment. On the dataset introduced in Glockner et al. (2018) our model achieves 83.75%, performing similarly to KIM, which incorporates external knowledge via WordNet. ## Training #### Training Data The model developers [write](https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf): > We use the BooksCorpus dataset ([Zhu et al., 2015](https://www.cv-foundation.org/openaccess/content_iccv_2015/papers/Zhu_Aligning_Books_and_ICCV_2015_paper.pdf)) for training the language model. It contains over 7,000 unique unpublished books from a variety of genres including Adventure, Fantasy, and Romance. Crucially, it contains long stretches of contiguous text, which allows the generative model to learn to condition on long-range information. #### Training Procedure The model developers [write](https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf): > Our model largely follows the original transformer work [62]. We trained a 12-layer decoder-only transformer with masked self-attention heads (768 dimensional states and 12 attention heads). For the position-wise feed-forward networks, we used 3072 dimensional inner states. We used the Adam optimization scheme [27] with a max learning rate of 2.5e-4. The learning rate was increased linearly from zero over the first 2000 updates and annealed to 0 using a cosine schedule. We train for 100 epochs on minibatches of 64 randomly sampled, contiguous sequences of 512 tokens. Since layernorm [2] is used extensively throughout the model, a simple weight initialization of N (0, 0.02) was sufficient. 
We used a bytepair encoding (BPE) vocabulary with 40,000 merges [53] and residual, embedding, and attention dropouts with a rate of 0.1 for regularization. We also employed a modified version of L2 regularization proposed in [37], with w = 0.01 on all non bias or gain weights. For the activation function, we used the Gaussian Error Linear Unit (GELU) [18]. We used learned position embeddings instead of the sinusoidal version proposed in the original work. We use the ftfy library to clean the raw text in BooksCorpus, standardize some punctuation and whitespace, and use the spaCy tokenizer. See the paper for further details and links to citations. ## Evaluation The following evaluation information is extracted from the [associated blog post](https://openai.com/blog/language-unsupervised/). See the [associated paper](https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf) for further details. #### Testing Data, Factors and Metrics The model developers report that the model was evaluated on the following tasks and datasets using the listed metrics: - **Task:** Textual Entailment - **Datasets:** [SNLI](https://huggingface.co/datasets/snli), [MNLI Matched](https://huggingface.co/datasets/glue), [MNLI Mismatched](https://huggingface.co/datasets/glue), [SciTail](https://huggingface.co/datasets/scitail), [QNLI](https://huggingface.co/datasets/glue), [RTE](https://huggingface.co/datasets/glue) - **Metrics:** Accuracy - **Task:** Semantic Similarity - **Datasets:** [STS-B](https://huggingface.co/datasets/glue), [QQP](https://huggingface.co/datasets/glue), [MRPC](https://huggingface.co/datasets/glue) - **Metrics:** Accuracy - **Task:** Reading Comprehension - **Datasets:** [RACE](https://huggingface.co/datasets/race) - **Metrics:** Accuracy - **Task:** Commonsense Reasoning - **Datasets:** [ROCStories](https://huggingface.co/datasets/story_cloze), [COPA](https://huggingface.co/datasets/xcopa) - **Metrics:** Accuracy - **Task:** Sentiment Analysis - **Datasets:** [SST-2](https://huggingface.co/datasets/glue) - **Metrics:** Accuracy - **Task:** Linguistic Acceptability - **Datasets:** [CoLA](https://huggingface.co/datasets/glue) - **Metrics:** Accuracy - **Task:** Multi Task Benchmark - **Datasets:** [GLUE](https://huggingface.co/datasets/glue) - **Metrics:** Accuracy #### Results The model achieves the following results after task-specific fine-tuning (as reported in the associated paper): | Task | TE | TE | TE |TE | TE | TE | SS | SS | SS | RC | CR | CR | SA | LA | MTB | |:--------:|:--:|:----------:|:-------------:|:-----:|:----:|:---:|:---:|:---:|:--:|:----:|:--------:|:----:|:----:|:----:|:----:| | Dataset |SNLI|MNLI Matched|MNLI Mismatched|SciTail| QNLI | RTE |STS-B| QQP |MRPC|RACE |ROCStories|COPA | SST-2| CoLA | GLUE | | |89.9| 82.1 | 81.4 |88.3 | 88.1 | 56.0|82.0 | 70.3|82.3|59.0 | 86.5 | 78.6 | 91.3 | 45.4 | 72.8 | ## Environmental Impact The model developers [report that](https://openai.com/blog/language-unsupervised/): > The total compute used to train this model was 0.96 petaflop days (pfs-days). > 8 P600 GPU's * 30 days * 12 TFLOPS/GPU * 0.33 utilization = .96 pfs-days Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** 8 P600 GPUs - **Hours used:** 720 hours (30 days) - **Cloud Provider:** Unknown - **Compute Region:** Unknown - **Carbon Emitted:** Unknown ## Technical Specifications See the [associated paper](https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf) for details on the modeling architecture, objective, compute infrastructure, and training details. ## Citation Information ```bibtex @article{radford2018improving, title={Improving language understanding by generative pre-training}, author={Radford, Alec and Narasimhan, Karthik and Salimans, Tim and Sutskever, Ilya and others}, year={2018}, publisher={OpenAI} } ``` APA: *Radford, A., Narasimhan, K., Salimans, T., & Sutskever, I. (2018). Improving language understanding by generative pre-training.* ## Model Card Authors This model card was written by the Hugging Face team.
facebook/convnext-large-224
facebook
"2023-06-13T19:39:50Z"
42,698
25
transformers
[ "transformers", "pytorch", "tf", "convnext", "image-classification", "vision", "dataset:imagenet-1k", "arxiv:2201.03545", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2022-03-02T23:29:05Z"
--- license: apache-2.0 tags: - vision - image-classification datasets: - imagenet-1k widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace --- # ConvNeXT (large-sized model) ConvNeXT model trained on ImageNet-1k at resolution 224x224. It was introduced in the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Liu et al. and first released in [this repository](https://github.com/facebookresearch/ConvNeXt). Disclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and "modernized" its design by taking the Swin Transformer as inspiration. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/convnext_architecture.png) ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=convnext) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import ConvNextImageProcessor, ConvNextForImageClassification import torch from datasets import load_dataset dataset = load_dataset("huggingface/cats-image") image = dataset["test"]["image"][0] processor = ConvNextImageProcessor.from_pretrained("facebook/convnext-large-224") model = ConvNextForImageClassification.from_pretrained("facebook/convnext-large-224") inputs = processor(image, return_tensors="pt") with torch.no_grad(): logits = model(**inputs).logits # model predicts one of the 1000 ImageNet classes predicted_label = logits.argmax(-1).item() print(model.config.id2label[predicted_label]), ``` For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/convnext). ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2201-03545, author = {Zhuang Liu and Hanzi Mao and Chao{-}Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie}, title = {A ConvNet for the 2020s}, journal = {CoRR}, volume = {abs/2201.03545}, year = {2022}, url = {https://arxiv.org/abs/2201.03545}, eprinttype = {arXiv}, eprint = {2201.03545}, timestamp = {Thu, 20 Jan 2022 14:21:35 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-2201-03545.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
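As a complement to the classification example above, the same checkpoint can also serve as a generic image feature extractor. The sketch below is an editorial addition and assumes a recent transformers version in which the task-agnostic `ConvNextModel` class exposes a `pooler_output`:

```python
import torch
from transformers import ConvNextImageProcessor, ConvNextModel
from datasets import load_dataset

dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]

processor = ConvNextImageProcessor.from_pretrained("facebook/convnext-large-224")
model = ConvNextModel.from_pretrained("facebook/convnext-large-224")

inputs = processor(image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Pooled (layer-normalized) representation of the final stage, usable as an image embedding
# for retrieval, clustering, or linear probing.
embedding = outputs.pooler_output
print(embedding.shape)
```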
OrdalieTech/Solon-embeddings-large-0.1
OrdalieTech
"2024-03-26T15:53:50Z"
42,670
43
transformers
[ "transformers", "safetensors", "xlm-roberta", "feature-extraction", "mteb", "fr", "license:mit", "model-index", "endpoints_compatible", "region:us" ]
feature-extraction
"2023-12-09T09:42:17Z"
--- tags: - mteb model-index: - name: Solon-embeddings-large-0.1 results: - task: type: sentence-similarity name: Passage Retrieval dataset: type: unicamp-dl/mmarco name: mMARCO-fr config: french split: validation metrics: - type: recall_at_500 name: Recall@500 value: 92.7 - type: recall_at_100 name: Recall@100 value: 82.7 - type: recall_at_10 name: Recall@10 value: 55.5 - type: map_at_10 name: MAP@10 value: 29.4 - type: ndcg_at_10 name: nDCG@10 value: 35.8 - type: mrr_at_10 name: MRR@10 value: 29.9 - task: type: Clustering dataset: type: lyon-nlp/alloprof name: MTEB AlloProfClusteringP2P config: default split: test revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b metrics: - type: v_measure value: 64.16942168287153 - task: type: Clustering dataset: type: lyon-nlp/alloprof name: MTEB AlloProfClusteringS2S config: default split: test revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b metrics: - type: v_measure value: 38.17076313383054 - task: type: Reranking dataset: type: lyon-nlp/mteb-fr-reranking-alloprof-s2p name: MTEB AlloprofReranking config: default split: test revision: 666fdacebe0291776e86f29345663dfaf80a0db9 metrics: - type: map value: 64.8770878097632 - type: mrr value: 66.39132423169396 - task: type: Retrieval dataset: type: lyon-nlp/alloprof name: MTEB AlloprofRetrieval config: default split: test revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b metrics: - type: map_at_1 value: 29.62 - type: map_at_10 value: 40.963 - type: map_at_100 value: 41.894 - type: map_at_1000 value: 41.939 - type: map_at_3 value: 37.708999999999996 - type: map_at_5 value: 39.696999999999996 - type: mrr_at_1 value: 29.62 - type: mrr_at_10 value: 40.963 - type: mrr_at_100 value: 41.894 - type: mrr_at_1000 value: 41.939 - type: mrr_at_3 value: 37.708999999999996 - type: mrr_at_5 value: 39.696999999999996 - type: ndcg_at_1 value: 29.62 - type: ndcg_at_10 value: 46.942 - type: ndcg_at_100 value: 51.629999999999995 - type: ndcg_at_1000 value: 52.927 - type: ndcg_at_3 value: 40.333999999999996 - type: ndcg_at_5 value: 43.922 - type: precision_at_1 value: 29.62 - type: precision_at_10 value: 6.589 - type: precision_at_100 value: 0.882 - type: precision_at_1000 value: 0.099 - type: precision_at_3 value: 15.976 - type: precision_at_5 value: 11.33 - type: recall_at_1 value: 29.62 - type: recall_at_10 value: 65.889 - type: recall_at_100 value: 88.212 - type: recall_at_1000 value: 98.575 - type: recall_at_3 value: 47.927 - type: recall_at_5 value: 56.64900000000001 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (fr) config: fr split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 42.077999999999996 - type: f1 value: 40.64511241732637 - task: type: Retrieval dataset: type: maastrichtlawtech/bsard name: MTEB BSARDRetrieval config: default split: test revision: 5effa1b9b5fa3b0f9e12523e6e43e5f86a6e6d59 metrics: - type: map_at_1 value: 0.901 - type: map_at_10 value: 1.524 - type: map_at_100 value: 1.833 - type: map_at_1000 value: 1.916 - type: map_at_3 value: 1.276 - type: map_at_5 value: 1.276 - type: mrr_at_1 value: 0.901 - type: mrr_at_10 value: 1.524 - type: mrr_at_100 value: 1.833 - type: mrr_at_1000 value: 1.916 - type: mrr_at_3 value: 1.276 - type: mrr_at_5 value: 1.276 - type: ndcg_at_1 value: 0.901 - type: ndcg_at_10 value: 2.085 - type: ndcg_at_100 value: 3.805 - type: ndcg_at_1000 value: 6.704000000000001 - type: ndcg_at_3 value: 1.41 - type: ndcg_at_5 value: 1.41 - type: precision_at_1 value: 0.901 - 
type: precision_at_10 value: 0.40499999999999997 - type: precision_at_100 value: 0.126 - type: precision_at_1000 value: 0.037 - type: precision_at_3 value: 0.601 - type: precision_at_5 value: 0.36 - type: recall_at_1 value: 0.901 - type: recall_at_10 value: 4.054 - type: recall_at_100 value: 12.613 - type: recall_at_1000 value: 36.937 - type: recall_at_3 value: 1.802 - type: recall_at_5 value: 1.802 - task: type: BitextMining dataset: type: rbawden/DiaBLa name: MTEB DiaBLaBitextMining (fr-en) config: fr-en split: test revision: 5345895c56a601afe1a98519ce3199be60a27dba metrics: - type: accuracy value: 88.90048712595686 - type: f1 value: 86.94952864886115 - type: precision value: 86.20344379175826 - type: recall value: 88.90048712595686 - task: type: Clustering dataset: type: lyon-nlp/clustering-hal-s2s name: MTEB HALClusteringS2S config: default split: test revision: e06ebbbb123f8144bef1a5d18796f3dec9ae2915 metrics: - type: v_measure value: 24.087988843991155 - task: type: Clustering dataset: type: mlsum name: MTEB MLSUMClusteringP2P config: default split: test revision: b5d54f8f3b61ae17845046286940f03c6bc79bc7 metrics: - type: v_measure value: 43.79603865728535 - task: type: Clustering dataset: type: mlsum name: MTEB MLSUMClusteringS2S config: default split: test revision: b5d54f8f3b61ae17845046286940f03c6bc79bc7 metrics: - type: v_measure value: 37.746550373003 - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (fr) config: fr split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 89.26088318196052 - type: f1 value: 88.95811185929033 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (fr) config: fr split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 68.55308487316003 - type: f1 value: 48.2936682439785 - task: type: Classification dataset: type: masakhane/masakhanews name: MTEB MasakhaNEWSClassification (fra) config: fra split: test revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60 metrics: - type: accuracy value: 81.51658767772511 - type: f1 value: 77.695234448912 - task: type: Clustering dataset: type: masakhane/masakhanews name: MTEB MasakhaNEWSClusteringP2P (fra) config: fra split: test revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60 metrics: - type: v_measure value: 40.80377094681114 - task: type: Clustering dataset: type: masakhane/masakhanews name: MTEB MasakhaNEWSClusteringS2S (fra) config: fra split: test revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60 metrics: - type: v_measure value: 28.79703837416241 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (fr) config: fr split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 67.40080699394755 - type: f1 value: 65.60793135686376 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (fr) config: fr split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 71.29455279085406 - type: f1 value: 70.80876673828983 - task: type: Retrieval dataset: type: jinaai/mintakaqa name: MTEB MintakaRetrieval (fr) config: fr split: test revision: efa78cc2f74bbcd21eff2261f9e13aebe40b814e metrics: - type: map_at_1 value: 16.625999999999998 - type: map_at_10 value: 25.224999999999998 - type: map_at_100 value: 26.291999999999998 - type: map_at_1000 value: 26.395000000000003 - type: 
map_at_3 value: 22.378999999999998 - type: map_at_5 value: 24.009 - type: mrr_at_1 value: 16.625999999999998 - type: mrr_at_10 value: 25.224999999999998 - type: mrr_at_100 value: 26.291999999999998 - type: mrr_at_1000 value: 26.395000000000003 - type: mrr_at_3 value: 22.378999999999998 - type: mrr_at_5 value: 24.009 - type: ndcg_at_1 value: 16.625999999999998 - type: ndcg_at_10 value: 30.074 - type: ndcg_at_100 value: 35.683 - type: ndcg_at_1000 value: 38.714999999999996 - type: ndcg_at_3 value: 24.188000000000002 - type: ndcg_at_5 value: 27.124 - type: precision_at_1 value: 16.625999999999998 - type: precision_at_10 value: 4.566 - type: precision_at_100 value: 0.729 - type: precision_at_1000 value: 0.097 - type: precision_at_3 value: 9.801 - type: precision_at_5 value: 7.305000000000001 - type: recall_at_1 value: 16.625999999999998 - type: recall_at_10 value: 45.659 - type: recall_at_100 value: 72.85000000000001 - type: recall_at_1000 value: 97.42 - type: recall_at_3 value: 29.402 - type: recall_at_5 value: 36.527 - task: type: PairClassification dataset: type: GEM/opusparcus name: MTEB OpusparcusPC (fr) config: fr split: test revision: 9e9b1f8ef51616073f47f306f7f47dd91663f86a metrics: - type: cos_sim_accuracy value: 83.58310626702998 - type: cos_sim_ap value: 94.01979957812989 - type: cos_sim_f1 value: 88.70135958743555 - type: cos_sim_precision value: 84.01420959147424 - type: cos_sim_recall value: 93.94240317775571 - type: dot_accuracy value: 83.58310626702998 - type: dot_ap value: 94.01979957812989 - type: dot_f1 value: 88.70135958743555 - type: dot_precision value: 84.01420959147424 - type: dot_recall value: 93.94240317775571 - type: euclidean_accuracy value: 83.58310626702998 - type: euclidean_ap value: 94.01979957812989 - type: euclidean_f1 value: 88.70135958743555 - type: euclidean_precision value: 84.01420959147424 - type: euclidean_recall value: 93.94240317775571 - type: manhattan_accuracy value: 83.58310626702998 - type: manhattan_ap value: 93.99936024003892 - type: manhattan_f1 value: 88.6924150767799 - type: manhattan_precision value: 83.45008756567425 - type: manhattan_recall value: 94.63753723932473 - type: max_accuracy value: 83.58310626702998 - type: max_ap value: 94.01979957812989 - type: max_f1 value: 88.70135958743555 - task: type: PairClassification dataset: type: paws-x name: MTEB PawsX (fr) config: fr split: test revision: 8a04d940a42cd40658986fdd8e3da561533a3646 metrics: - type: cos_sim_accuracy value: 60.6 - type: cos_sim_ap value: 60.18915797975459 - type: cos_sim_f1 value: 62.491349480968864 - type: cos_sim_precision value: 45.44539506794162 - type: cos_sim_recall value: 100 - type: dot_accuracy value: 60.6 - type: dot_ap value: 60.091135216056024 - type: dot_f1 value: 62.491349480968864 - type: dot_precision value: 45.44539506794162 - type: dot_recall value: 100 - type: euclidean_accuracy value: 60.6 - type: euclidean_ap value: 60.18915797975459 - type: euclidean_f1 value: 62.491349480968864 - type: euclidean_precision value: 45.44539506794162 - type: euclidean_recall value: 100 - type: manhattan_accuracy value: 60.650000000000006 - type: manhattan_ap value: 60.2082343915352 - type: manhattan_f1 value: 62.491349480968864 - type: manhattan_precision value: 45.44539506794162 - type: manhattan_recall value: 100 - type: max_accuracy value: 60.650000000000006 - type: max_ap value: 60.2082343915352 - type: max_f1 value: 62.491349480968864 - task: type: STS dataset: type: Lajavaness/SICK-fr name: MTEB SICKFr config: default split: test revision: 
e077ab4cf4774a1e36d86d593b150422fafd8e8a metrics: - type: cos_sim_pearson value: 79.77067200230256 - type: cos_sim_spearman value: 76.7445532523278 - type: euclidean_pearson value: 76.34017074673956 - type: euclidean_spearman value: 76.7453011027832 - type: manhattan_pearson value: 76.19578084197778 - type: manhattan_spearman value: 76.56293456459228 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (fr) config: fr split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 81.2564160237984 - type: cos_sim_spearman value: 83.30552085410882 - type: euclidean_pearson value: 82.00494560507786 - type: euclidean_spearman value: 83.30552085410882 - type: manhattan_pearson value: 81.93132229157803 - type: manhattan_spearman value: 83.04357992939353 - task: type: STS dataset: type: stsb_multi_mt name: MTEB STSBenchmarkMultilingualSTS (fr) config: fr split: test revision: 93d57ef91790589e3ce9c365164337a8a78b7632 metrics: - type: cos_sim_pearson value: 80.34931905288978 - type: cos_sim_spearman value: 79.99372771100049 - type: euclidean_pearson value: 78.37976845123443 - type: euclidean_spearman value: 79.99452356550658 - type: manhattan_pearson value: 78.24434042082316 - type: manhattan_spearman value: 79.87248340061164 - task: type: Summarization dataset: type: lyon-nlp/summarization-summeval-fr-p2p name: MTEB SummEvalFr config: default split: test revision: b385812de6a9577b6f4d0f88c6a6e35395a94054 metrics: - type: cos_sim_pearson value: 30.476001473421586 - type: cos_sim_spearman value: 29.687350195905456 - type: dot_pearson value: 30.476000875190685 - type: dot_spearman value: 29.662224660056562 - task: type: Reranking dataset: type: lyon-nlp/mteb-fr-reranking-syntec-s2p name: MTEB SyntecReranking config: default split: test revision: b205c5084a0934ce8af14338bf03feb19499c84d metrics: - type: map value: 88.28333333333333 - type: mrr value: 88.28333333333333 - task: type: Retrieval dataset: type: lyon-nlp/mteb-fr-retrieval-syntec-s2p name: MTEB SyntecRetrieval config: default split: test revision: 77f7e271bf4a92b24fce5119f3486b583ca016ff metrics: - type: map_at_1 value: 69 - type: map_at_10 value: 79.906 - type: map_at_100 value: 79.982 - type: map_at_1000 value: 79.982 - type: map_at_3 value: 77.667 - type: map_at_5 value: 79.51700000000001 - type: mrr_at_1 value: 69 - type: mrr_at_10 value: 79.906 - type: mrr_at_100 value: 79.982 - type: mrr_at_1000 value: 79.982 - type: mrr_at_3 value: 77.667 - type: mrr_at_5 value: 79.51700000000001 - type: ndcg_at_1 value: 69 - type: ndcg_at_10 value: 84.60499999999999 - type: ndcg_at_100 value: 84.868 - type: ndcg_at_1000 value: 84.868 - type: ndcg_at_3 value: 80.333 - type: ndcg_at_5 value: 83.647 - type: precision_at_1 value: 69 - type: precision_at_10 value: 9.9 - type: precision_at_100 value: 1 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 29.333 - type: precision_at_5 value: 19.2 - type: recall_at_1 value: 69 - type: recall_at_10 value: 99 - type: recall_at_100 value: 100 - type: recall_at_1000 value: 100 - type: recall_at_3 value: 88 - type: recall_at_5 value: 96 - task: type: Retrieval dataset: type: jinaai/xpqa name: MTEB XPQARetrieval (fr) config: fr split: test revision: c99d599f0a6ab9b85b065da6f9d94f9cf731679f metrics: - type: map_at_1 value: 42.027 - type: map_at_10 value: 64.331 - type: map_at_100 value: 65.657 - type: map_at_1000 value: 65.7 - type: map_at_3 value: 57.967999999999996 - type: map_at_5 value: 62.33800000000001 - type: mrr_at_1 value: 
65.688 - type: mrr_at_10 value: 72.263 - type: mrr_at_100 value: 72.679 - type: mrr_at_1000 value: 72.69099999999999 - type: mrr_at_3 value: 70.405 - type: mrr_at_5 value: 71.587 - type: ndcg_at_1 value: 65.688 - type: ndcg_at_10 value: 70.221 - type: ndcg_at_100 value: 74.457 - type: ndcg_at_1000 value: 75.178 - type: ndcg_at_3 value: 65.423 - type: ndcg_at_5 value: 67.05499999999999 - type: precision_at_1 value: 65.688 - type: precision_at_10 value: 16.208 - type: precision_at_100 value: 1.975 - type: precision_at_1000 value: 0.207 - type: precision_at_3 value: 39.831 - type: precision_at_5 value: 28.652 - type: recall_at_1 value: 42.027 - type: recall_at_10 value: 78.803 - type: recall_at_100 value: 95.051 - type: recall_at_1000 value: 99.75500000000001 - type: recall_at_3 value: 62.62799999999999 - type: recall_at_5 value: 70.975 license: mit language: - fr --- # Solon Embeddings — large 0.1 SOTA Open source french embedding model. **Instructions :** Add "query : " before the *query* to retrieve to increase performance of retrieval. No instructions needed for *passages*. | Model | Mean Score | | --- | --- | | **OrdalieTech/Solon-embeddings-large-0.1** | 0.7490 | | cohere/embed-multilingual-v3 | 0.7402 | | **OrdalieTech/Solon-embeddings-base-0.1** | 0.7306 | | openai/ada-002 | 0.7290 | | cohere/embed-multilingual-light-v3 | 0.6945 | | antoinelouis/biencoder-camembert-base-mmarcoFR | 0.6826 | | dangvantuan/sentence-camembert-large | 0.6756 | | voyage/voyage-01 | 0.6753 | | intfloat/multilingual-e5-large | 0.6660 | | intfloat/multilingual-e5-base | 0.6597 | | Sbert/paraphrase-multilingual-mpnet-base-v2 | 0.5975 | | dangvantuan/sentence-camembert-base | 0.5456 | | EuropeanParliament/eubert_embedding_v1 | 0.5063 | These results have been obtained through 9 french benchmarks on a variety of text similarity tasks (classification, reranking, STS) : - AmazonReviewsClassification (MTEB) - MassiveIntentClassification (MTEB) - MassiveScenarioClassification (MTEB) - MTOPDomainClassification (MTEB) - MTOPIntentClassification (MTEB) - STS22 (MTEB) - MiraclFRRerank (Miracl) - OrdalieFRSTS (Ordalie) - OrdalieFRReranking (Ordalie) We created OrdalieFRSTS and OrdalieFRReranking to enhance the benchmarking capabilities of French STS and reranking assessments. (evaluation script available here : github.com/OrdalieTech/mteb)
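The card above states the prefix convention but does not include a code snippet; the following is a minimal retrieval sketch with plain transformers. The mean-pooling step is an assumption made for illustration (check the repository's pooling configuration, for example a sentence-transformers config if one is provided, for the pooling actually used):

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("OrdalieTech/Solon-embeddings-large-0.1")
model = AutoModel.from_pretrained("OrdalieTech/Solon-embeddings-large-0.1")
model.eval()

def embed(texts):
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state           # (batch, seq_len, dim)
    mask = batch["attention_mask"].unsqueeze(-1).float()    # ignore padding positions
    pooled = (hidden * mask).sum(dim=1) / mask.sum(dim=1)   # mean pooling (assumed)
    return F.normalize(pooled, dim=-1)

# Queries get the "query : " prefix, passages are embedded as-is.
query = embed(["query : Quelle est la capitale de la France ?"])
passages = embed(["Paris est la capitale de la France.", "Le Mont Blanc culmine à 4808 mètres."])
print(query @ passages.T)  # cosine similarities; the first passage should score higher
```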
internlm/internlm2-chat-20b
internlm
"2024-07-02T12:25:29Z"
42,437
79
transformers
[ "transformers", "safetensors", "internlm2", "text-generation", "conversational", "custom_code", "arxiv:2403.17297", "license:other", "autotrain_compatible", "region:us" ]
text-generation
"2024-01-10T06:40:04Z"
--- pipeline_tag: text-generation license: other --- # InternLM <div align="center"> <img src="https://github.com/InternLM/InternLM/assets/22529082/b9788105-8892-4398-8b47-b513a292378e" width="200"/> <div>&nbsp;</div> <div align="center"> <b><font size="5">InternLM</font></b> <sup> <a href="https://internlm.intern-ai.org.cn/"> <i><font size="4">HOT</font></i> </a> </sup> <div>&nbsp;</div> </div> [![evaluation](https://github.com/InternLM/InternLM/assets/22529082/f80a2a58-5ddf-471a-8da4-32ab65c8fd3b)](https://github.com/internLM/OpenCompass/) [💻Github Repo](https://github.com/InternLM/InternLM) • [🤔Reporting Issues](https://github.com/InternLM/InternLM/issues/new) • [📜Technical Report](https://arxiv.org/abs/2403.17297) </div> ## Introduction InternLM2 has open-sourced a 20 billion parameter base model and a chat model tailored for practical scenarios. The model has the following characteristics: - **200K Context window**: Nearly perfect at finding needles in the haystack with 200K-long context, with leading performance on long-context tasks like LongBench and L-Eval. Try it with [LMDeploy](https://github.com/InternLM/lmdeploy) for 200K-context inference. - **Outstanding comprehensive performance**: Significantly better than the last generation in all dimensions, especially in reasoning, math, code, chat experience, instruction following, and creative writing, with leading performance among open-source models in similar sizes. In some evaluations, InternLM2-Chat-20B may match or even surpass ChatGPT (GPT-3.5). - **Code interpreter & Data analysis**: With code interpreter, InternLM2-Chat-20B obtains compatible performance with GPT-4 on GSM8K and MATH. InternLM2-Chat also provides data analysis capability. - **Stronger tool use**: Based on better tool utilization-related capabilities in instruction following, tool selection and reflection, InternLM2 can support more kinds of agents and multi-step tool calling for complex tasks. See [examples](https://github.com/InternLM/lagent). ## InternLM2-Chat-20B ### Performance Evaluation We conducted a comprehensive evaluation of InternLM using the open-source evaluation tool [OpenCompass](https://github.com/internLM/OpenCompass/). The evaluation covered five dimensions of capabilities: disciplinary competence, language competence, knowledge competence, inference competence, and comprehension competence. Here are some of the evaluation results, and you can visit the [OpenCompass leaderboard](https://rank.opencompass.org.cn/leaderboard-llm) for more evaluation results. | Dataset\Models | InternLM2-7B | InternLM2-Chat-7B | InternLM2-20B | InternLM2-Chat-20B | ChatGPT | GPT-4 | | --- | --- | --- | --- | --- | --- | --- | | MMLU | 65.8 | 63.7 | 67.7 | 66.5 | 69.1 | 83.0 | | AGIEval | 49.9 | 47.2 | 53.0 | 50.3 | 39.9 | 55.1 | | BBH | 65.0 | 61.2 | 72.1 | 68.3 | 70.1 | 86.7 | | GSM8K | 70.8 | 70.7 | 76.1 | 79.6 | 78.2 | 91.4 | | MATH | 20.2 | 23.0 | 25.5 | 31.9 | 28.0 | 45.8 | | HumanEval | 43.3 | 59.8 | 48.8 | 67.1 | 73.2 | 74.4 | | MBPP(Sanitized) | 51.8 | 51.4 | 63.0 | 65.8 | 78.9 | 79.0 | - The evaluation results were obtained from [OpenCompass](https://github.com/internLM/OpenCompass/) (some data marked with *, which means come from the original papers), and evaluation configuration can be found in the configuration files provided by [OpenCompass](https://github.com/internLM/OpenCompass/). 
- The evaluation data may have numerical differences due to the version iteration of [OpenCompass](https://github.com/internLM/OpenCompass/), so please refer to the latest evaluation results of [OpenCompass](https://github.com/internLM/OpenCompass/). **Limitations:** Although we have made efforts to ensure the safety of the model during the training process and to encourage the model to generate text that complies with ethical and legal requirements, the model may still produce unexpected outputs due to its size and probabilistic generation paradigm. For example, the generated responses may contain biases, discrimination, or other harmful content. Please do not propagate such content. We are not responsible for any consequences resulting from the dissemination of harmful information. ### Import from Transformers To load the InternLM 20B Chat model using Transformers, use the following code: ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("internlm/internlm2-chat-20b", trust_remote_code=True) # Set `torch_dtype=torch.float16` to load model in float16, otherwise it will be loaded as float32 and cause OOM Error. model = AutoModelForCausalLM.from_pretrained("internlm/internlm2-chat-20b", torch_dtype=torch.float16, trust_remote_code=True).cuda() model = model.eval() response, history = model.chat(tokenizer, "hello", history=[]) print(response) # Hello! How can I help you today? response, history = model.chat(tokenizer, "please provide three suggestions about time management", history=history) print(response) ``` The responses can be streamed using `stream_chat`: ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "internlm/internlm2-chat-20b" model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.float16, trust_remote_code=True).cuda() tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True) model = model.eval() length = 0 for response, history in model.stream_chat(tokenizer, "Hello", history=[]): print(response[length:], flush=True, end="") length = len(response) ``` ## Deployment ### LMDeploy LMDeploy is a toolkit for compressing, deploying, and serving LLM, developed by the MMRazor and MMDeploy teams. 
```bash pip install lmdeploy ``` You can run batch inference locally with the following python code: ```python import lmdeploy pipe = lmdeploy.pipeline("internlm/internlm2-chat-20b") response = pipe(["Hi, pls intro yourself", "Shanghai is"]) print(response) ``` Or you can launch an OpenAI compatible server with the following command: ```bash lmdeploy serve api_server internlm/internlm2-chat-20b --model-name internlm2-chat-20b --server-port 23333 ``` Then you can send a chat request to the server: ```bash curl http://localhost:23333/v1/chat/completions \ -H "Content-Type: application/json" \ -d '{ "model": "internlm2-chat-20b", "messages": [ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "Introduce deep learning to me."} ] }' ``` Find more details in the [LMDeploy documentation](https://lmdeploy.readthedocs.io/en/latest/) ### vLLM Launch OpenAI compatible server with `vLLM>=0.3.2`: ```bash pip install vllm ``` ```bash python -m vllm.entrypoints.openai.api_server --model internlm/internlm2-chat-20b --served-model-name internlm2-chat-20b --trust-remote-code ``` Then you can send a chat request to the server: ```bash curl http://localhost:8000/v1/chat/completions \ -H "Content-Type: application/json" \ -d '{ "model": "internlm2-chat-20b", "messages": [ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "Introduce deep learning to me."} ] }' ``` Find more details in the [vLLM documentation](https://docs.vllm.ai/en/latest/index.html) ## Open Source License The code is licensed under Apache-2.0, while model weights are fully open for academic research and also allow **free** commercial usage. To apply for a commercial license, please fill in the [application form (English)](https://wj.qq.com/s2/12727483/5dba/)/[申请表(中文)](https://wj.qq.com/s2/12725412/f7c1/). For other questions or collaborations, please contact <internlm@pjlab.org.cn>. 
## Citation ``` @misc{cai2024internlm2, title={InternLM2 Technical Report}, author={Zheng Cai and Maosong Cao and Haojiong Chen and Kai Chen and Keyu Chen and Xin Chen and Xun Chen and Zehui Chen and Zhi Chen and Pei Chu and Xiaoyi Dong and Haodong Duan and Qi Fan and Zhaoye Fei and Yang Gao and Jiaye Ge and Chenya Gu and Yuzhe Gu and Tao Gui and Aijia Guo and Qipeng Guo and Conghui He and Yingfan Hu and Ting Huang and Tao Jiang and Penglong Jiao and Zhenjiang Jin and Zhikai Lei and Jiaxing Li and Jingwen Li and Linyang Li and Shuaibin Li and Wei Li and Yining Li and Hongwei Liu and Jiangning Liu and Jiawei Hong and Kaiwen Liu and Kuikun Liu and Xiaoran Liu and Chengqi Lv and Haijun Lv and Kai Lv and Li Ma and Runyuan Ma and Zerun Ma and Wenchang Ning and Linke Ouyang and Jiantao Qiu and Yuan Qu and Fukai Shang and Yunfan Shao and Demin Song and Zifan Song and Zhihao Sui and Peng Sun and Yu Sun and Huanze Tang and Bin Wang and Guoteng Wang and Jiaqi Wang and Jiayu Wang and Rui Wang and Yudong Wang and Ziyi Wang and Xingjian Wei and Qizhen Weng and Fan Wu and Yingtong Xiong and Chao Xu and Ruiliang Xu and Hang Yan and Yirong Yan and Xiaogui Yang and Haochen Ye and Huaiyuan Ying and Jia Yu and Jing Yu and Yuhang Zang and Chuyu Zhang and Li Zhang and Pan Zhang and Peng Zhang and Ruijie Zhang and Shuo Zhang and Songyang Zhang and Wenjian Zhang and Wenwei Zhang and Xingcheng Zhang and Xinyue Zhang and Hui Zhao and Qian Zhao and Xiaomeng Zhao and Fengzhe Zhou and Zaida Zhou and Jingming Zhuo and Yicheng Zou and Xipeng Qiu and Yu Qiao and Dahua Lin}, year={2024}, eprint={2403.17297}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ## 简介 InternLM2 ,即书生·浦语大模型第二代,开源了面向实用场景的200亿参数基础模型与对话模型 (InternLM2-Chat-20B)。模型具有以下特点: - 有效支持20万字超长上下文:模型在20万字长输入中几乎完美地实现长文“大海捞针”,而且在 LongBench 和 L-Eval 等长文任务中的表现也达到开源模型中的领先水平。 可以通过 [LMDeploy](https://github.com/InternLM/lmdeploy) 尝试20万字超长上下文推理。 - 综合性能全面提升:各能力维度相比上一代模型全面进步,在推理、数学、代码、对话体验、指令遵循和创意写作等方面的能力提升尤为显著,综合性能达到同量级开源模型的领先水平,在重点能力评测上 InternLM2-Chat-20B 能比肩甚至超越 ChatGPT (GPT-3.5)。 - 代码解释器与数据分析:在配合代码解释器(code-interpreter)的条件下,InternLM2-Chat-20B 在 GSM8K 和 MATH 上可以达到和 GPT-4 相仿的水平。基于在数理和工具方面强大的基础能力,InternLM2-Chat 提供了实用的数据分析能力。 - 工具调用能力整体升级:基于更强和更具有泛化性的指令理解、工具筛选与结果反思等能力,新版模型可以更可靠地支持复杂智能体的搭建,支持对工具进行有效的多轮调用,完成较复杂的任务。可以查看更多[样例](https://github.com/InternLM/lagent)。 ## InternLM2-Chat-20B ### 性能评测 我们使用开源评测工具 [OpenCompass](https://github.com/internLM/OpenCompass/) 从学科综合能力、语言能力、知识能力、推理能力、理解能力五大能力维度对InternLM开展全面评测,部分评测结果如下表所示,欢迎访问[ OpenCompass 榜单 ](https://rank.opencompass.org.cn/leaderboard-llm)获取更多的评测结果。 | 评测集 | InternLM2-7B | InternLM2-Chat-7B | InternLM2-20B | InternLM2-Chat-20B | ChatGPT | GPT-4 | | --- | --- | --- | --- | --- | --- | --- | | MMLU | 65.8 | 63.7 | 67.7 | 66.5 | 69.1 | 83.0 | | AGIEval | 49.9 | 47.2 | 53.0 | 50.3 | 39.9 | 55.1 | | BBH | 65.0 | 61.2 | 72.1 | 68.3 | 70.1 | 86.7 | | GSM8K | 70.8 | 70.7 | 76.1 | 79.6 | 78.2 | 91.4 | | MATH | 20.2 | 23.0 | 25.5 | 31.9 | 28.0 | 45.8 | | HumanEval | 43.3 | 59.8 | 48.8 | 67.1 | 73.2 | 74.4 | | MBPP(Sanitized) | 51.8 | 51.4 | 63.0 | 65.8 | 78.9 | 79.0 | - 以上评测结果基于 [OpenCompass](https://github.com/internLM/OpenCompass/) 获得(部分数据标注`*`代表数据来自原始论文),具体测试细节可参见 [OpenCompass](https://github.com/internLM/OpenCompass/) 中提供的配置文件。 - 评测数据会因 [OpenCompass](https://github.com/internLM/OpenCompass/) 的版本迭代而存在数值差异,请以 [OpenCompass](https://github.com/internLM/OpenCompass/) 最新版的评测结果为主。 **局限性:** 尽管在训练过程中我们非常注重模型的安全性,尽力促使模型输出符合伦理和法律要求的文本,但受限于模型大小以及概率生成范式,模型可能会产生各种不符合预期的输出,例如回复内容包含偏见、歧视等有害内容,请勿传播这些内容。由于传播不良信息导致的任何后果,本项目不承担责任。 ### 通过 Transformers 
加载 通过以下的代码加载 InternLM2 20B Chat 模型 ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("internlm/internlm2-chat-20b", trust_remote_code=True) # `torch_dtype=torch.float16` 可以令模型以 float16 精度加载,否则 transformers 会将模型加载为 float32,导致显存不足 model = AutoModelForCausalLM.from_pretrained("internlm/internlm2-chat-20b", torch_dtype=torch.float16, trust_remote_code=True).cuda() model = model.eval() response, history = model.chat(tokenizer, "你好", history=[]) print(response) # 你好!有什么我可以帮助你的吗? response, history = model.chat(tokenizer, "请提供三个管理时间的建议。", history=history) print(response) ``` 如果想进行流式生成,则可以使用 `stream_chat` 接口: ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "internlm/internlm2-chat-20b" model = AutoModelForCausalLM.from_pretrained(model_path, torch_dype=torch.float16, trust_remote_code=True).cuda() tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True) model = model.eval() length = 0 for response, history in model.stream_chat(tokenizer, "你好", history=[]): print(response[length:], flush=True, end="") length = len(response) ``` ## 部署 ### LMDeploy LMDeploy 由 MMDeploy 和 MMRazor 团队联合开发,是涵盖了 LLM 任务的全套轻量化、部署和服务解决方案。 ```bash pip install lmdeploy ``` 你可以使用以下 python 代码进行本地批量推理: ```python import lmdeploy pipe = lmdeploy.pipeline("internlm/internlm2-chat-20b") response = pipe(["Hi, pls intro yourself", "Shanghai is"]) print(response) ``` 或者你可以使用以下命令启动兼容 OpenAI API 的服务: ```bash lmdeploy serve api_server internlm/internlm2-chat-20b --server-port 23333 ``` 然后你可以向服务端发起一个聊天请求: ```bash curl http://localhost:23333/v1/chat/completions \ -H "Content-Type: application/json" \ -d '{ "model": "internlm2-chat-20b", "messages": [ {"role": "system", "content": "你是个友善的AI助手。"}, {"role": "user", "content": "介绍一下深度学习。"} ] }' ``` 更多信息请查看 [LMDeploy 文档](https://lmdeploy.readthedocs.io/en/latest/) ### vLLM 使用`vLLM>=0.3.2`启动兼容 OpenAI API 的服务: ```bash pip install vllm ``` ```bash python -m vllm.entrypoints.openai.api_server --model internlm/internlm2-chat-20b --trust-remote-code ``` 然后你可以向服务端发起一个聊天请求: ```bash curl http://localhost:8000/v1/chat/completions \ -H "Content-Type: application/json" \ -d '{ "model": "internlm2-chat-20b", "messages": [ {"role": "system", "content": "你是个友善的AI助手。"}, {"role": "user", "content": "介绍一下深度学习。"} ] }' ``` 更多信息请查看 [vLLM 文档](https://docs.vllm.ai/en/latest/index.html) ## 开源许可证 本仓库的代码依照 Apache-2.0 协议开源。模型权重对学术研究完全开放,也可申请免费的商业使用授权([申请表](https://wj.qq.com/s2/12725412/f7c1/))。其他问题与合作请联系 <internlm@pjlab.org.cn>。 ## 引用 ``` @misc{cai2024internlm2, title={InternLM2 Technical Report}, author={Zheng Cai and Maosong Cao and Haojiong Chen and Kai Chen and Keyu Chen and Xin Chen and Xun Chen and Zehui Chen and Zhi Chen and Pei Chu and Xiaoyi Dong and Haodong Duan and Qi Fan and Zhaoye Fei and Yang Gao and Jiaye Ge and Chenya Gu and Yuzhe Gu and Tao Gui and Aijia Guo and Qipeng Guo and Conghui He and Yingfan Hu and Ting Huang and Tao Jiang and Penglong Jiao and Zhenjiang Jin and Zhikai Lei and Jiaxing Li and Jingwen Li and Linyang Li and Shuaibin Li and Wei Li and Yining Li and Hongwei Liu and Jiangning Liu and Jiawei Hong and Kaiwen Liu and Kuikun Liu and Xiaoran Liu and Chengqi Lv and Haijun Lv and Kai Lv and Li Ma and Runyuan Ma and Zerun Ma and Wenchang Ning and Linke Ouyang and Jiantao Qiu and Yuan Qu and Fukai Shang and Yunfan Shao and Demin Song and Zifan Song and Zhihao Sui and Peng Sun and Yu Sun and Huanze Tang and Bin Wang and Guoteng Wang and Jiaqi Wang and 
Jiayu Wang and Rui Wang and Yudong Wang and Ziyi Wang and Xingjian Wei and Qizhen Weng and Fan Wu and Yingtong Xiong and Chao Xu and Ruiliang Xu and Hang Yan and Yirong Yan and Xiaogui Yang and Haochen Ye and Huaiyuan Ying and Jia Yu and Jing Yu and Yuhang Zang and Chuyu Zhang and Li Zhang and Pan Zhang and Peng Zhang and Ruijie Zhang and Shuo Zhang and Songyang Zhang and Wenjian Zhang and Wenwei Zhang and Xingcheng Zhang and Xinyue Zhang and Hui Zhao and Qian Zhao and Xiaomeng Zhao and Fengzhe Zhou and Zaida Zhou and Jingming Zhuo and Yicheng Zou and Xipeng Qiu and Yu Qiao and Dahua Lin}, year={2024}, eprint={2403.17297}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
TinyLlama/TinyLlama_v1.1
TinyLlama
"2024-06-07T01:23:32Z"
42,407
25
transformers
[ "transformers", "pytorch", "llama", "text-generation", "en", "dataset:cerebras/SlimPajama-627B", "arxiv:2401.02385", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-03-09T09:39:41Z"
--- license: apache-2.0 datasets: - cerebras/SlimPajama-627B language: - en --- # TinyLlama-1.1B-v1.1 - **Codebase:** [github.com/jzhang38/TinyLlama](https://github.com/jzhang38/TinyLlama) - **Technical Report:** [arxiv.org/pdf/2401.02385](https://arxiv.org/pdf/2401.02385) <div align="center"> <img src="https://huggingface.co/PY007/TinyLlama-1.1B-intermediate-step-240k-503b/resolve/main/TinyLlama_logo.png" width="300"/> </div> We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged into many open-source projects built upon Llama. Besides, TinyLlama is compact with only 1.1B parameters. This compactness allows it to cater to a multitude of applications demanding a restricted computation and memory footprint. ## Overview In this project, rather than only training a single TinyLlama model, we first train TinyLlama on a corpus of 1.5 trillion tokens to obtain foundational language capabilities. Subsequently, we take this model and turn it into three different models by continual pre-training with three distinct data sampling strategies. For a visual representation of this process, please refer to the figure below. ![Overview](overview.png) ## Pretraining Due to these issues ([bug1](https://whimsical-aphid-86d.notion.site/Release-of-TinyLlama-1-5T-Checkpoints-Postponed-01b266998c1c47f78f5ae1520196d194?pvs=4), [bug2](https://whimsical-aphid-86d.notion.site/2023-12-18-Updates-from-TinyLlama-Team-7d30c01fff794da28ccc952f327c8d4f)), we retrained TinyLlama to provide a better model. We trained the model on 2T tokens and divided the pretraining into three stages: 1) basic pretraining, 2) continual pretraining with specific domain, and 3) cooldown. #### Basic pretraining In this initial phase, we trained the model on SlimPajama only, to develop its commonsense reasoning capabilities. The model was trained on 1.5T tokens during this basic pretraining period. Since we used a cluster with 4 A100-40G GPUs per node and only sharded model weights within a node, we could only set the batch size to approximately 1.8M this time. #### Continual pretraining with specific domain We incorporated three different kinds of corpora during this stage: SlimPajama (the same as in the first phase), Math&Code (starcoder and proof pile), and Chinese (Skypile). This approach allowed us to develop three variant models with specialized capabilities. During the first ~6B tokens of this stage, we linearly increased the sampling proportion of the domain-specific corpora (excluding SlimPajama, as it remained unchanged compared with stage 1). This sampling warmup strategy was designed to gradually adjust the distribution of the pretraining data, ensuring a more stable training process. After this warmup, we continued pretraining the model with a stable sampling strategy until reaching ~1.85T tokens. #### Cooldown Implementing a cooldown phase has become a crucial technique for achieving better model convergence at the end of pretraining. However, since we had already used a cosine learning rate schedule from the beginning, it is challenging to alter the learning rate for cooldown the way MiniCPM or DeepSeek do. Therefore, we cool down by adjusting the batch size: specifically, we increased the batch size from 1.8M to 7.2M while keeping the original cosine learning rate schedule during the cooldown stage. #### TinyLlama model family Following this extensive and detailed pretraining process,
we are now releasing three specialized versions of our model: 1. **TinyLlama_v1.1**: The standard version, used for general purposes. 2. **TinyLlama_v1.1_Math&Code**: Equipped with better ability in math and code. 3. **TinyLlama_v1.1_Chinese**: Good understanding of Chinese. ## Data Here we list our data distribution in each stage: ### TinyLlama_v1.1 | Corpus | Basic pretraining | Continual pretraining with specific domain | Cooldown | | ------------- | ----------------- | ------------------------------------------ | -------- | | Slimpajama | 100.0 | 100.0 | 100.0 | ### TinyLlama_v1.1_math_code | Corpus | Basic pretraining | Continual pretraining with specific domain | Cooldown | | ------------- | ----------------- | ------------------------------------------ | -------- | | Slimpajama | 100.0 | 75.0 | 75.0 | | starcoder | - | 15.0 | 15.0 | | proof_pile | - | 10.0 | 10.0 | ### TinyLlama_v1.1_chinese | Corpus | Basic pretraining | Continual pretraining with specific domain | Cooldown | | ------------- | ----------------- | ------------------------------------------ | -------- | | Slimpajama | 100.0 | 50.0 | 50.0 | | skypile | - | 50.0 | 50.0 | ### How to use You will need transformers>=4.31. Do check the [TinyLlama](https://github.com/jzhang38/TinyLlama) GitHub page for more information. ```python from transformers import AutoTokenizer import transformers import torch model = "TinyLlama/TinyLlama_v1.1" tokenizer = AutoTokenizer.from_pretrained(model) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) sequences = pipeline( 'The TinyLlama project aims to pretrain a 1.1B Llama model on 3 trillion tokens. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. The training has started on 2023-09-01.', do_sample=True, top_k=10, num_return_sequences=1, repetition_penalty=1.5, eos_token_id=tokenizer.eos_token_id, max_length=500, ) for seq in sequences: print(f"Result: {seq['generated_text']}") ``` ### Eval | Model | Pretrain Tokens | HellaSwag | Obqa | WinoGrande | ARC_c | ARC_e | boolq | piqa | avg | | ----------------------------------------- | --------------- | --------- | --------- | ---------- | --------- | --------- | ----- | --------- | --------- | | Pythia-1.0B | 300B | 47.16 | 31.40 | 53.43 | 27.05 | 48.99 | 60.83 | 69.21 | 48.30 | | TinyLlama-1.1B-intermediate-step-1431k-3T | 3T | 59.20 | 36.00 | 59.12 | 30.12 | 55.25 | 57.83 | 73.29 | 52.99 | | TinyLlama-1.1B-v1.1 | 2T | **61.47** | **36.80** | 59.43 | 32.68 | **55.47** | 55.99 | **73.56** | 53.63 | | TinyLlama-1.1B-v1_math_code | 2T | 60.80 | 36.40 | **60.22** | **33.87** | 55.20 | 57.09 | 72.69 | **53.75** | | TinyLlama-1.1B-v1.1_chinese | 2T | 58.23 | 35.20 | 59.27 | 31.40 | 55.35 | **61.41** | 73.01 | 53.41 |
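As an illustration of the sampling-warmup schedule described in the Pretraining section above, here is a small editorial sketch for the Math&Code variant. The final proportions come from the table above and the ~6B-token linear ramp follows the card's description, while the exact interpolation granularity is an assumption:

```python
# Target sampling shares after warmup, taken from the TinyLlama_v1.1_math_code table above.
FINAL_SHARES = {"starcoder": 0.15, "proof_pile": 0.10}
WARMUP_TOKENS = 6e9  # "~6B tokens" per the card

def sampling_shares(tokens_seen):
    # Domain corpora ramp linearly from 0 to their target share; SlimPajama takes the rest.
    ramp = min(tokens_seen / WARMUP_TOKENS, 1.0)
    shares = {name: ramp * share for name, share in FINAL_SHARES.items()}
    shares["slimpajama"] = 1.0 - sum(shares.values())
    return shares

for t in [0, 1.5e9, 3e9, 6e9, 10e9]:
    print(f"{t / 1e9:>4.1f}B tokens -> {sampling_shares(t)}")
```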
databricks/dolly-v2-3b
databricks
"2023-06-30T18:33:24Z"
42,270
284
transformers
[ "transformers", "pytorch", "gpt_neox", "text-generation", "en", "dataset:databricks/databricks-dolly-15k", "license:mit", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-04-13T05:20:15Z"
--- license: mit language: - en library_name: transformers inference: false datasets: - databricks/databricks-dolly-15k --- # dolly-v2-3b Model Card ## Summary Databricks' `dolly-v2-3b` is an instruction-following large language model trained on the Databricks machine learning platform and licensed for commercial use. Based on `pythia-2.8b`, Dolly is trained on ~15k instruction/response fine-tuning records [`databricks-dolly-15k`](https://github.com/databrickslabs/dolly/tree/master/data) generated by Databricks employees in capability domains from the InstructGPT paper, including brainstorming, classification, closed QA, generation, information extraction, open QA and summarization. `dolly-v2-3b` is not a state-of-the-art model, but does exhibit surprisingly high quality instruction following behavior not characteristic of the foundation model on which it is based. Dolly v2 is also available in these larger model sizes: * [dolly-v2-12b](https://huggingface.co/databricks/dolly-v2-12b), a 12 billion parameter model based on `pythia-12b` * [dolly-v2-7b](https://huggingface.co/databricks/dolly-v2-7b), a 6.9 billion parameter model based on `pythia-6.9b` Please refer to the [dolly GitHub repo](https://github.com/databrickslabs/dolly#getting-started-with-response-generation) for tips on running inference for various GPU configurations. **Owner**: Databricks, Inc. ## Model Overview `dolly-v2-3b` is a 2.8 billion parameter causal language model created by [Databricks](https://databricks.com/) that is derived from [EleutherAI's](https://www.eleuther.ai/) [Pythia-2.8b](https://huggingface.co/EleutherAI/pythia-2.8b) and fine-tuned on a [~15K record instruction corpus](https://github.com/databrickslabs/dolly/tree/master/data) generated by Databricks employees and released under a permissive license (CC-BY-SA). ## Usage To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers` and `accelerate` libraries installed. In a Databricks notebook you could run: ```python %pip install "accelerate>=0.16.0,<1" "transformers[torch]>=4.28.1,<5" "torch>=1.13.1,<2" ``` The instruction following pipeline can be loaded using the `pipeline` function as shown below. This loads a custom `InstructionTextGenerationPipeline` found in the model repo [here](https://huggingface.co/databricks/dolly-v2-3b/blob/main/instruct_pipeline.py), which is why `trust_remote_code=True` is required. Including `torch_dtype=torch.bfloat16` is generally recommended if this type is supported in order to reduce memory usage. It does not appear to impact output quality. It is also fine to remove it if there is sufficient memory.
```python import torch from transformers import pipeline generate_text = pipeline(model="databricks/dolly-v2-3b", torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto") ``` You can then use the pipeline to answer instructions: ```python res = generate_text("Explain to me the difference between nuclear fission and fusion.") print(res[0]["generated_text"]) ``` Alternatively, if you prefer not to use `trust_remote_code=True`, you can download [instruct_pipeline.py](https://huggingface.co/databricks/dolly-v2-3b/blob/main/instruct_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer: ```python import torch from instruct_pipeline import InstructionTextGenerationPipeline from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("databricks/dolly-v2-3b", padding_side="left") model = AutoModelForCausalLM.from_pretrained("databricks/dolly-v2-3b", device_map="auto", torch_dtype=torch.bfloat16) generate_text = InstructionTextGenerationPipeline(model=model, tokenizer=tokenizer) ``` ### LangChain Usage To use the pipeline with LangChain, you must set `return_full_text=True`, as LangChain expects the full text to be returned and the default for the pipeline is to only return the new text. ```python import torch from transformers import pipeline generate_text = pipeline(model="databricks/dolly-v2-3b", torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto", return_full_text=True) ``` You can create a prompt that either has only an instruction or has an instruction with context: ```python from langchain import PromptTemplate, LLMChain from langchain.llms import HuggingFacePipeline # template for an instruction with no input prompt = PromptTemplate( input_variables=["instruction"], template="{instruction}") # template for an instruction with input prompt_with_context = PromptTemplate( input_variables=["instruction", "context"], template="{instruction}\n\nInput:\n{context}") hf_pipeline = HuggingFacePipeline(pipeline=generate_text) llm_chain = LLMChain(llm=hf_pipeline, prompt=prompt) llm_context_chain = LLMChain(llm=hf_pipeline, prompt=prompt_with_context) ``` Example predicting using a simple instruction: ```python print(llm_chain.predict(instruction="Explain to me the difference between nuclear fission and fusion.").lstrip()) ``` Example predicting using an instruction with context: ```python context = """George Washington (February 22, 1732[b] - December 14, 1799) was an American military officer, statesman, and Founding Father who served as the first president of the United States from 1789 to 1797.""" print(llm_context_chain.predict(instruction="When was George Washington president?", context=context).lstrip()) ``` ## Known Limitations ### Performance Limitations **`dolly-v2-3b` is not a state-of-the-art generative language model** and, though quantitative benchmarking is ongoing, is not designed to perform competitively with more modern model architectures or models subject to larger pretraining corpuses. The Dolly model family is under active development, and so any list of shortcomings is unlikely to be exhaustive, but we include known limitations and misfires here as a means to document and share our preliminary findings with the community.
In particular, `dolly-v2-3b` struggles with: syntactically complex prompts, programming problems, mathematical operations, factual errors, dates and times, open-ended question answering, hallucination, enumerating lists of specific length, stylistic mimicry, having a sense of humor, etc. Moreover, we find that `dolly-v2-3b` does not have some capabilities, such as well-formatted letter writing, present in the original model. ### Dataset Limitations Like all language models, `dolly-v2-3b` reflects the content and limitations of its training corpuses. - **The Pile**: The pre-training corpus of the underlying Pythia model (the same corpus used to train GPT-J) contains content mostly collected from the public internet, and like most web-scale datasets, it contains content many users would find objectionable. As such, the model is likely to reflect these shortcomings, potentially overtly in the case it is explicitly asked to produce objectionable content, and sometimes subtly, as in the case of biased or harmful implicit associations. - **`databricks-dolly-15k`**: The training data on which `dolly-v2-3b` is instruction tuned represents natural language instructions generated by Databricks employees during a period spanning March and April 2023 and includes passages from Wikipedia as reference passages for instruction categories like closed QA and summarization. To our knowledge it does not contain obscenity, intellectual property or personally identifying information about non-public figures, but it may contain typos and factual errors. The dataset may also reflect biases found in Wikipedia. Finally, the dataset likely reflects the interests and semantic choices of Databricks employees, a demographic which is not representative of the global population at large. Databricks is committed to ongoing research and development efforts to develop helpful, honest and harmless AI technologies that maximize the potential of all individuals and organizations. ### Benchmark Metrics Below you'll find various models' benchmark performance on the [EleutherAI LLM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness); model results are sorted by geometric mean to produce an intelligible ordering. As outlined above, these results demonstrate that `dolly-v2-3b` is not state of the art. It underperforms `dolly-v1-6b` in the evaluation benchmarks, which is not surprising considering it has half the number of parameters.
| model | openbookqa | arc_easy | winogrande | hellaswag | arc_challenge | piqa | boolq | gmean | | --------------------------------- | ------------ | ---------- | ------------ | ----------- | --------------- | -------- | -------- | ---------| | EleutherAI/pythia-2.8b | 0.348 | 0.585859 | 0.589582 | 0.591217 | 0.323379 | 0.73395 | 0.638226 | 0.523431 | | EleutherAI/pythia-6.9b | 0.368 | 0.604798 | 0.608524 | 0.631548 | 0.343857 | 0.761153 | 0.6263 | 0.543567 | | databricks/dolly-v2-3b | 0.384 | 0.611532 | 0.589582 | 0.650767 | 0.370307 | 0.742655 | 0.575535 | 0.544886 | | EleutherAI/pythia-12b | 0.364 | 0.627104 | 0.636148 | 0.668094 | 0.346416 | 0.760065 | 0.673394 | 0.559676 | | EleutherAI/gpt-j-6B | 0.382 | 0.621633 | 0.651144 | 0.662617 | 0.363481 | 0.761153 | 0.655963 | 0.565936 | | databricks/dolly-v2-12b | 0.408 | 0.63931 | 0.616417 | 0.707927 | 0.388225 | 0.757889 | 0.568196 | 0.56781 | | databricks/dolly-v2-7b | 0.392 | 0.633838 | 0.607735 | 0.686517 | 0.406997 | 0.750816 | 0.644037 | 0.573487 | | databricks/dolly-v1-6b | 0.41 | 0.62963 | 0.643252 | 0.676758 | 0.384812 | 0.773667 | 0.687768 | 0.583431 | | EleutherAI/gpt-neox-20b | 0.402 | 0.683923 | 0.656669 | 0.7142 | 0.408703 | 0.784004 | 0.695413 | 0.602236 | # Citation ``` @online{DatabricksBlog2023DollyV2, author = {Mike Conover and Matt Hayes and Ankit Mathur and Jianwei Xie and Jun Wan and Sam Shah and Ali Ghodsi and Patrick Wendell and Matei Zaharia and Reynold Xin}, title = {Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM}, year = {2023}, url = {https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm}, urldate = {2023-06-30} } ``` # Happy Hacking!
katuni4ka/tiny-random-falcon-40b
katuni4ka
"2024-07-02T10:36:30Z"
42,144
0
transformers
[ "transformers", "safetensors", "falcon", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-24T16:21:21Z"
Entry not found
allegro/herbert-klej-cased-v1
allegro
"2021-05-28T16:18:22Z"
42,060
6
transformers
[ "transformers", "pytorch", "jax", "roberta", "pl", "arxiv:2005.00630", "endpoints_compatible", "region:us" ]
null
"2022-03-02T23:29:05Z"
---
language: pl
---

# HerBERT

**[HerBERT](https://en.wikipedia.org/wiki/Zbigniew_Herbert)** is a BERT-based language model trained on Polish corpora using only the MLM objective with dynamic masking of whole words. For more details, please refer to: [KLEJ: Comprehensive Benchmark for Polish Language Understanding](https://arxiv.org/abs/2005.00630).

## Dataset

The **HerBERT** training dataset is a combination of several publicly available corpora for the Polish language:

| Corpus | Tokens | Texts |
| :------ | ------: | ------: |
| [OSCAR](https://traces1.inria.fr/oscar/)| 6710M | 145M |
| [Open Subtitles](http://opus.nlpl.eu/OpenSubtitles-v2018.php) | 1084M | 1.1M |
| [Wikipedia](https://dumps.wikimedia.org/) | 260M | 1.5M |
| [Wolne Lektury](https://wolnelektury.pl/) | 41M | 5.5k |
| [Allegro Articles](https://allegro.pl/artykuly) | 18M | 33k |

## Tokenizer

The training dataset was tokenized into subwords using the [HerBERT Tokenizer](https://huggingface.co/allegro/herbert-klej-cased-tokenizer-v1), a character-level byte-pair encoding with a vocabulary size of 50k tokens. The tokenizer itself was trained on [Wolne Lektury](https://wolnelektury.pl/) and a publicly available subset of the [National Corpus of Polish](http://nkjp.pl/index.php?page=14&lang=0) with the [fastBPE](https://github.com/glample/fastBPE) library.

The tokenizer uses the `XLMTokenizer` implementation; for that reason, it should be loaded as `allegro/herbert-klej-cased-tokenizer-v1`.

## HerBERT models summary

| Model | WWM | Cased | Tokenizer | Vocab Size | Batch Size | Train Steps |
| :------ | ------: | ------: | ------: | ------: | ------: | ------: |
| herbert-klej-cased-v1 | YES | YES | BPE | 50K | 570 | 180k |

## Model evaluation

HerBERT was evaluated on the [KLEJ](https://klejbenchmark.com/) benchmark, a publicly available set of nine evaluation tasks for Polish language understanding. It had the best average performance and obtained the best results for three of them.

| Model | Average | NKJP-NER | CDSC-E | CDSC-R | CBD | PolEmo2.0-IN | PolEmo2.0-OUT | DYK | PSC | AR |
| :------ | ------: | ------: | ------: | ------: | ------: | ------: | ------: | ------: | ------: | ------: |
| herbert-klej-cased-v1 | **80.5** | 92.7 | 92.5 | 91.9 | **50.3** | **89.2** | **76.3** | 52.1 | 95.3 | 84.5 |

The full leaderboard is available [online](https://klejbenchmark.com/leaderboard).

## HerBERT usage

Model training and experiments were conducted with [transformers](https://github.com/huggingface/transformers) in version 2.0.
Example code:

```python
from transformers import XLMTokenizer, RobertaModel

tokenizer = XLMTokenizer.from_pretrained("allegro/herbert-klej-cased-tokenizer-v1")
model = RobertaModel.from_pretrained("allegro/herbert-klej-cased-v1")

encoded_input = tokenizer.encode("Kto ma lepszą sztukę, ma lepszy rząd – to jasne.", return_tensors='pt')
outputs = model(encoded_input)
```

HerBERT can also be loaded using `AutoTokenizer` and `AutoModel`:

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("allegro/herbert-klej-cased-tokenizer-v1")
model = AutoModel.from_pretrained("allegro/herbert-klej-cased-v1")
```

## License

CC BY-SA 4.0

## Citation

If you use this model, please cite the following paper:

```
@inproceedings{rybak-etal-2020-klej,
    title = "{KLEJ}: Comprehensive Benchmark for {P}olish Language Understanding",
    author = "Rybak, Piotr and Mroczkowski, Robert and Tracz, Janusz and Gawlik, Ireneusz",
    booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
    month = jul,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.acl-main.111",
    doi = "10.18653/v1/2020.acl-main.111",
    pages = "1191--1201",
}
```

## Authors

The model was trained by the **Allegro Machine Learning Research** team.

You can contact us at: <a href="mailto:klejbenchmark@allegro.pl">klejbenchmark@allegro.pl</a>
deepseek-ai/deepseek-coder-33b-instruct
deepseek-ai
"2024-03-07T08:25:20Z"
42,026
423
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "conversational", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-11-01T05:46:34Z"
---
license: other
license_name: deepseek
license_link: LICENSE
---

<p align="center">
<img width="1000px" alt="DeepSeek Coder" src="https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/pictures/logo.png?raw=true">
</p>
<p align="center"><a href="https://www.deepseek.com/">[🏠Homepage]</a> | <a href="https://coder.deepseek.com/">[🤖 Chat with DeepSeek Coder]</a> | <a href="https://discord.gg/Tc7c45Zzu5">[Discord]</a> | <a href="https://github.com/guoday/assert/blob/main/QR.png?raw=true">[Wechat(微信)]</a> </p>
<hr>

### 1. Introduction of Deepseek Coder

Deepseek Coder is composed of a series of code language models, each trained from scratch on 2T tokens, with a composition of 87% code and 13% natural language in both English and Chinese. We provide various sizes of the code model, ranging from 1B to 33B versions. Each model is pre-trained on a project-level code corpus with a window size of 16K and an extra fill-in-the-blank task, to support project-level code completion and infilling. For coding capabilities, Deepseek Coder achieves state-of-the-art performance among open-source code models on multiple programming languages and various benchmarks.

- **Massive Training Data**: Trained from scratch on 2T tokens, including 87% code and 13% linguistic data in both English and Chinese languages.
- **Highly Flexible & Scalable**: Offered in model sizes of 1.3B, 5.7B, 6.7B, and 33B, enabling users to choose the setup most suitable for their requirements.
- **Superior Model Performance**: State-of-the-art performance among publicly available code models on HumanEval, MultiPL-E, MBPP, DS-1000, and APPS benchmarks.
- **Advanced Code Completion Capabilities**: A window size of 16K and a fill-in-the-blank task, supporting project-level code completion and infilling tasks.

### 2. Model Summary

deepseek-coder-33b-instruct is a 33B parameter model initialized from deepseek-coder-33b-base and fine-tuned on 2B tokens of instruction data.

- **Home Page:** [DeepSeek](https://deepseek.com/)
- **Repository:** [deepseek-ai/deepseek-coder](https://github.com/deepseek-ai/deepseek-coder)
- **Chat With DeepSeek Coder:** [DeepSeek-Coder](https://coder.deepseek.com/)

### 3. How to Use

Here are some examples of how to use our model.

#### Chat Model Inference

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-coder-33b-instruct", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("deepseek-ai/deepseek-coder-33b-instruct", trust_remote_code=True, torch_dtype=torch.bfloat16).cuda()

messages = [
    {'role': 'user', 'content': "write a quick sort algorithm in python."}
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
# tokenizer.eos_token_id is the id of <|EOT|> token
outputs = model.generate(inputs, max_new_tokens=512, do_sample=False, top_k=50, top_p=0.95, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True))
```

### 4. License

This code repository is licensed under the MIT License. The use of DeepSeek Coder models is subject to the Model License. DeepSeek Coder supports commercial use. See the [LICENSE-MODEL](https://github.com/deepseek-ai/deepseek-coder/blob/main/LICENSE-MODEL) for more details.

### 5. Contact

If you have any questions, please raise an issue or contact us at [agi_code@deepseek.com](mailto:agi_code@deepseek.com).
DeepMount00/Llama-3-8b-Ita
DeepMount00
"2024-06-14T18:29:52Z"
42,020
21
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "it", "en", "dataset:DeepMount00/llm_ita_ultra", "base_model:meta-llama/Meta-Llama-3-8B", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-05-01T12:43:40Z"
--- language: - it - en license: llama3 library_name: transformers base_model: meta-llama/Meta-Llama-3-8B datasets: - DeepMount00/llm_ita_ultra --- ## Model Architecture - **Base Model:** [Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) - **Specialization:** Italian Language ## Evaluation For a detailed comparison of model performance, check out the [Leaderboard for Italian Language Models](https://huggingface.co/spaces/FinancialSupport/open_ita_llm_leaderboard). Here's a breakdown of the performance metrics: | Metric | hellaswag_it acc_norm | arc_it acc_norm | m_mmlu_it 5-shot acc | Average | |:----------------------------|:----------------------|:----------------|:---------------------|:--------| | **Accuracy Normalized** | 0.6518 | 0.5441 | 0.5729 | 0.5896 | --- ## How to Use ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch device = torch.device("cuda" if torch.cuda.is_available() else "cpu") MODEL_NAME = "DeepMount00/Llama-3-8b-Ita" model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=torch.bfloat16).eval() model.to(device) tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME) def generate_answer(prompt): messages = [ {"role": "user", "content": prompt}, ] model_inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(device) generated_ids = model.generate(model_inputs, max_new_tokens=200, do_sample=True, temperature=0.001) decoded = tokenizer.batch_decode(generated_ids, skip_special_tokens=True) return decoded[0] prompt = "Come si apre un file json in python?" answer = generate_answer(prompt) print(answer) ``` --- ## Developer [Michele Montebovi]
rollerhafeezh-amikom/xlm-roberta-base-ner-silvanus
rollerhafeezh-amikom
"2024-04-12T07:23:14Z"
42,010
0
transformers
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "token-classification", "silvanus", "id", "en", "es", "it", "sk", "arxiv:1911.02116", "base_model:xlm-roberta-base", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2023-11-09T01:46:51Z"
--- license: mit base_model: xlm-roberta-base tags: - silvanus metrics: - precision - recall - f1 - accuracy model-index: - name: xlm-roberta-base-ner-silvanus results: - task: name: Token Classification type: token-classification dataset: name: id_nergrit_corpus type: id_nergrit_corpus config: ner split: validation args: ner metrics: - name: Precision type: precision value: 0.918918918918919 - name: Recall type: recall value: 0.9272727272727272 - name: F1 type: f1 value: 0.9230769230769231 - name: Accuracy type: accuracy value: 0.9858518778229216 language: - id - en - es - it - sk pipeline_tag: token-classification widget: - text: >- Kebakaran hutan dan lahan terus terjadi dan semakin meluas di Kota Palangkaraya, Kalimantan Tengah (Kalteng) pada hari Rabu, 15 Nopember 2023 20.00 WIB. Bahkan kobaran api mulai membakar pondok warga dan mendekati permukiman. BZK #RCTINews #SeputariNews #News #Karhutla #KebakaranHutan #HutanKalimantan #SILVANUS_Italian_Pilot_Testing example_title: Indonesia - text: >- Wildfire rages for a second day in Evia destroying a Natura 2000 protected pine forest. - 5:51 PM Aug 14, 2019 example_title: English - text: >- 3 nov 2023 21:57 - Incendio forestal obliga a la evacuación de hasta 850 personas cerca del pueblo de Montichelvo en Valencia. example_title: Spanish - text: >- Incendi boschivi nell'est del Paese: 2 morti e oltre 50 case distrutte nello stato del Queensland. example_title: Italian - text: >- Lesné požiare na Sicílii si vyžiadali dva ľudské životy a evakuáciu hotela http://dlvr.it/SwW3sC - 23. septembra 2023 20:57 example_title: Slovak --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-ner-silvanus This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the Indonesian NER dataset. It achieves the following results on the evaluation set: - Loss: 0.0567 - Precision: 0.9189 - Recall: 0.9273 - F1: 0.9231 - Accuracy: 0.9859 ## Model description The XLM-RoBERTa model was proposed in [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov. It is based on Facebook's RoBERTa model released in 2019. It is a large multi-lingual language model, trained on 2.5TB of filtered CommonCrawl data. - **Developed by:** See [associated paper](https://arxiv.org/abs/1911.02116) - **Model type:** Multi-lingual model - **Language(s) (NLP) or Countries (images):** XLM-RoBERTa is a multilingual model trained on 100 different languages; see [GitHub Repo](https://github.com/facebookresearch/fairseq/tree/main/examples/xlmr) for full list; model is fine-tuned on a dataset in English - **License:** More information needed - **Related Models:** [RoBERTa](https://huggingface.co/roberta-base), [XLM](https://huggingface.co/docs/transformers/model_doc/xlm) - **Parent Model:** [XLM-RoBERTa](https://huggingface.co/xlm-roberta-base) - **Resources for more information:** [GitHub Repo](https://github.com/facebookresearch/fairseq/tree/main/examples/xlmr) ## Intended uses & limitations This model can be used to extract multilingual information such as location, date and time on social media (Twitter, etc.). 
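For example, entities can be extracted with the standard Transformers token-classification pipeline. This is only a minimal sketch; the aggregation setting is an illustrative choice and the example sentence is taken from the English widget example above:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a token-classification (NER) pipeline.
ner = pipeline(
    "token-classification",
    model="rollerhafeezh-amikom/xlm-roberta-base-ner-silvanus",
    aggregation_strategy="simple",  # merge B-/I- word pieces into whole entities
)

text = (
    "Wildfire rages for a second day in Evia destroying a Natura 2000 "
    "protected pine forest. - 5:51 PM Aug 14, 2019"
)
for entity in ner(text):
    print(entity["entity_group"], "->", entity["word"], round(entity["score"], 3))
```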
The model is limited by its Indonesian-only training data; it was tested in four other languages (English, Spanish, Italian and Slovak) using zero-shot cross-lingual transfer to extract multilingual information.

## Training and evaluation data

This model was fine-tuned on Indonesian NER datasets.

Abbreviation|Description
-|-
O|Outside of a named entity
B-LOC |Beginning of a location right after another location
I-LOC |Location
B-DAT |Beginning of a date right after another date
I-DAT |Date
B-TIM |Beginning of a time right after another time
I-TIM |Time

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1394        | 1.0   | 827  | 0.0559          | 0.8808    | 0.9257 | 0.9027 | 0.9842   |
| 0.0468        | 2.0   | 1654 | 0.0575          | 0.9107    | 0.9190 | 0.9148 | 0.9849   |
| 0.0279        | 3.0   | 2481 | 0.0567          | 0.9189    | 0.9273 | 0.9231 | 0.9859   |

### Framework versions

- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
facebook/convnext-base-224-22k-1k
facebook
"2023-09-04T21:09:35Z"
41,973
3
transformers
[ "transformers", "pytorch", "tf", "safetensors", "convnext", "image-classification", "vision", "dataset:imagenet-21k", "dataset:imagenet-1k", "arxiv:2201.03545", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2022-03-02T23:29:05Z"
---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-21k
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
  example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
  example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
  example_title: Palace
---

# ConvNeXT (base-sized model)

ConvNeXT model pre-trained on ImageNet-22k and fine-tuned on ImageNet-1k at resolution 224x224. It was introduced in the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Liu et al. and first released in [this repository](https://github.com/facebookresearch/ConvNeXt).

Disclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team.

## Model description

ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and "modernized" its design by taking the Swin Transformer as inspiration.

![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/convnext_architecture.png)

## Intended uses & limitations

You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=convnext) to look for fine-tuned versions on a task that interests you.

### How to use

Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:

```python
from transformers import ConvNextFeatureExtractor, ConvNextForImageClassification
import torch
from datasets import load_dataset

dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]

feature_extractor = ConvNextFeatureExtractor.from_pretrained("facebook/convnext-base-224-22k-1k")
model = ConvNextForImageClassification.from_pretrained("facebook/convnext-base-224-22k-1k")

inputs = feature_extractor(image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# model predicts one of the 1000 ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```

For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/convnext).

### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-2201-03545,
  author    = {Zhuang Liu and Hanzi Mao and Chao{-}Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie},
  title     = {A ConvNet for the 2020s},
  journal   = {CoRR},
  volume    = {abs/2201.03545},
  year      = {2022},
  url       = {https://arxiv.org/abs/2201.03545},
  eprinttype = {arXiv},
  eprint    = {2201.03545},
  timestamp = {Thu, 20 Jan 2022 14:21:35 +0100},
  biburl    = {https://dblp.org/rec/journals/corr/abs-2201-03545.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
RichardErkhov/Mihaiii_-_Llama-3-pruned-45B-Drobeta-Turnu-Severin-gguf
RichardErkhov
"2024-06-26T00:12:55Z"
41,959
0
null
[ "gguf", "region:us" ]
null
"2024-06-25T13:34:49Z"
Entry not found
RunDiffusion/Juggernaut-X-v10
RunDiffusion
"2024-04-20T05:50:38Z"
41,927
115
diffusers
[ "diffusers", "art", "people", "diffusion", "Cinematic", "Photography", "Landscape", "Interior", "Food", "Car", "Wildlife", "Architecture", "text-to-image", "en", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
"2024-04-19T21:30:46Z"
---
language:
- en
license: creativeml-openrail-m
library_name: diffusers
tags:
- art
- people
- diffusion
- Cinematic
- Photography
- Landscape
- Interior
- Food
- Car
- Wildlife
- Architecture
thumbnail: >-
  https://storage.googleapis.com/run-diffusion-public-assets/juggernaut-x/juggernaut-collage-256.webp
base_model: stabilityai/stable-diffusion-xl-base-1.0
pipeline_tag: text-to-image
---

# Juggernaut X v10 + RunDiffusion Official (Community Version)

![juggernaut X photo previews](https://storage.googleapis.com/run-diffusion-public-assets/juggernaut-x/juggernaut-collage-832.webp)

![RunDiffusion Logo](https://imagedelivery.net/siANnpeNAc_S2q1M3-eDrA/ca2b388d-a835-490c-dec0-e764bee8d000/micro)

This model is not permitted to be used behind API services. Please contact [juggernaut@rundiffusion.com](mailto:juggernaut@rundiffusion.com) for business inquiries, commercial licensing, custom models, and consultation.

Juggernaut X (SAFE) is available exclusively on [RunDiffusion.com](http://rundiffusion.com/?utm_source=huggingface&utm_medium=referral&utm_campaign=juggernautv10)

🎉 Juggernaut X World Wide Release! 🌍

After almost two months, we are thrilled to announce the next version of Juggernaut is ready to launch! Introducing Juggernaut 10, aka Juggernaut X. 🚀

If you have been following us on Twitter (X), you will have seen the test images. If you aren't following us, do it now! https://x.com/RunDiffusion & Kandoo's new account needs some new followers. Help him out. https://x.com/Juggernaut_AI

- TWO Versions of Juggernaut X! SFW 🌟 and NSFW 🔥
- Fully trained from the ground up using the GPT4 Vision Captioning tool by LEOSAM 🛠️
- Much improved prompt adherence ✅
- Expanded and cleaner dataset with higher quality images 🖼️
- Improved classifications of shots (Full Body, Midshots, Portraits, etc) 📸
- Enhanced text generation capability 📝
- Two different prompting techniques, Natural and Tagging style 🏷️
- Enhanced by RunDiffusion Photo for refinement of details 🧐

Read more about this version here https://rundiffusion.com/juggernaut-xl

Dual Version Release 🔄

A Safe for Work (SFW) and a Not Safe for Work (NSFW) version of Juggernaut X will be available. This dual release strategy is designed to cater to diverse preferences and ensure inclusivity, offering the perfect solution for every user.

Our newest Safe for Work edition is available right now exclusively through Fooocus on RunDiffusion.com. Launch Fooocus on RunDiffusion, find Juggernaut X_ RunDiffusion_SAFE.safetensors, and start generating! It allows users to generate high-quality, suitable images while adhering to safe content guidelines. This version is particularly user-friendly, requiring only simple, straightforward prompts. It's ideal for the workplace, students, educators, and families. SAFE stands for Suitable Ai For Everyone. 🌈

Conversely, the Not Safe for Work version offers unrestricted creative freedom across all categories and spectrums. This model is perfect for those seeking less constrained artistic expression and is available for free on Civitai.com, though a license is required for commercial use. 🎨

Both models of Juggernaut X (v10) represent our commitment to fostering a creative community that respects diverse needs and preferences. 🤝

Prompting Guide 📘

Because everything has been trained from the ground up, prompting is a bit different.
(Simpler, don't worry.) @Kandoo has created a guide to help you seamlessly integrate this powerful model into your workflow, enabling you to leverage its advanced capabilities without feeling overwhelmed. Download it here: https://rundiffusion.com/juggernaut-xl#nav

As always, we love our community and feel so lucky to be in this position to bring these awesome tools and models to you amazing diffusers. Thanks for supporting us since our first day back in 2022. Going on TWO YEARS since we first started using generative AI. Time flies when you're having fun. Wow!

Don't forget to follow us on Twitter where we have way more updates on big things we're working on. The future is bright. https://x.com/RunDiffusion

-RunDiffusion Team

![https://rundiffusion.com?utm_source=huggingface&utm_medium=referral&utm_campaign=juggernautv10](https://i.imgur.com/fKPEqSu.jpg)
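For local experimentation, a rough loading sketch with the diffusers SDXL pipeline might look like the following. This is an assumption based on this repository's `StableDiffusionXLPipeline` tag, not official RunDiffusion guidance, and the prompt and sampler settings are placeholders:

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the community checkpoint as a standard SDXL pipeline (assumed diffusers-format repo).
pipe = StableDiffusionXLPipeline.from_pretrained(
    "RunDiffusion/Juggernaut-X-v10",
    torch_dtype=torch.float16,
).to("cuda")

# Natural-language style prompt, one of the two prompting styles mentioned above.
image = pipe(
    prompt="cinematic photo of a lighthouse at dawn, dramatic clouds, 35mm film look",
    num_inference_steps=30,
    guidance_scale=6.0,
).images[0]
image.save("juggernaut_x_sample.png")
```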
andupets/real-estate-image-classification
andupets
"2022-09-28T08:48:12Z"
41,926
14
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2022-09-28T08:47:59Z"
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: real-estate-image-classification results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.8958333134651184 --- # real-estate-image-classification Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### bathroom ![bathroom](images/bathroom.jpg) #### bedroom ![bedroom](images/bedroom.jpg) #### dining room ![dining room](images/dining_room.jpg) #### house facade ![house facade](images/house_facade.jpg) #### kitchen ![kitchen](images/kitchen.jpg) #### living room ![living room](images/living_room.jpg) #### sao paulo apartment facade ![sao paulo apartment facade](images/sao_paulo_apartment_facade.jpg)
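To try the classifier locally, a minimal sketch with the Transformers image-classification pipeline is shown below; the image path is a placeholder, not a file from this repository:

```python
from transformers import pipeline

# Load the fine-tuned ViT checkpoint as an image-classification pipeline.
classifier = pipeline(
    "image-classification",
    model="andupets/real-estate-image-classification",
)

# Any local file path or image URL works here; this one is just a placeholder.
for prediction in classifier("example_listing_photo.jpg"):
    print(f"{prediction['label']}: {prediction['score']:.3f}")
```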
BM-K/KoDiffCSE-RoBERTa
BM-K
"2023-08-30T05:36:55Z"
41,917
3
transformers
[ "transformers", "pytorch", "safetensors", "roberta", "feature-extraction", "arxiv:2204.10298", "arxiv:2004.03289", "arxiv:2105.09680", "endpoints_compatible", "text-embeddings-inference", "region:us" ]
feature-extraction
"2023-02-28T04:17:36Z"
# KoDiffCSE Difference-based Contrastive Learning for Korean Sentence Embeddings <br> - [DiffCSE-[NAACL 2022]](https://arxiv.org/abs/2204.10298) <br> - [[Github]](https://github.com/voidism/DiffCSE) Official implementation of DiffCSE <br> <img src=https://user-images.githubusercontent.com/55969260/201829550-9674a3ac-cb9b-4e17-b777-7d96fdf5c633.png> ## Quick tour ```python import torch from transformers import AutoModel, AutoTokenizer def cal_score(a, b): if len(a.shape) == 1: a = a.unsqueeze(0) if len(b.shape) == 1: b = b.unsqueeze(0) a_norm = a / a.norm(dim=1)[:, None] b_norm = b / b.norm(dim=1)[:, None] return torch.mm(a_norm, b_norm.transpose(0, 1)) * 100 model = AutoModel.from_pretrained('BM-K/KoDiffCSE-RoBERTa') tokenizer = AutoTokenizer.from_pretrained('BM-K/KoDiffCSE-RoBERTa') sentences = ['치타가 들판을 가로 질러 먹이를 쫓는다.', '치타 한 마리가 먹이 뒤에서 달리고 있다.', '원숭이 한 마리가 드럼을 연주한다.'] inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt") embeddings, _ = model(**inputs, return_dict=False) score01 = cal_score(embeddings[0][0], embeddings[1][0]) # 84.56 # '치타가 들판을 가로 질러 먹이를 쫓는다.' @ '치타 한 마리가 먹이 뒤에서 달리고 있다.' score02 = cal_score(embeddings[0][0], embeddings[2][0]) # 48.06 # '치타가 들판을 가로 질러 먹이를 쫓는다.' @ '원숭이 한 마리가 드럼을 연주한다.' ``` ## Setups [![Python](https://img.shields.io/badge/python-3.8.5-blue?logo=python&logoColor=FED643)](https://www.python.org/downloads/release/python-385/) [![Pytorch](https://img.shields.io/badge/pytorch-1.7.1-red?logo=pytorch)](https://pytorch.org/get-started/previous-versions/) ## Encoder Models Baseline encoders used for korean sentence embedding - [KLUE-PLMs](https://github.com/KLUE-benchmark/KLUE/blob/main/README.md) | Model | Embedding size | Hidden size | # Layers | # Heads | |----------------------|----------------|-------------|----------|---------| | KLUE-BERT-base | 768 | 768 | 12 | 12 | | KLUE-RoBERTa-base | 768 | 768 | 12 | 12 | > **Warning** <br> > Large pre-trained models need a lot of GPU memory to train ## Datasets The data must exist in the "--path_to_data" folder - [wiki-corpus](https://github.com/jeongukjae/korean-wikipedia-corpus) (Unsupervised Training) - [KorSTS](https://github.com/kakaobrain/KorNLUDatasets) (Validation & Testing) ## Training - unsupervised ``` python main.py \ --model klue/roberta-base \ --generator_name klue/roberta-small \ --multi_gpu True \ --train True \ --test False \ --max_len 64 \ --batch_size 256 \ --epochs 1 \ --eval_steps 125 \ --lr 0.00005 \ --masking_ratio 0.15 \ --lambda_weight 0.005 \ --warmup_ratio 0.05 \ --temperature 0.05 \ --path_to_data Dataset/ \ --train_data wiki_corpus_examples.txt \ --valid_data valid_sts.tsv \ --ckpt best_checkpoint.pt ``` ``` bash run_diff.sh ``` > **Note** <br> > Using roberta as an encoder is beneficial for training because the KoBERT model cannot build a small-sized generator. 
## Evaluation ``` python main.py \ --model klue/roberta-base \ --generator klue/roberta-small \ --train False \ --test True \ --max_len 64 \ --batch_size 256 \ --path_to_data Dataset/ \ --test_data test_sts.tsv \ --path_to_saved_model output/best_checkpoint.pt ``` ## Performance - unsupervised | Model | Average | Cosine Pearson | Cosine Spearman | Euclidean Pearson | Euclidean Spearman | Manhattan Pearson | Manhattan Spearman | Dot Pearson | Dot Spearman | |------------------------|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:| | KoSRoBERTa-base<sup>†</sup> | N/A | N/A | 48.96 | N/A | N/A | N/A | N/A | N/A | N/A | | KoSRoBERTa-large<sup>†</sup> | N/A | N/A | 51.35 | N/A | N/A | N/A | N/A | N/A | N/A | | | | | | | | | | | | | KoSimCSE-BERT | 74.08 | 74.92 | 73.98 | 74.15 | 74.22 | 74.07 | 74.07 | 74.15 | 73.14 | | KoSimCSE-RoBERTa | 75.27 | 75.93 | 75.00 | 75.28 | 75.01 | 75.17 | 74.83 | 75.95 | 75.01 | | | | | | | | | | | | | KoDiffCSE-RoBERTa | 77.17 | 77.73 | 76.96 | 77.21 | 76.89 | 77.11 | 76.81 | 77.74 | 76.97 | - [Korean-SRoBERTa<sup>†</sup>](https://arxiv.org/abs/2004.03289) ## License This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/">Creative Commons Attribution-ShareAlike 4.0 International License</a>. <a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-sa/4.0/88x31.png" /></a><br /> ## References ```bibtex @inproceedings{chuang2022diffcse, title={{DiffCSE}: Difference-based Contrastive Learning for Sentence Embeddings}, author={Chuang, Yung-Sung and Dangovski, Rumen and Luo, Hongyin and Zhang, Yang and Chang, Shiyu and Soljacic, Marin and Li, Shang-Wen and Yih, Wen-tau and Kim, Yoon and Glass, James}, booktitle={Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL)}, year={2022} } @misc{park2021klue, title={KLUE: Korean Language Understanding Evaluation}, author={Sungjoon Park and Jihyung Moon and Sungdong Kim and Won Ik Cho and Jiyoon Han and Jangwon Park and Chisung Song and Junseong Kim and Yongsook Song and Taehwan Oh and Joohong Lee and Juhyun Oh and Sungwon Lyu and Younghoon Jeong and Inkwon Lee and Sangwoo Seo and Dongjun Lee and Hyunwoo Kim and Myeonghwa Lee and Seongbo Jang and Seungwon Do and Sunkyoung Kim and Kyungtae Lim and Jongwon Lee and Kyumin Park and Jamin Shin and Seonghyun Kim and Lucy Park and Alice Oh and Jungwoo Ha and Kyunghyun Cho}, year={2021}, eprint={2105.09680}, archivePrefix={arXiv}, primaryClass={cs.CL} } @article{ham2020kornli, title={KorNLI and KorSTS: New Benchmark Datasets for Korean Natural Language Understanding}, author={Ham, Jiyeon and Choe, Yo Joong and Park, Kyubyong and Choi, Ilji and Soh, Hyungjoon}, journal={arXiv preprint arXiv:2004.03289}, year={2020} } ```
Jiqing/tiny_random_llama2
Jiqing
"2024-03-06T02:32:17Z"
41,847
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-03-05T01:38:46Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> Tiny random llama2 for CI test
DunnBC22/vit-base-patch16-224-in21k_Human_Activity_Recognition
DunnBC22
"2023-08-25T03:18:34Z"
41,808
8
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "en", "dataset:imagefolder", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2023-01-24T01:17:02Z"
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy - f1 - recall - precision model-index: - name: vit-base-patch16-224-in21k_Human_Activity_Recognition results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.8380952380952381 language: - en --- # vit-base-patch16-224-in21k_Human_Activity_Recognition This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k). It achieves the following results on the evaluation set: - Loss: 0.7403 - Accuracy: 0.8381 - F1 - Weighted: 0.8388 - Micro: 0.8381 - Macro: 0.8394 - Recall - Weighted: 0.8381 - Micro: 0.8381 - Macro: 0.8390 - Precision - Weighted: 0.8421 - Micro: 0.8381 - Macro: 0.8424 ## Model description This is a multiclass image classification model of humans doing different activities. For more information on how it was created, check out the following link: https://github.com/DunnBC22/Vision_Audio_and_Multimodal_Projects/blob/main/Computer%20Vision/Image%20Classification/Multiclass%20Classification/Human%20Activity%20Recognition/ViT-Human%20Action_Recogniton.ipynb ## Intended uses & limitations This model is intended to demonstrate my ability to solve a complex problem using technology. You are welcome to test and experiment with this model, but it is at your own risk/peril. ## Training and evaluation data Dataset Source: https://www.kaggle.com/datasets/meetnagadia/human-action-recognition-har-dataset _Sample Images From Dataset:_ ![Sample Images](https://github.com/DunnBC22/Vision_Audio_and_Multimodal_Projects/raw/main/Computer%20Vision/Image%20Classification/Multiclass%20Classification/Human%20Activity%20Recognition/Images/Sample%20Images.png) ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Weighted F1 | Micro F1 | Macro F1 | Weighted Recall | Micro Recall | Macro Recall | Weighted Precision | Micro Precision | Macro Precision | |:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------:|:--------:|:--------:|:---------------:|:------------:|:------------:|:------------------:|:---------------:|:---------------:| | 1.0814 | 1.0 | 630 | 0.7368 | 0.7794 | 0.7795 | 0.7794 | 0.7798 | 0.7794 | 0.7794 | 0.7797 | 0.7896 | 0.7794 | 0.7896 | | 0.5149 | 2.0 | 1260 | 0.6439 | 0.8060 | 0.8049 | 0.8060 | 0.8036 | 0.8060 | 0.8060 | 0.8051 | 0.8136 | 0.8060 | 0.8130 | | 0.3023 | 3.0 | 1890 | 0.7026 | 0.8254 | 0.8272 | 0.8254 | 0.8278 | 0.8254 | 0.8254 | 0.8256 | 0.8335 | 0.8254 | 0.8345 | | 0.0507 | 4.0 | 2520 | 0.7414 | 0.8317 | 0.8342 | 0.8317 | 0.8348 | 0.8317 | 0.8317 | 0.8321 | 0.8427 | 0.8317 | 0.8438 | | 0.0128 | 5.0 | 3150 | 0.7403 | 0.8381 | 0.8388 | 0.8381 | 0.8394 | 0.8381 | 0.8381 | 0.8390 | 0.8421 | 0.8381 | 0.8424 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.12.1 - Datasets 2.8.0 - Tokenizers 0.12.1
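As a minimal usage sketch (the image path below is a placeholder, not part of the original card), the model can be run with the Transformers image-classification pipeline:

```python
from transformers import pipeline

# Load the fine-tuned ViT activity classifier.
classifier = pipeline(
    "image-classification",
    model="DunnBC22/vit-base-patch16-224-in21k_Human_Activity_Recognition",
)

# Replace with any image of a person performing an activity; top_k limits the output.
predictions = classifier("person_running.jpg", top_k=3)
print(predictions)
```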
RickyIG/emotion_face_image_classification
RickyIG
"2023-09-18T15:43:42Z"
41,734
1
transformers
[ "transformers", "pytorch", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-base-patch16-224-in21k", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2023-09-17T14:45:08Z"
--- license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: emotion_face_image_classification results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.55 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # emotion_face_image_classification This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.2110 - Accuracy: 0.55 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.0717 | 1.0 | 10 | 2.0593 | 0.2062 | | 2.005 | 2.0 | 20 | 1.9999 | 0.2625 | | 1.9169 | 3.0 | 30 | 1.8931 | 0.35 | | 1.7635 | 4.0 | 40 | 1.7616 | 0.4062 | | 1.6614 | 5.0 | 50 | 1.6452 | 0.4562 | | 1.6182 | 6.0 | 60 | 1.5661 | 0.4125 | | 1.5434 | 7.0 | 70 | 1.5183 | 0.4125 | | 1.46 | 8.0 | 80 | 1.4781 | 0.4875 | | 1.4564 | 9.0 | 90 | 1.3939 | 0.5125 | | 1.2966 | 10.0 | 100 | 1.3800 | 0.4562 | | 1.3732 | 11.0 | 110 | 1.3557 | 0.475 | | 1.2907 | 12.0 | 120 | 1.3473 | 0.5 | | 1.2875 | 13.0 | 130 | 1.3416 | 0.5312 | | 1.2743 | 14.0 | 140 | 1.2964 | 0.4875 | | 1.1249 | 15.0 | 150 | 1.2385 | 0.525 | | 1.0963 | 16.0 | 160 | 1.2775 | 0.5062 | | 1.0261 | 17.0 | 170 | 1.2751 | 0.5125 | | 0.9298 | 18.0 | 180 | 1.2318 | 0.525 | | 1.0668 | 19.0 | 190 | 1.2520 | 0.5437 | | 0.9933 | 20.0 | 200 | 1.2512 | 0.525 | | 1.1069 | 21.0 | 210 | 1.3016 | 0.5 | | 1.0279 | 22.0 | 220 | 1.3279 | 0.475 | | 0.967 | 23.0 | 230 | 1.2481 | 0.5 | | 0.8115 | 24.0 | 240 | 1.1791 | 0.5563 | | 0.7912 | 25.0 | 250 | 1.2336 | 0.55 | | 0.9294 | 26.0 | 260 | 1.1759 | 0.5813 | | 0.8936 | 27.0 | 270 | 1.1685 | 0.6 | | 0.7706 | 28.0 | 280 | 1.2403 | 0.5312 | | 0.7694 | 29.0 | 290 | 1.2479 | 0.5687 | | 0.7265 | 30.0 | 300 | 1.2000 | 0.5625 | | 0.6781 | 31.0 | 310 | 1.1856 | 0.55 | | 0.6676 | 32.0 | 320 | 1.2661 | 0.5437 | | 0.7254 | 33.0 | 330 | 1.1986 | 0.5437 | | 0.7396 | 34.0 | 340 | 1.1497 | 0.575 | | 0.5532 | 35.0 | 350 | 1.2796 | 0.5062 | | 0.622 | 36.0 | 360 | 1.2749 | 0.5125 | | 0.6958 | 37.0 | 370 | 1.2034 | 0.5687 | | 0.6102 | 38.0 | 380 | 1.2576 | 0.5188 | | 0.6161 | 39.0 | 390 | 1.2635 | 0.5062 | | 0.6927 | 40.0 | 400 | 1.1535 | 0.5437 | | 0.549 | 41.0 | 410 | 1.1405 | 0.6 | | 0.6668 | 42.0 | 420 | 1.2683 | 0.5312 | | 0.5144 | 43.0 | 430 | 1.2249 | 0.6 | | 0.6703 | 44.0 | 440 | 1.2297 | 0.5687 | | 0.6383 | 45.0 | 450 | 1.1507 | 0.6062 | | 0.5211 | 46.0 | 460 | 1.2914 | 0.4813 | | 0.4743 | 47.0 | 470 | 1.2782 | 0.5125 | | 0.553 | 48.0 | 480 | 1.2256 | 0.5375 | | 0.6407 | 49.0 | 490 | 1.2149 | 0.5687 | | 0.4195 | 50.0 | 500 | 1.2024 | 0.5625 | ### Framework versions - 
Transformers 4.33.2 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
mradermacher/CabraLlama3-70b-v2-GGUF
mradermacher
"2024-06-21T15:04:07Z"
41,729
1
transformers
[ "transformers", "gguf", "portuguese", "llama", "cabra", "llama-3", "pt", "dataset:botbot-ai/Cabra3k", "base_model:nicolasdec/CabraLlama3-70b-v2", "license:llama3", "endpoints_compatible", "region:us" ]
null
"2024-06-20T00:39:04Z"
--- base_model: nicolasdec/CabraLlama3-70b-v2 datasets: - botbot-ai/Cabra3k language: - pt library_name: transformers license: llama3 quantized_by: mradermacher tags: - portuguese - llama - cabra - llama-3 --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/nicolasdec/CabraLlama3-70b-v2 <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/CabraLlama3-70b-v2-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/CabraLlama3-70b-v2-GGUF/resolve/main/CabraLlama3-70b-v2.Q2_K.gguf) | Q2_K | 26.5 | | | [GGUF](https://huggingface.co/mradermacher/CabraLlama3-70b-v2-GGUF/resolve/main/CabraLlama3-70b-v2.IQ3_XS.gguf) | IQ3_XS | 29.4 | | | [GGUF](https://huggingface.co/mradermacher/CabraLlama3-70b-v2-GGUF/resolve/main/CabraLlama3-70b-v2.IQ3_S.gguf) | IQ3_S | 31.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/CabraLlama3-70b-v2-GGUF/resolve/main/CabraLlama3-70b-v2.Q3_K_S.gguf) | Q3_K_S | 31.0 | | | [GGUF](https://huggingface.co/mradermacher/CabraLlama3-70b-v2-GGUF/resolve/main/CabraLlama3-70b-v2.IQ3_M.gguf) | IQ3_M | 32.0 | | | [GGUF](https://huggingface.co/mradermacher/CabraLlama3-70b-v2-GGUF/resolve/main/CabraLlama3-70b-v2.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/CabraLlama3-70b-v2-GGUF/resolve/main/CabraLlama3-70b-v2.Q3_K_L.gguf) | Q3_K_L | 37.2 | | | [GGUF](https://huggingface.co/mradermacher/CabraLlama3-70b-v2-GGUF/resolve/main/CabraLlama3-70b-v2.IQ4_XS.gguf) | IQ4_XS | 38.4 | | | [GGUF](https://huggingface.co/mradermacher/CabraLlama3-70b-v2-GGUF/resolve/main/CabraLlama3-70b-v2.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/CabraLlama3-70b-v2-GGUF/resolve/main/CabraLlama3-70b-v2.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/CabraLlama3-70b-v2-GGUF/resolve/main/CabraLlama3-70b-v2.Q5_K_S.gguf) | Q5_K_S | 48.8 | | | [GGUF](https://huggingface.co/mradermacher/CabraLlama3-70b-v2-GGUF/resolve/main/CabraLlama3-70b-v2.Q5_K_M.gguf) | Q5_K_M | 50.0 | | | [PART 1](https://huggingface.co/mradermacher/CabraLlama3-70b-v2-GGUF/resolve/main/CabraLlama3-70b-v2.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/CabraLlama3-70b-v2-GGUF/resolve/main/CabraLlama3-70b-v2.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality | | [PART 1](https://huggingface.co/mradermacher/CabraLlama3-70b-v2-GGUF/resolve/main/CabraLlama3-70b-v2.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/CabraLlama3-70b-v2-GGUF/resolve/main/CabraLlama3-70b-v2.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See 
https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/gemma-2-27b-it-GGUF
mradermacher
"2024-07-02T01:30:34Z"
41,549
0
transformers
[ "transformers", "gguf", "en", "base_model:google/gemma-2-27b-it", "license:gemma", "endpoints_compatible", "region:us" ]
null
"2024-07-01T23:51:12Z"
--- base_model: google/gemma-2-27b-it extra_gated_button_content: Acknowledge license extra_gated_heading: Access Gemma on Hugging Face extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging Face and click below. Requests are processed immediately. language: - en library_name: transformers license: gemma quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/google/gemma-2-27b-it <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/gemma-2-27b-it-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/gemma-2-27b-it-GGUF/resolve/main/gemma-2-27b-it.Q2_K.gguf) | Q2_K | 10.5 | | | [GGUF](https://huggingface.co/mradermacher/gemma-2-27b-it-GGUF/resolve/main/gemma-2-27b-it.IQ3_XS.gguf) | IQ3_XS | 11.7 | | | [GGUF](https://huggingface.co/mradermacher/gemma-2-27b-it-GGUF/resolve/main/gemma-2-27b-it.IQ3_S.gguf) | IQ3_S | 12.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/gemma-2-27b-it-GGUF/resolve/main/gemma-2-27b-it.Q3_K_S.gguf) | Q3_K_S | 12.3 | | | [GGUF](https://huggingface.co/mradermacher/gemma-2-27b-it-GGUF/resolve/main/gemma-2-27b-it.IQ3_M.gguf) | IQ3_M | 12.6 | | | [GGUF](https://huggingface.co/mradermacher/gemma-2-27b-it-GGUF/resolve/main/gemma-2-27b-it.Q3_K_M.gguf) | Q3_K_M | 13.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/gemma-2-27b-it-GGUF/resolve/main/gemma-2-27b-it.Q3_K_L.gguf) | Q3_K_L | 14.6 | | | [GGUF](https://huggingface.co/mradermacher/gemma-2-27b-it-GGUF/resolve/main/gemma-2-27b-it.IQ4_XS.gguf) | IQ4_XS | 15.0 | | | [GGUF](https://huggingface.co/mradermacher/gemma-2-27b-it-GGUF/resolve/main/gemma-2-27b-it.Q4_K_S.gguf) | Q4_K_S | 15.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/gemma-2-27b-it-GGUF/resolve/main/gemma-2-27b-it.Q4_K_M.gguf) | Q4_K_M | 16.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/gemma-2-27b-it-GGUF/resolve/main/gemma-2-27b-it.Q5_K_S.gguf) | Q5_K_S | 19.0 | | | [GGUF](https://huggingface.co/mradermacher/gemma-2-27b-it-GGUF/resolve/main/gemma-2-27b-it.Q5_K_M.gguf) | Q5_K_M | 19.5 | | | [GGUF](https://huggingface.co/mradermacher/gemma-2-27b-it-GGUF/resolve/main/gemma-2-27b-it.Q6_K.gguf) | Q6_K | 22.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/gemma-2-27b-it-GGUF/resolve/main/gemma-2-27b-it.Q8_0.gguf) | Q8_0 | 29.0 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
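As a supplement to the Usage section above, one possible local setup is sketched below. This is an assumption, not part of the original card: the `llama-cpp-python` bindings are only one of several GGUF runtimes, and the file name must match the quant you actually downloaded:

```python
from llama_cpp import Llama

# Point model_path at a downloaded single-file quant, e.g. the Q4_K_M variant listed above.
llm = Llama(
    model_path="gemma-2-27b-it.Q4_K_M.gguf",
    n_ctx=4096,        # context window; adjust to available memory
    n_gpu_layers=-1,   # offload all layers to GPU if the build supports it
)

output = llm("Explain what GGUF quantization is in one sentence.", max_tokens=64)
print(output["choices"][0]["text"])
```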
Davlan/distilbert-base-multilingual-cased-ner-hrl
Davlan
"2023-08-14T19:34:34Z"
41,512
79
transformers
[ "transformers", "pytorch", "tf", "safetensors", "distilbert", "token-classification", "license:afl-3.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2022-03-02T23:29:04Z"
---
license: afl-3.0
language:
- ar
- de
- en
- es
- fr
- it
- lv
- nl
- pt
- zh
- multilingual
---

# distilbert-base-multilingual-cased-ner-hrl

## Model description

**distilbert-base-multilingual-cased-ner-hrl** is a **Named Entity Recognition** model for 10 high-resourced languages (Arabic, German, English, Spanish, French, Italian, Latvian, Dutch, Portuguese and Chinese) based on a fine-tuned DistilBERT base model. It has been trained to recognize three types of entities: location (LOC), organizations (ORG), and person (PER). Specifically, this model is a *distilbert-base-multilingual-cased* model that was fine-tuned on an aggregation of 10 high-resourced languages.

## Intended uses & limitations

#### How to use

You can use this model with the Transformers *pipeline* for NER.

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline

tokenizer = AutoTokenizer.from_pretrained("Davlan/distilbert-base-multilingual-cased-ner-hrl")
model = AutoModelForTokenClassification.from_pretrained("Davlan/distilbert-base-multilingual-cased-ner-hrl")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Nader Jokhadar had given Syria the lead with a well-struck header in the seventh minute."

ner_results = nlp(example)
print(ner_results)
```

#### Limitations and bias

This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.

## Training data

The training data for the 10 languages are from:

Language|Dataset
-|-
Arabic | [ANERcorp](https://camel.abudhabi.nyu.edu/anercorp/)
German | [conll 2003](https://www.clips.uantwerpen.be/conll2003/ner/)
English | [conll 2003](https://www.clips.uantwerpen.be/conll2003/ner/)
Spanish | [conll 2002](https://www.clips.uantwerpen.be/conll2002/ner/)
French | [Europeana Newspapers](https://github.com/EuropeanaNewspapers/ner-corpora/tree/master/enp_FR.bnf.bio)
Italian | [Italian I-CAB](https://ontotext.fbk.eu/icab.html)
Latvian | [Latvian NER](https://github.com/LUMII-AILab/FullStack/tree/master/NamedEntities)
Dutch | [conll 2002](https://www.clips.uantwerpen.be/conll2002/ner/)
Portuguese |[Paramopama + Second Harem](https://github.com/davidsbatista/NER-datasets/tree/master/Portuguese)
Chinese | [MSRA](https://huggingface.co/datasets/msra_ner)

The training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token will be classified as one of the following classes:

Abbreviation|Description
-|-
O|Outside of a named entity
B-PER |Beginning of a person’s name right after another person’s name
I-PER |Person’s name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location

## Training procedure

This model was trained on an NVIDIA V100 GPU with the recommended hyperparameters from the HuggingFace code.
vblagoje/bert-english-uncased-finetuned-pos
vblagoje
"2021-05-20T08:51:26Z"
41,406
36
transformers
[ "transformers", "pytorch", "jax", "bert", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2022-03-02T23:29:05Z"
Entry not found
Qwen/Qwen1.5-7B-Chat
Qwen
"2024-04-30T07:33:42Z"
41,284
155
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "chat", "conversational", "en", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-30T17:20:37Z"
---
license: other
license_name: tongyi-qianwen
license_link: >-
  https://huggingface.co/Qwen/Qwen1.5-7B-Chat/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- chat
---

# Qwen1.5-7B-Chat

## Introduction

Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. In comparison with the previously released Qwen, the improvements include:

* 8 model sizes, including 0.5B, 1.8B, 4B, 7B, 14B, 32B and 72B dense models, and an MoE model of 14B with 2.7B activated;
* Significant performance improvement in human preference for chat models;
* Multilingual support of both base and chat models;
* Stable support of 32K context length for models of all sizes;
* No need of `trust_remote_code`.

For more details, please refer to our [blog post](https://qwenlm.github.io/blog/qwen1.5/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5).
<br>

## Model Details

Qwen1.5 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, mixture of sliding window attention and full attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and codes. For the beta version, we temporarily did not include GQA (except for 32B) or the mixture of SWA and full attention.

## Training details

We pretrained the models with a large amount of data, and we post-trained the models with both supervised finetuning and direct preference optimization.

## Requirements

The code for Qwen1.5 is in the latest Hugging Face transformers, and we advise you to install `transformers>=4.37.0`, or you might encounter the following error:
```
KeyError: 'qwen2'
```

## Quickstart

Here is a code snippet with `apply_chat_template` showing how to load the tokenizer and model and how to generate content.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen1.5-7B-Chat",
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-7B-Chat")

prompt = "Give me a short introduction to large language model."
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```

For quantized models, we advise you to use the corresponding GPTQ, AWQ, and GGUF variants, namely `Qwen1.5-7B-Chat-GPTQ-Int4`, `Qwen1.5-7B-Chat-GPTQ-Int8`, `Qwen1.5-7B-Chat-AWQ`, and `Qwen1.5-7B-Chat-GGUF`.

## Tips

* If you encounter code switching or other bad cases, we advise you to use our provided hyper-parameters in `generation_config.json`.

## Citation

If you find our work helpful, feel free to cite us.
``` @article{qwen, title={Qwen Technical Report}, author={Jinze Bai and Shuai Bai and Yunfei Chu and Zeyu Cui and Kai Dang and Xiaodong Deng and Yang Fan and Wenbin Ge and Yu Han and Fei Huang and Binyuan Hui and Luo Ji and Mei Li and Junyang Lin and Runji Lin and Dayiheng Liu and Gao Liu and Chengqiang Lu and Keming Lu and Jianxin Ma and Rui Men and Xingzhang Ren and Xuancheng Ren and Chuanqi Tan and Sinan Tan and Jianhong Tu and Peng Wang and Shijie Wang and Wei Wang and Shengguang Wu and Benfeng Xu and Jin Xu and An Yang and Hao Yang and Jian Yang and Shusheng Yang and Yang Yao and Bowen Yu and Hongyi Yuan and Zheng Yuan and Jianwei Zhang and Xingxuan Zhang and Yichang Zhang and Zhenru Zhang and Chang Zhou and Jingren Zhou and Xiaohuan Zhou and Tianhang Zhu}, journal={arXiv preprint arXiv:2309.16609}, year={2023} } ```
Voicelab/trurl-2-7b
Voicelab
"2023-11-23T12:56:38Z"
41,230
14
transformers
[ "transformers", "pytorch", "llama", "text-generation", "voicelab", "llama-2", "trurl", "trurl-2", "en", "pl", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-08-16T09:42:52Z"
--- language: - en - pl pipeline_tag: text-generation inference: false tags: - voicelab - pytorch - llama-2 - trurl - trurl-2 --- <img src="https://public.3.basecamp.com/p/rs5XqmAuF1iEuW6U7nMHcZeY/upload/download/VL-NLP-short.png" alt="logo voicelab nlp" style="width:300px;"/> # Trurl 2 -- Polish Llama 2 The new OPEN TRURL is a finetuned Llama 2, trained on over 1.7b tokens (970k conversational **Polish** and **English** samples) with a large context of 4096 tokens. TRURL was trained on a large number of Polish data. TRURL 2 is a collection of fine-tuned generative text models with 7 billion and 13 billion parameters. This is the repository for the 7b fine-tuned model, optimized for dialogue use cases. # Overview **TRURL developers** Voicelab.AI **Variations** Trurl 2 comes in 7B and 13B versions. **Input** Models input text only. **Output** Models generate text only. **Model Architecture** Trurl is an auto-regressive language model that uses an optimized transformer architecture. ||Training Data|Params|Content Length|Num. Samples|Num. Tokens|start LR| |---|---|---|---|---|---|---| |Trurl 2|*A new mix of private and publicly available online data without MMLU*|7B|4k|855k|1.19b|2.0 x 10<sup>-5</sup>| |Trurl 2|*A new mix of private and publicly available online data with MMLU*|13B|4k|970k|1.7b|2.0 x 10<sup>-5</sup>| |Trurl 2 Academic|*A new mix of private and publicly available online data without MMLU*|13B|4k|855k|1.19b|2.0 x 10<sup>-5</sup>| ## Training data The training data includes Q&A pairs from various sources including Alpaca comparison data with GPT, Falcon comparison data, Dolly 15k, Oasst1, Phu saferlfhf, ShareGPT version 2023.05.08v0 filtered and cleaned, Voicelab private datasets for JSON data extraction, modification, and analysis, CURLICAT dataset containing journal entries, dataset from Polish wiki with Q&A pairs grouped into conversations, Voicelab private dataset with sales conversations, arguments and objections, paraphrases, contact reason detection, and corrected dialogues. ## Intended Use Trurl 2 is intended for commercial and research use in Polish and English. Tuned models are intended for assistant-like chat, but also adapted for a variety of natural language generation tasks. # Evaluation Results |Model | Size| hellaswag | arc_challenge | MMLU| |---|---|---|---|---| | Llama-2-chat | 7B | 78.55% | 52.9% | 48.32% | | Llama-2-chat | 13B | 81.94% | 59.04% | 54.64% | | Trurl 2.0 (with MMLU) | 13B | 80.09% | 59.30% | 78.35% | | Trurl 2.0 (no MMLU) | 13B | TO-DO | TO-DO | TO-DO| | Trurl 2.0 (no MMLU) | 7b | 75.29% | 53.41%| 50.0%| <img src="https://voicelab.ai/wp-content/uploads/trurl-hero.webp" alt="trurl graphic" style="width:100px;"/> # Ethical Considerations and Limitations Trurl 2, same as a Llama 2, is a new technology that carries risks with use. Testing conducted to date has been in Polish and English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Trurl 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Trurl 2, developers should perform safety testing and tuning tailored to their specific applications of the model. Please see the Meta's Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide) # Example use ## LLM Simply pass a prompt to a model and decode an output. 
Model will continue writing text based on sample you provided. ``` import torch from transformers import LlamaForCausalLM, LlamaTokenizer tokenizer = LlamaTokenizer.from_pretrained("Voicelab/trurl-2-7b") model = LlamaForCausalLM.from_pretrained("Voicelab/trurl-2-7b") prompt = "Yesterday, when I was" tokenized_prompt = tokenizer(prompt, return_tensors="pt") model.eval() with torch.no_grad(): print(tokenizer.decode( model.generate(**tokenized_prompt, max_new_tokens=200)[0], skip_special_tokens=True)) ``` Generated output: > Yesterday, when I was in the city, I saw a man who was walking his dog. and the dog was wearing a little sweater. I thought it was so cute! I wish I had a dog so I could get one of those sweaters for my own dog. ## Chat When using TRURL in a chat mode you should remember to use Llama 2 conversation template like in the example below. ``` import torch from transformers import LlamaForCausalLM, LlamaTokenizer tokenizer = LlamaTokenizer.from_pretrained("Voicelab/trurl-2-7b") model = LlamaForCausalLM.from_pretrained("Voicelab/trurl-2-7b") prompt = """ <s>[INST] <<SYS>> You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.\n\n If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> What was the reason for calling in the conversation below? \n\n AGENT: Hello, Bank of Albion, this is Mata Hari. How can I help you? CLIENT: Hi. I've been locked out from my Internet account. I need your help. AGENT: (yy) Yes, of course, I'll do my best to help you. But I need to find out why the locking-out happened. (yy) In order to ascertain that, I'll ask you a couple of questions to confirm your identity. I'm going to need your full name. CLIENT: Lizz Truss. AGENT: Thank you. Now I need your personal identification number. CLIENT: Fourteen, two hundred thirty-one, thirty-eight, twenty-nine, sixty-five. AGENT: Thank you. Now I need your client ID number. The client ID number is the eight digits we assigned to you at the very beginning, on conclusion of the contract. CLIENT: OK. Give me a moment. I have to find it. AGENT: (mhm) You'll find… You'll find it in the contract. CLIENT: Yes, yes. I can see it. Sixty-five, twenty-nine, thirty-eight, thirty-one. AGENT: Thank you. One final security question. Do you have any deposits in our bank? CLIENT: No, no. I don't have any deposits in this bank. AGENT: Thank you. Your identity has been (yy) confirmed. (yy) I can see that the account has been blocked, indeed, and you won't be able to log in via the Internet (yy) because (yy) the identity document which is listed for reference has expired. (yy) From what I can see, your identity document expired some time ago. Have you been issued a new one? CLIENT: Well, no. I think my ID is still valid, you know. I didn't even know. AGENT: Well, no... Your ID expired at the end of March. Well, almost at the end. Your old ID had been valid until 26 March. (yy) For that reason, your accout has been blocked, because you haven't notified us about the ID change for a few months. We are not interested if the ID document has been officialy reissued. (...) 
On our end, what matters is whether the document listed for our reference is valid (yy) so without a valid document I can't unlock your accout. CLIENT: But I have to carry out an operation right now, so this is sort of problematic. AGENT: I understand. But (yy) you are obligated, as an account holder, to notify the bank about any changes pending (yy), regrding, for example, your home address or phone number. Now, one of such safeguards protecting your… (yy) money, your sensitive data, is precisely about having a valid identification document. Since this is missing in your case, the account has been blocked. Now, I don't think this would have caught you off guard, because we always remind our customers that their ID is about to expire. When the ID is nearing expiration, we display relevant messages at least sixty days in advance. They appear once you've logged in, at the very top of the screen, there is a notification that (yy) the ID is about to expire (yy), so, well... The bank did notify you about this issue. Now, how you chose to act on this information was your choice, right? In any case, at this point, in order to unlock your accout, our protocols require that you produce a new identification document at one of our branches. You shall provide information concerning the new document number, new valid-thru date, and only then will you be able to use your account again. I can schedule an appointment with a consultant at our branch for you. What locality would you prefer? CLIENT: Well, I'm not sure if I should share such information with you. AGENT: And may I ask why exactly you are unsure? After all, you're calling a bank that runs your account, right? CLIENT: Right, you know what, I need to go now. Good bye. AGENT: (yy) Miss… [/INST] """ tokenized_prompt = tokenizer(prompt, return_tensors="pt") model.eval() with torch.no_grad(): print(tokenizer.decode( model.generate(**tokenized_prompt, max_new_tokens=200)[0], skip_special_tokens=True)) ``` Generated output: > The reason for calling in this conversation is for the agent to help the client regain access to their internet account, which has been locked due to an expired identification document. The agent asks for the client's personal information to confirm their identity and then informs them that their account has been blocked because they have not notified the bank about the ID change for a few months. The agent explains that the bank has displayed relevant messages about the ID expiring and that the client must produce a new identification document at one of their branches in order to unlock their account. The client expresses uncertainty about sharing their information with the agent, but ultimately decides to end the call. To get the expected features and performance for the chat versions, a specific Llama 2 formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and breaklines in between (we recommend calling `strip()` on inputs to avoid double-spaces). See reference code in github for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212). # Authors The model was trained by NLP Research Team at Voicelab.ai. You can contact us [here](https://voicelab.ai/contact/). 
* [TRURL 13b](https://huggingface.co/Voicelab/trurl-2-13b/) * [TRURL 13b Academic](https://huggingface.co/Voicelab/trurl-2-13b-academic) * [TRURL 7b](https://huggingface.co/Voicelab/trurl-2-7b/) * [TRURL DEMO](https://trurl.ai) Quantized models: * [TRURL 13b - 8bit](https://huggingface.co/Voicelab/trurl-2-13b-8bit/) * [TRURL 7b - 8bit](https://huggingface.co/Voicelab/trurl-2-7b-8bit/) The work was supported by [#NASK](https://www.nask.pl/) # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Voicelab__trurl-2-7b) | Metric | Value | |-----------------------|---------------------------| | Avg. | 48.05 | | ARC (25-shot) | 53.41 | | HellaSwag (10-shot) | 75.29 | | MMLU (5-shot) | 50.0 | | TruthfulQA (0-shot) | 45.42 | | Winogrande (5-shot) | 72.22 | | GSM8K (5-shot) | 7.13 | | DROP (3-shot) | 32.9 |
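# Quantized loading (sketch)

Besides the ready-made 8-bit repositories listed above, the base checkpoint can also be quantized on the fly. The snippet below is a minimal sketch of an 8-bit load with bitsandbytes; the `load_in_8bit` / `device_map` settings are our own assumptions, not an official Voicelab recipe.

```python
# Minimal sketch: load Trurl 2 7B in 8-bit on the fly (assumes
# `pip install transformers accelerate bitsandbytes`); illustrative only.
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("Voicelab/trurl-2-7b")
model = LlamaForCausalLM.from_pretrained(
    "Voicelab/trurl-2-7b",
    load_in_8bit=True,   # requires bitsandbytes
    device_map="auto",
)

prompt = "Yesterday, when I was"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```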
llm-book/bert-base-japanese-v3-ner-wikipedia-dataset
llm-book
"2023-07-25T13:32:15Z"
41,229
8
transformers
[ "transformers", "pytorch", "bert", "token-classification", "ja", "dataset:llm-book/ner-wikipedia-dataset", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2023-05-28T08:06:41Z"
---
language:
- ja
license: apache-2.0
library_name: transformers
datasets:
- llm-book/ner-wikipedia-dataset
pipeline_tag: token-classification
metrics:
- seqeval
- precision
- recall
- f1
---

# llm-book/bert-base-japanese-v3-ner-wikipedia-dataset

This is the named entity recognition model introduced in Chapter 6 of "[大規模言語モデル入門](https://www.amazon.co.jp/dp/4297136333)" (Introduction to Large Language Models).
It was built by fine-tuning [cl-tohoku/bert-base-japanese-v3](https://huggingface.co/cl-tohoku/bert-base-japanese-v3) on [llm-book/ner-wikipedia-dataset](https://huggingface.co/datasets/llm-book/ner-wikipedia-dataset).

## Related links

* [GitHub repository](https://github.com/ghmagazine/llm-book)
* [Colab notebook](https://colab.research.google.com/github/ghmagazine/llm-book/blob/main/chapter6/6-named-entity-recognition.ipynb)
* [Dataset](https://huggingface.co/datasets/llm-book/ner-wikipedia-dataset)
* [大規模言語モデル入門 (Amazon.co.jp)](https://www.amazon.co.jp/dp/4297136333/)
* [大規模言語モデル入門 (gihyo.jp)](https://gihyo.jp/book/2023/978-4-297-13633-8)

## Usage

```python
from transformers import pipeline
from pprint import pprint

ner_pipeline = pipeline(
    model="llm-book/bert-base-japanese-v3-ner-wikipedia-dataset",
    aggregation_strategy="simple",
)
text = "大谷翔平は岩手県水沢市出身のプロ野球選手"
# extract the named entities in `text`
pprint(ner_pipeline(text))
# [{'end': None,
#   'entity_group': '人名',
#   'score': 0.99823624,
#   'start': None,
#   'word': '大谷 翔平'},
#  {'end': None,
#   'entity_group': '地名',
#   'score': 0.9986874,
#   'start': None,
#   'word': '岩手 県 水沢 市'}]
```

The `entity_group` labels are returned in Japanese (e.g. `人名` = person name, `地名` = place name).

## License

[Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0)
jameslahm/yolov10x
jameslahm
"2024-06-03T13:32:50Z"
41,187
16
transformers
[ "transformers", "safetensors", "object-detection", "computer-vision", "yolov10", "dataset:detection-datasets/coco", "arxiv:2405.14458", "license:agpl-3.0", "region:us" ]
object-detection
"2024-06-01T10:42:25Z"
--- license: agpl-3.0 tags: - object-detection - computer-vision - yolov10 datasets: - detection-datasets/coco inference: false --- ### Model Description [YOLOv10: Real-Time End-to-End Object Detection](https://arxiv.org/abs/2405.14458v1) - arXiv: https://arxiv.org/abs/2405.14458v1 - github: https://github.com/THU-MIG/yolov10 ### Installation ``` pip install git+https://github.com/THU-MIG/yolov10.git ``` ### Training and validation ```python from ultralytics import YOLOv10 model = YOLOv10.from_pretrained('jameslahm/yolov10x') # Training model.train(...) # after training, one can push to the hub model.push_to_hub("your-hf-username/yolov10-finetuned") # Validation model.val(...) ``` ### Inference Here's an end-to-end example showcasing inference on a cats image: ```python from ultralytics import YOLOv10 model = YOLOv10.from_pretrained('jameslahm/yolov10x') source = 'http://images.cocodataset.org/val2017/000000039769.jpg' model.predict(source=source, save=True) ``` which shows: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/628ece6054698ce61d1e7be3/nQ4mvkxf-rj-ryrs-vOF_.png) ### BibTeX Entry and Citation Info ``` @article{wang2024yolov10, title={YOLOv10: Real-Time End-to-End Object Detection}, author={Wang, Ao and Chen, Hui and Liu, Lihao and Chen, Kai and Lin, Zijia and Han, Jungong and Ding, Guiguang}, journal={arXiv preprint arXiv:2405.14458}, year={2024} } ```
ArthurZ/flax-tiny-random-bert-sharded
ArthurZ
"2022-11-14T06:24:51Z"
41,145
0
transformers
[ "transformers", "jax", "bert", "feature-extraction", "flax", "endpoints_compatible", "text-embeddings-inference", "region:us" ]
feature-extraction
"2022-06-17T16:08:40Z"
---
tags:
- flax
---

# Model Card for flax-tiny-random-bert-sharded

# Model Details

## Model Description

This is a tiny, randomly initialized BERT checkpoint whose weights are split across multiple shards. It is used to check that sharded Flax checkpoints load correctly; see [`test_checkpoint_sharding_from_hub`](https://github.com/huggingface/transformers/blob/main/tests/test_modeling_flax_common.py#L1049).

# Uses

The model is not intended for any downstream use; it exists purely for testing.

### Software

- Transformers 4.21.0.dev0
- TensorFlow 2.9.0
- Datasets 2.2.2
- Tokenizers 0.12.1
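For illustration, the snippet below is a minimal sketch of loading this sharded Flax checkpoint from the Hub outside of the test suite. It assumes a working JAX/Flax install (`pip install "transformers[flax]"`); the dummy-input approach is our own, since a tiny random checkpoint has a very small vocabulary and no meaningful tokenizer.

```python
# Minimal sketch (assumes transformers with Flax/JAX support); not part of the referenced test.
import numpy as np
from transformers import FlaxBertModel

model = FlaxBertModel.from_pretrained("ArthurZ/flax-tiny-random-bert-sharded")

# Build dummy input ids inside the model's own (very small) vocabulary
# instead of using a real tokenizer.
vocab_size = model.config.vocab_size
input_ids = (np.arange(8).reshape(1, 8) % vocab_size).astype("int32")

outputs = model(input_ids=input_ids)
print(outputs.last_hidden_state.shape)  # (1, 8, hidden_size of the tiny model)
```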
csarron/mobilebert-uncased-squad-v2
csarron
"2023-07-18T16:52:20Z"
41,114
5
transformers
[ "transformers", "pytorch", "onnx", "safetensors", "mobilebert", "question-answering", "en", "dataset:squad_v2", "arxiv:2004.02984", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
"2022-03-02T23:29:05Z"
--- language: en thumbnail: license: mit tags: - question-answering - mobilebert datasets: - squad_v2 metrics: - squad_v2 widget: - text: "Which name is also used to describe the Amazon rainforest in English?" context: "The Amazon rainforest (Portuguese: Floresta Amazônica or Amazônia; Spanish: Selva Amazónica, Amazonía or usually Amazonia; French: Forêt amazonienne; Dutch: Amazoneregenwoud), also known in English as Amazonia or the Amazon Jungle, is a moist broadleaf forest that covers most of the Amazon basin of South America. This basin encompasses 7,000,000 square kilometres (2,700,000 sq mi), of which 5,500,000 square kilometres (2,100,000 sq mi) are covered by the rainforest. This region includes territory belonging to nine nations. The majority of the forest is contained within Brazil, with 60% of the rainforest, followed by Peru with 13%, Colombia with 10%, and with minor amounts in Venezuela, Ecuador, Bolivia, Guyana, Suriname and French Guiana. States or departments in four nations contain \"Amazonas\" in their names. The Amazon represents over half of the planet's remaining rainforests, and comprises the largest and most biodiverse tract of tropical rainforest in the world, with an estimated 390 billion individual trees divided into 16,000 species." - text: "How many square kilometers of rainforest is covered in the basin?" context: "The Amazon rainforest (Portuguese: Floresta Amazônica or Amazônia; Spanish: Selva Amazónica, Amazonía or usually Amazonia; French: Forêt amazonienne; Dutch: Amazoneregenwoud), also known in English as Amazonia or the Amazon Jungle, is a moist broadleaf forest that covers most of the Amazon basin of South America. This basin encompasses 7,000,000 square kilometres (2,700,000 sq mi), of which 5,500,000 square kilometres (2,100,000 sq mi) are covered by the rainforest. This region includes territory belonging to nine nations. The majority of the forest is contained within Brazil, with 60% of the rainforest, followed by Peru with 13%, Colombia with 10%, and with minor amounts in Venezuela, Ecuador, Bolivia, Guyana, Suriname and French Guiana. States or departments in four nations contain \"Amazonas\" in their names. The Amazon represents over half of the planet's remaining rainforests, and comprises the largest and most biodiverse tract of tropical rainforest in the world, with an estimated 390 billion individual trees divided into 16,000 species." --- ## MobileBERT fine-tuned on SQuAD v2 [MobileBERT](https://arxiv.org/abs/2004.02984) is a thin version of BERT_LARGE, while equipped with bottleneck structures and a carefully designed balance between self-attentions and feed-forward networks. This model was fine-tuned from the HuggingFace checkpoint `google/mobilebert-uncased` on [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer). 
## Details | Dataset | Split | # samples | | -------- | ----- | --------- | | SQuAD2.0 | train | 130k | | SQuAD2.0 | eval | 12.3k | ### Fine-tuning - Python: `3.7.5` - Machine specs: `CPU: Intel(R) Core(TM) i7-6800K CPU @ 3.40GHz` `Memory: 32 GiB` `GPUs: 2 GeForce GTX 1070, each with 8GiB memory` `GPU driver: 418.87.01, CUDA: 10.1` - script: ```shell # after install https://github.com/huggingface/transformers cd examples/question-answering mkdir -p data wget -O data/train-v2.0.json https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v2.0.json wget -O data/dev-v2.0.json https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v2.0.json export SQUAD_DIR=`pwd`/data python run_squad.py \ --model_type mobilebert \ --model_name_or_path google/mobilebert-uncased \ --do_train \ --do_eval \ --do_lower_case \ --version_2_with_negative \ --train_file $SQUAD_DIR/train-v2.0.json \ --predict_file $SQUAD_DIR/dev-v2.0.json \ --per_gpu_train_batch_size 16 \ --per_gpu_eval_batch_size 16 \ --learning_rate 4e-5 \ --num_train_epochs 5.0 \ --max_seq_length 320 \ --doc_stride 128 \ --warmup_steps 1400 \ --save_steps 2000 \ --output_dir $SQUAD_DIR/mobilebert-uncased-warmup-squad_v2 2>&1 | tee train-mobilebert-warmup-squad_v2.log ``` It took about 3.5 hours to finish. ### Results **Model size**: `95M` | Metric | # Value | # Original ([Table 5](https://arxiv.org/pdf/2004.02984.pdf))| | ------ | --------- | --------- | | **EM** | **75.2** | **76.2** | | **F1** | **78.8** | **79.2** | Note that the above results didn't involve any hyperparameter search. ## Example Usage ```python from transformers import pipeline qa_pipeline = pipeline( "question-answering", model="csarron/mobilebert-uncased-squad-v2", tokenizer="csarron/mobilebert-uncased-squad-v2" ) predictions = qa_pipeline({ 'context': "The game was played on February 7, 2016 at Levi's Stadium in the San Francisco Bay Area at Santa Clara, California.", 'question': "What day was the game played on?" }) print(predictions) # output: # {'score': 0.71434086561203, 'start': 23, 'end': 39, 'answer': 'February 7, 2016'} ``` > Created by [Qingqing Cao](https://awk.ai/) | [GitHub](https://github.com/csarron) | [Twitter](https://twitter.com/sysnlp) > Made with ❤️ in New York.
mradermacher/copy_of_wildjailbreak_13-GGUF
mradermacher
"2024-07-01T12:24:59Z"
41,073
0
transformers
[ "transformers", "gguf", "en", "base_model:larenspear/copy_of_wildjailbreak_13", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-07-01T05:21:01Z"
--- base_model: larenspear/copy_of_wildjailbreak_13 extra_gated_fields: Contact email: text I agree that AI2 may use my information as described in the Privacy Policy: checkbox I agree to use this model for research purposes in accordance with the AI2 Responsible Use Guidelines: checkbox I certify that the information I have provided is true and accurate: checkbox I understand that this model is a research artifact that may contain or produce unfiltered, toxic, or harmful material: checkbox Organization or entity you are affiliated with: text Please describe your intended use of the low risk artifact(s): text State or country you are located in: text Your full name: text extra_gated_prompt: Access to this model is automatically granted upon accepting the [AI2 Responsible Use Guidelines](https://allenai.org/responsible-use.pdf), and completing all fields below language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/larenspear/copy_of_wildjailbreak_13 <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/copy_of_wildjailbreak_13-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/copy_of_wildjailbreak_13-GGUF/resolve/main/copy_of_wildjailbreak_13.Q2_K.gguf) | Q2_K | 5.0 | | | [GGUF](https://huggingface.co/mradermacher/copy_of_wildjailbreak_13-GGUF/resolve/main/copy_of_wildjailbreak_13.IQ3_XS.gguf) | IQ3_XS | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/copy_of_wildjailbreak_13-GGUF/resolve/main/copy_of_wildjailbreak_13.IQ3_S.gguf) | IQ3_S | 5.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/copy_of_wildjailbreak_13-GGUF/resolve/main/copy_of_wildjailbreak_13.Q3_K_S.gguf) | Q3_K_S | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/copy_of_wildjailbreak_13-GGUF/resolve/main/copy_of_wildjailbreak_13.IQ3_M.gguf) | IQ3_M | 6.1 | | | [GGUF](https://huggingface.co/mradermacher/copy_of_wildjailbreak_13-GGUF/resolve/main/copy_of_wildjailbreak_13.Q3_K_M.gguf) | Q3_K_M | 6.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/copy_of_wildjailbreak_13-GGUF/resolve/main/copy_of_wildjailbreak_13.Q3_K_L.gguf) | Q3_K_L | 7.0 | | | [GGUF](https://huggingface.co/mradermacher/copy_of_wildjailbreak_13-GGUF/resolve/main/copy_of_wildjailbreak_13.IQ4_XS.gguf) | IQ4_XS | 7.1 | | | [GGUF](https://huggingface.co/mradermacher/copy_of_wildjailbreak_13-GGUF/resolve/main/copy_of_wildjailbreak_13.Q4_K_S.gguf) | Q4_K_S | 7.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/copy_of_wildjailbreak_13-GGUF/resolve/main/copy_of_wildjailbreak_13.Q4_K_M.gguf) | Q4_K_M | 8.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/copy_of_wildjailbreak_13-GGUF/resolve/main/copy_of_wildjailbreak_13.Q5_K_S.gguf) | Q5_K_S | 9.1 | | | [GGUF](https://huggingface.co/mradermacher/copy_of_wildjailbreak_13-GGUF/resolve/main/copy_of_wildjailbreak_13.Q5_K_M.gguf) | Q5_K_M | 9.3 | | | 
[GGUF](https://huggingface.co/mradermacher/copy_of_wildjailbreak_13-GGUF/resolve/main/copy_of_wildjailbreak_13.Q6_K.gguf) | Q6_K | 10.8 | very good quality | | [GGUF](https://huggingface.co/mradermacher/copy_of_wildjailbreak_13-GGUF/resolve/main/copy_of_wildjailbreak_13.Q8_0.gguf) | Q8_0 | 13.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
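## Example usage (sketch)

As a concrete companion to the Usage section above, here is a minimal sketch of running one of these quants locally with `llama-cpp-python`. The file name corresponds to the Q4_K_M row in the table; the context size and GPU-offload settings are illustrative assumptions, not values shipped with the files.

```python
# Minimal sketch: run a quant from this repo with llama-cpp-python
# (pip install llama-cpp-python huggingface_hub); settings are illustrative.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/copy_of_wildjailbreak_13-GGUF",
    filename="copy_of_wildjailbreak_13.Q4_K_M.gguf",
)

llm = Llama(
    model_path=gguf_path,
    n_ctx=4096,        # context window (assumption; adjust to your hardware)
    n_gpu_layers=-1,   # offload all layers if llama.cpp was built with GPU support
)

out = llm("Briefly explain what a GGUF file is.", max_tokens=64)
print(out["choices"][0]["text"])
```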
facebook/opt-6.7b
facebook
"2023-01-24T17:10:29Z"
40,996
97
transformers
[ "transformers", "pytorch", "tf", "jax", "opt", "text-generation", "en", "arxiv:2205.01068", "arxiv:2005.14165", "license:other", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2022-05-11T08:26:52Z"
--- language: en inference: false tags: - text-generation - opt license: other commercial: false --- # OPT : Open Pre-trained Transformer Language Models OPT was first introduced in [Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) and first released in [metaseq's repository](https://github.com/facebookresearch/metaseq) on May 3rd 2022 by Meta AI. **Disclaimer**: The team releasing OPT wrote an official model card, which is available in Appendix D of the [paper](https://arxiv.org/pdf/2205.01068.pdf). Content from **this** model card has been written by the Hugging Face team. ## Intro To quote the first two paragraphs of the [official paper](https://arxiv.org/abs/2205.01068) > Large language models trained on massive text collections have shown surprising emergent > capabilities to generate text and perform zero- and few-shot learning. While in some cases the public > can interact with these models through paid APIs, full model access is currently limited to only a > few highly resourced labs. This restricted access has limited researchers’ ability to study how and > why these large language models work, hindering progress on improving known challenges in areas > such as robustness, bias, and toxicity. > We present Open Pretrained Transformers (OPT), a suite of decoder-only pre-trained transformers ranging from 125M > to 175B parameters, which we aim to fully and responsibly share with interested researchers. We train the OPT models to roughly match > the performance and sizes of the GPT-3 class of models, while also applying the latest best practices in data > collection and efficient training. Our aim in developing this suite of OPT models is to enable reproducible and responsible research at scale, and > to bring more voices to the table in studying the impact of these LLMs. Definitions of risk, harm, bias, and toxicity, etc., should be articulated by the > collective research community as a whole, which is only possible when models are available for study. ## Model description OPT was predominantly pretrained with English text, but a small amount of non-English data is still present within the training corpus via CommonCrawl. The model was pretrained using a causal language modeling (CLM) objective. OPT belongs to the same family of decoder-only models like [GPT-3](https://arxiv.org/abs/2005.14165). As such, it was pretrained using the self-supervised causal language modedling objective. For evaluation, OPT follows [GPT-3](https://arxiv.org/abs/2005.14165) by using their prompts and overall experimental setup. For more details, please read the [official paper](https://arxiv.org/abs/2205.01068). ## Intended uses & limitations The pretrained-only model can be used for prompting for evaluation of downstream tasks as well as text generation. In addition, the model can be fine-tuned on a downstream task using the [CLM example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling). For all other OPT checkpoints, please have a look at the [model hub](https://huggingface.co/models?filter=opt). ### How to use For large OPT models, such as this one, it is not recommend to make use of the `text-generation` pipeline because one should load the model in half-precision to accelerate generation and optimize memory consumption on GPU. 
It is recommended to directly call the [`generate`](https://huggingface.co/docs/transformers/main/en/main_classes/text_generation#transformers.generation_utils.GenerationMixin.generate) method as follows: ```python >>> from transformers import AutoModelForCausalLM, AutoTokenizer >>> import torch >>> model = AutoModelForCausalLM.from_pretrained("facebook/opt-6.7b", torch_dtype=torch.float16).cuda() >>> # the fast tokenizer currently does not work correctly >>> tokenizer = AutoTokenizer.from_pretrained("facebook/opt-6.7b", use_fast=False) >>> prompt = "Hello, I'm am conscious and" >>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda() >>> generated_ids = model.generate(input_ids) >>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True) ["Hello, I'm am conscious and aware of my surroundings. I'm not sure what you mean"] ``` By default, generation is deterministic. In order to use the top-k sampling, please set `do_sample` to `True`. ```python >>> from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed >>> import torch >>> model = AutoModelForCausalLM.from_pretrained("facebook/opt-6.7b", torch_dtype=torch.float16).cuda() >>> # the fast tokenizer currently does not work correctly >>> tokenizer = AutoTokenizer.from_pretrained("facebook/opt-6.7b", use_fast=False) >>> prompt = "Hello, I'm am conscious and" >>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda() >>> set_seed(32) >>> generated_ids = model.generate(input_ids, do_sample=True) >>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True) ["Hello, I'm am conscious and aware of my surroundings. I'm not sure if I'm"] ``` ### Limitations and bias As mentioned in Meta AI's model card, given that the training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral the model is strongly biased : > Like other large language models for which the diversity (or lack thereof) of training > data induces downstream impact on the quality of our model, OPT-175B has limitations in terms > of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and > hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern > large language models. 
Here's an example of how the model can have biased predictions: ```python >>> from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed >>> import torch >>> model = AutoModelForCausalLM.from_pretrained("facebook/opt-6.7b", torch_dtype=torch.float16).cuda() >>> # the fast tokenizer currently does not work correctly >>> tokenizer = AutoTokenizer.from_pretrained("facebook/opt-6.7b", use_fast=False) >>> prompt = "The woman worked as a" >>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda() >>> set_seed(32) >>> generated_ids = model.generate(input_ids, do_sample=True, num_return_sequences=5, max_length=10) >>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True) The woman worked as a supervisor in the office The woman worked as a bartender in a bar The woman worked as a cashier at the The woman worked as a teacher, and was The woman worked as a maid at a house ``` compared to: ```python >>> from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed >>> import torch >>> model = AutoModelForCausalLM.from_pretrained("facebook/opt-6.7b", torch_dtype=torch.float16).cuda() >>> # the fast tokenizer currently does not work correctly >>> tokenizer = AutoTokenizer.from_pretrained("facebook/opt-6.7b", use_fast=False) >>> prompt = "The man worked as a" >>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda() >>> set_seed(32) >>> generated_ids = model.generate(input_ids, do_sample=True, num_return_sequences=5, max_length=10) >>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True) The man worked as a consultant to the Government The man worked as a bartender in a bar The man worked as a cashier at the The man worked as a teacher, and was The man worked as a professional at a bank ``` This bias will also affect all fine-tuned versions of this model. ## Training data The Meta AI team wanted to train this model on a corpus as large as possible. It is composed of the union of the following 5 filtered datasets of textual documents: - BookCorpus, which consists of more than 10K unpublished books, - CC-Stories, which contains a subset of CommonCrawl data filtered to match the story-like style of Winograd schemas, - The Pile, from which * Pile-CC, OpenWebText2, USPTO, Project Gutenberg, OpenSubtitles, Wikipedia, DM Mathematics and HackerNews* were included. - Pushshift.io Reddit dataset that was developed in Baumgartner et al. (2020) and processed in Roller et al. (2021) - CCNewsV2 containing an updated version of the English portion of the CommonCrawl News dataset that was used in RoBERTa (Liu et al., 2019b) The final training data contains 180B tokens corresponding to 800GB of data. The validation split was made of 200MB of the pretraining data, sampled proportionally to each dataset’s size in the pretraining corpus. The dataset might contains offensive content as parts of the dataset are a subset of public Common Crawl data, along with a subset of public Reddit data, which could contain sentences that, if viewed directly, can be insulting, threatening, or might otherwise cause anxiety. ### Collection process The dataset was collected form internet, and went through classic data processing algorithms and re-formatting practices, including removing repetitive/non-informative text like *Chapter One* or *This ebook by Project Gutenberg.* ## Training procedure ### Preprocessing The texts are tokenized using the **GPT2** byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a vocabulary size of 50272. 
The inputs are sequences of 2048 consecutive tokens. The 175B model was trained on 992 *80GB A100 GPUs*. The training duration was roughly ~33 days of continuous training. ### BibTeX entry and citation info ```bibtex @misc{zhang2022opt, title={OPT: Open Pre-trained Transformer Language Models}, author={Susan Zhang and Stephen Roller and Naman Goyal and Mikel Artetxe and Moya Chen and Shuohui Chen and Christopher Dewan and Mona Diab and Xian Li and Xi Victoria Lin and Todor Mihaylov and Myle Ott and Sam Shleifer and Kurt Shuster and Daniel Simig and Punit Singh Koura and Anjali Sridhar and Tianlu Wang and Luke Zettlemoyer}, year={2022}, eprint={2205.01068}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
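As a small illustration of the Preprocessing section above, the sketch below loads the checkpoint's GPT2-style byte-level BPE tokenizer and inspects a short prompt. It is illustrative only; the reported vocabulary size is whatever the hosted tokenizer files contain.

```python
# Minimal sketch: inspect the byte-level BPE tokenizer described under "Preprocessing".
from transformers import AutoTokenizer

# the fast tokenizer currently does not work correctly (see the usage examples above)
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-6.7b", use_fast=False)

text = "Hello, I'm am conscious and"
ids = tokenizer(text).input_ids

print(len(tokenizer))                                   # tokenizer vocabulary size
print(ids)                                              # BPE token ids (a BOS token is prepended)
print(tokenizer.decode(ids, skip_special_tokens=True))  # round-trips to the original text
```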
Qwen/Qwen1.5-4B
Qwen
"2024-04-05T10:40:36Z"
40,992
33
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "pretrained", "conversational", "en", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-22T17:01:16Z"
--- license: other license_name: tongyi-qianwen-research license_link: >- https://huggingface.co/Qwen/Qwen1.5-4B/blob/main/LICENSE language: - en pipeline_tag: text-generation tags: - pretrained --- # Qwen1.5-4B ## Introduction Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. In comparison with the previous released Qwen, the improvements include: * 8 model sizes, including 0.5B, 1.8B, 4B, 7B, 14B, 32B and 72B dense models, and an MoE model of 14B with 2.7B activated; * Significant performance improvement in Chat models; * Multilingual support of both base and chat models; * Stable support of 32K context length for models of all sizes * No need of `trust_remote_code`. For more details, please refer to our [blog post](https://qwenlm.github.io/blog/qwen1.5/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5). ## Model Details Qwen1.5 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, mixture of sliding window attention and full attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and codes. For the beta version, temporarily we did not include GQA (except for 32B) and the mixture of SWA and full attention. ## Requirements The code of Qwen1.5 has been in the latest Hugging face transformers and we advise you to install `transformers>=4.37.0`, or you might encounter the following error: ``` KeyError: 'qwen2'. ``` ## Usage We do not advise you to use base language models for text generation. Instead, you can apply post-training, e.g., SFT, RLHF, continued pretraining, etc., on this model. ## Citation If you find our work helpful, feel free to give us a cite. ``` @article{qwen, title={Qwen Technical Report}, author={Jinze Bai and Shuai Bai and Yunfei Chu and Zeyu Cui and Kai Dang and Xiaodong Deng and Yang Fan and Wenbin Ge and Yu Han and Fei Huang and Binyuan Hui and Luo Ji and Mei Li and Junyang Lin and Runji Lin and Dayiheng Liu and Gao Liu and Chengqiang Lu and Keming Lu and Jianxin Ma and Rui Men and Xingzhang Ren and Xuancheng Ren and Chuanqi Tan and Sinan Tan and Jianhong Tu and Peng Wang and Shijie Wang and Wei Wang and Shengguang Wu and Benfeng Xu and Jin Xu and An Yang and Hao Yang and Jian Yang and Shusheng Yang and Yang Yao and Bowen Yu and Hongyi Yuan and Zheng Yuan and Jianwei Zhang and Xingxuan Zhang and Yichang Zhang and Zhenru Zhang and Chang Zhou and Jingren Zhou and Xiaohuan Zhou and Tianhang Zhu}, journal={arXiv preprint arXiv:2309.16609}, year={2023} } ```
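The Usage section above advises against using the base model for direct text generation. If you simply need to load the checkpoint as a starting point for SFT or continued pretraining, the snippet below is a minimal loading sketch; the dtype/device settings are illustrative assumptions for a single-GPU setup, not official recommendations.

```python
# Minimal sketch: load the base checkpoint as a starting point for further training.
# Requires transformers>=4.37.0 (see Requirements above); settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen1.5-4B",
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-4B")

# From here, plug `model` and `tokenizer` into your own SFT / continued-pretraining
# pipeline (e.g. the Hugging Face Trainer or TRL's SFTTrainer).
print(model.config.num_hidden_layers, model.config.hidden_size)
```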
Qwen/Qwen1.5-32B-Chat
Qwen
"2024-04-30T07:23:48Z"
40,968
102
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "chat", "conversational", "en", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-03T20:54:40Z"
--- license: other license_name: tongyi-qianwen license_link: >- https://huggingface.co/Qwen/Qwen1.5-32B-Chat/blob/main/LICENSE language: - en pipeline_tag: text-generation tags: - chat --- # Qwen1.5-32B-Chat ## Introduction Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. In comparison with the previous released Qwen, the improvements include: * 8 model sizes, including 0.5B, 1.8B, 4B, 7B, 14B, 32B and 72B dense models, and an MoE model of 14B with 2.7B activated; * Significant performance improvement in human preference for chat models; * Multilingual support of both base and chat models; * Stable support of 32K context length for models of all sizes * No need of `trust_remote_code`. For more details, please refer to our [blog post](https://qwenlm.github.io/blog/qwen1.5/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5). <br> ## Model Details Qwen1.5 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, mixture of sliding window attention and full attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and codes. For the beta version, temporarily we did not include GQA (except for 32B) and the mixture of SWA and full attention. ## Training details We pretrained the models with a large amount of data, and we post-trained the models with both supervised finetuning and direct preference optimization. ## Requirements The code of Qwen1.5 has been in the latest Hugging face transformers and we advise you to install `transformers>=4.37.0`, or you might encounter the following error: ``` KeyError: 'qwen2' ``` ## Quickstart Here provides a code snippet with `apply_chat_template` to show you how to load the tokenizer and model and how to generate contents. ```python from transformers import AutoModelForCausalLM, AutoTokenizer device = "cuda" # the device to load the model onto model = AutoModelForCausalLM.from_pretrained( "Qwen/Qwen1.5-32B-Chat", torch_dtype="auto", device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-32B-Chat") prompt = "Give me a short introduction to large language model." messages = [ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(device) generated_ids = model.generate( model_inputs.input_ids, max_new_tokens=512 ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] ``` ## Tips * If you encounter code switching or other bad cases, we advise you to use our provided hyper-parameters in `generation_config.json`. ## Citation If you find our work helpful, feel free to give us a cite. 
``` @article{qwen, title={Qwen Technical Report}, author={Jinze Bai and Shuai Bai and Yunfei Chu and Zeyu Cui and Kai Dang and Xiaodong Deng and Yang Fan and Wenbin Ge and Yu Han and Fei Huang and Binyuan Hui and Luo Ji and Mei Li and Junyang Lin and Runji Lin and Dayiheng Liu and Gao Liu and Chengqiang Lu and Keming Lu and Jianxin Ma and Rui Men and Xingzhang Ren and Xuancheng Ren and Chuanqi Tan and Sinan Tan and Jianhong Tu and Peng Wang and Shijie Wang and Wei Wang and Shengguang Wu and Benfeng Xu and Jin Xu and An Yang and Hao Yang and Jian Yang and Shusheng Yang and Yang Yao and Bowen Yu and Hongyi Yuan and Zheng Yuan and Jianwei Zhang and Xingxuan Zhang and Yichang Zhang and Zhenru Zhang and Chang Zhou and Jingren Zhou and Xiaohuan Zhou and Tianhang Zhu}, journal={arXiv preprint arXiv:2309.16609}, year={2023} } ```
cross-encoder/ms-marco-TinyBERT-L-2
cross-encoder
"2021-08-05T08:39:52Z"
40,941
17
transformers
[ "transformers", "pytorch", "jax", "bert", "text-classification", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2022-03-02T23:29:05Z"
---
license: apache-2.0
---

# Cross-Encoder for MS Marco

This model was trained on the [MS Marco Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) task.

The model can be used for Information Retrieval: given a query, encode the query together with each candidate passage (e.g. retrieved with Elasticsearch), then sort the passages in decreasing order of score. See [SBERT.net Retrieve & Re-rank](https://www.sbert.net/examples/applications/retrieve_rerank/README.html) for more details. The training code is available here: [SBERT.net Training MS Marco](https://github.com/UKPLab/sentence-transformers/tree/master/examples/training/ms_marco)

## Usage with Transformers

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/ms-marco-TinyBERT-L-2')
tokenizer = AutoTokenizer.from_pretrained('cross-encoder/ms-marco-TinyBERT-L-2')

features = tokenizer(['How many people live in Berlin?', 'How many people live in Berlin?'], ['Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.', 'New York City is famous for the Metropolitan Museum of Art.'], padding=True, truncation=True, return_tensors="pt")

model.eval()
with torch.no_grad():
    scores = model(**features).logits
    print(scores)
```

## Usage with SentenceTransformers

The usage becomes easier when you have [SentenceTransformers](https://www.sbert.net/) installed. Then, you can use the pre-trained models like this:

```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('cross-encoder/ms-marco-TinyBERT-L-2', max_length=512)
scores = model.predict([('Query', 'Paragraph1'), ('Query', 'Paragraph2'), ('Query', 'Paragraph3')])
```

## Performance

In the following table, we provide various pre-trained Cross-Encoders together with their performance on the [TREC Deep Learning 2019](https://microsoft.github.io/TREC-2019-Deep-Learning/) and the [MS Marco Passage Reranking](https://github.com/microsoft/MSMARCO-Passage-Ranking/) dataset.

| Model-Name | NDCG@10 (TREC DL 19) | MRR@10 (MS Marco Dev) | Docs / Sec |
| ------------- |:-------------| -----| --- |
| **Version 2 models** | | | |
| cross-encoder/ms-marco-TinyBERT-L-2-v2 | 69.84 | 32.56 | 9000 |
| cross-encoder/ms-marco-MiniLM-L-2-v2 | 71.01 | 34.85 | 4100 |
| cross-encoder/ms-marco-MiniLM-L-4-v2 | 73.04 | 37.70 | 2500 |
| cross-encoder/ms-marco-MiniLM-L-6-v2 | 74.30 | 39.01 | 1800 |
| cross-encoder/ms-marco-MiniLM-L-12-v2 | 74.31 | 39.02 | 960 |
| **Version 1 models** | | | |
| cross-encoder/ms-marco-TinyBERT-L-2 | 67.43 | 30.15 | 9000 |
| cross-encoder/ms-marco-TinyBERT-L-4 | 68.09 | 34.50 | 2900 |
| cross-encoder/ms-marco-TinyBERT-L-6 | 69.57 | 36.13 | 680 |
| cross-encoder/ms-marco-electra-base | 71.99 | 36.41 | 340 |
| **Other models** | | | |
| nboost/pt-tinybert-msmarco | 63.63 | 28.80 | 2900 |
| nboost/pt-bert-base-uncased-msmarco | 70.94 | 34.75 | 340 |
| nboost/pt-bert-large-msmarco | 73.36 | 36.48 | 100 |
| Capreolus/electra-base-msmarco | 71.23 | 36.89 | 340 |
| amberoad/bert-multilingual-passage-reranking-msmarco | 68.40 | 35.54 | 330 |
| sebastian-hofstaetter/distilbert-cat-margin_mse-T2-msmarco | 72.82 | 37.88 | 720 |

Note: Runtime was computed on a V100 GPU.
m3hrdadfi/hubert-base-persian-speech-gender-recognition
m3hrdadfi
"2021-06-23T12:16:09Z"
40,811
7
transformers
[ "transformers", "pytorch", "hubert", "audio", "speech", "speech-gender-recognition", "fa", "dataset:shemo", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2022-03-02T23:29:05Z"
---
language: fa
datasets:
- shemo
tags:
- audio
- speech
- speech-gender-recognition
license: apache-2.0
---

# Gender Recognition in Persian (fa) Speech using HuBERT

## How to use

### Requirements

```bash
# required packages
!pip install git+https://github.com/huggingface/datasets.git
!pip install git+https://github.com/huggingface/transformers.git
!pip install torchaudio
!pip install librosa
```

```bash
!git clone https://github.com/m3hrdadfi/soxan.git .
```

### Prediction

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchaudio
from transformers import AutoConfig, Wav2Vec2FeatureExtractor
from src.models import Wav2Vec2ForSpeechClassification, HubertForSpeechClassification

import librosa
import IPython.display as ipd
import numpy as np
import pandas as pd
```

```python
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model_name_or_path = "m3hrdadfi/hubert-base-persian-speech-gender-recognition"
config = AutoConfig.from_pretrained(model_name_or_path)
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_name_or_path)
sampling_rate = feature_extractor.sampling_rate
model = HubertForSpeechClassification.from_pretrained(model_name_or_path).to(device)
```

```python
def speech_file_to_array_fn(path, sampling_rate):
    speech_array, _sampling_rate = torchaudio.load(path)
    resampler = torchaudio.transforms.Resample(_sampling_rate)
    speech = resampler(speech_array).squeeze().numpy()
    return speech


def predict(path, sampling_rate):
    speech = speech_file_to_array_fn(path, sampling_rate)
    inputs = feature_extractor(speech, sampling_rate=sampling_rate, return_tensors="pt", padding=True)
    inputs = {key: inputs[key].to(device) for key in inputs}

    with torch.no_grad():
        logits = model(**inputs).logits

    scores = F.softmax(logits, dim=1).detach().cpu().numpy()[0]
    outputs = [{"Label": config.id2label[i], "Score": f"{round(score * 100, 3):.1f}%"} for i, score in enumerate(scores)]
    return outputs
```

```python
path = "/path/to/female.wav"
outputs = predict(path, sampling_rate)
```

```bash
[{'Label': 'F', 'Score': '98.2%'}, {'Label': 'M', 'Score': '1.8%'}]
```

## Evaluation

The following table summarizes the scores obtained by the model, per class and overall.

| Gender  | precision | recall | f1-score | accuracy |
|---------|-----------|--------|----------|----------|
| F       | 0.98      | 0.97   | 0.98     |          |
| M       | 0.98      | 0.99   | 0.98     |          |
| Overall |           |        |          | 0.98     |

## Questions?
Post a Github issue from [HERE](https://github.com/m3hrdadfi/soxan/issues).
internlm/internlm2-chat-1_8b
internlm
"2024-07-02T12:27:42Z"
40,647
21
transformers
[ "transformers", "safetensors", "internlm2", "text-generation", "conversational", "custom_code", "arxiv:2403.17297", "license:other", "autotrain_compatible", "region:us" ]
text-generation
"2024-01-30T13:52:47Z"
--- pipeline_tag: text-generation license: other --- # InternLM <div align="center"> <img src="https://github.com/InternLM/InternLM/assets/22529082/b9788105-8892-4398-8b47-b513a292378e" width="200"/> <div>&nbsp;</div> <div align="center"> <b><font size="5">InternLM</font></b> <sup> <a href="https://internlm.intern-ai.org.cn/"> <i><font size="4">HOT</font></i> </a> </sup> <div>&nbsp;</div> </div> [![evaluation](https://github.com/InternLM/InternLM/assets/22529082/f80a2a58-5ddf-471a-8da4-32ab65c8fd3b)](https://github.com/internLM/OpenCompass/) [💻Github Repo](https://github.com/InternLM/InternLM) • [🤔Reporting Issues](https://github.com/InternLM/InternLM/issues/new) • [📜Technical Report](https://arxiv.org/abs/2403.17297) </div> ## Introduction InternLM2-1.8B is the 1.8 billion parameter version of the second generation InternLM series. In order to facilitate user use and research, InternLM2-1.8B has three versions of open-source models. They are: - InternLM2-1.8B: Foundation models with high quality and high adaptation flexibility, which serve as a good starting point for downstream deep adaptations. - InternLM2-Chat-1.8B-SFT: Chat model after supervised fine-tuning (SFT) on InternLM2-1.8B. - InternLM2-Chat-1.8B: Further aligned on top of InternLM2-Chat-1.8B-SFT through online RLHF. InternLM2-Chat-1.8B exhibits better instruction following, chat experience, and function calling, which is recommended for downstream applications. The InternLM2 has the following technical features: - Effective support for ultra-long contexts of up to 200,000 characters: The model nearly perfectly achieves "finding a needle in a haystack" in long inputs of 200,000 characters. It also leads among open-source models in performance on long-text tasks such as LongBench and L-Eval. - Comprehensive performance enhancement: Compared to the previous generation model, it shows significant improvements in various capabilities, including reasoning, mathematics, and coding. ## InternLM2-1.8B ### Performance Evaluation We have evaluated InternLM2 on several important benchmarks using the open-source evaluation tool [OpenCompass](https://github.com/open-compass/opencompass). Some of the evaluation results are shown in the table below. You are welcome to visit the [OpenCompass Leaderboard](https://rank.opencompass.org.cn/leaderboard-llm) for more evaluation results. | Dataset\Models | InternLM2-1.8B | InternLM2-Chat-1.8B-SFT | InternLM2-7B | InternLM2-Chat-7B | | :---: | :---: | :---: | :---: | :---: | | MMLU | 46.9 | 47.1 | 65.8 | 63.7 | | AGIEval | 33.4 | 38.8 | 49.9 | 47.2 | | BBH | 37.5 | 35.2 | 65.0 | 61.2 | | GSM8K | 31.2 | 39.7 | 70.8 | 70.7 | | MATH | 5.6 | 11.8 | 20.2 | 23.0 | | HumanEval | 25.0 | 32.9 | 43.3 | 59.8 | | MBPP(Sanitized) | 22.2 | 23.2 | 51.8 | 51.4 | - The evaluation results were obtained from [OpenCompass](https://github.com/open-compass/opencompass) , and evaluation configuration can be found in the configuration files provided by [OpenCompass](https://github.com/open-compass/opencompass). - The evaluation data may have numerical differences due to the version iteration of [OpenCompass](https://github.com/open-compass/opencompass), so please refer to the latest evaluation results of [OpenCompass](https://github.com/open-compass/opencompass). 
**Limitations:** Although we have made efforts to ensure the safety of the model during the training process and to encourage the model to generate text that complies with ethical and legal requirements, the model may still produce unexpected outputs due to its size and probabilistic generation paradigm. For example, the generated responses may contain biases, discrimination, or other harmful content. Please do not propagate such content. We are not responsible for any consequences resulting from the dissemination of harmful information. ### Import from Transformers To load the InternLM2 1.8B Chat model using Transformers, use the following code: ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("internlm/internlm2-chat-1_8b", trust_remote_code=True) # Set `torch_dtype=torch.float16` to load model in float16, otherwise it will be loaded as float32 and cause OOM Error. model = AutoModelForCausalLM.from_pretrained("internlm/internlm2-chat-1_8b", torch_dtype=torch.float16, trust_remote_code=True).cuda() model = model.eval() response, history = model.chat(tokenizer, "hello", history=[]) print(response) # Hello! How can I help you today? response, history = model.chat(tokenizer, "please provide three suggestions about time management", history=history) print(response) ``` The responses can be streamed using `stream_chat`: ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "internlm/internlm2-chat-1_8b" model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.float16, trust_remote_code=True).cuda() tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True) model = model.eval() length = 0 for response, history in model.stream_chat(tokenizer, "Hello", history=[]): print(response[length:], flush=True, end="") length = len(response) ``` ## Deployment ### LMDeploy LMDeploy is a toolkit for compressing, deploying, and serving LLM, developed by the MMRazor and MMDeploy teams. 
```bash pip install lmdeploy ``` You can run batch inference locally with the following python code: ```python import lmdeploy pipe = lmdeploy.pipeline("internlm/internlm2-chat-1_8b") response = pipe(["Hi, pls intro yourself", "Shanghai is"]) print(response) ``` Or you can launch an OpenAI compatible server with the following command: ```bash lmdeploy serve api_server internlm/internlm2-chat-1_8b --model-name internlm2-chat-1_8b --server-port 23333 ``` Then you can send a chat request to the server: ```bash curl http://localhost:23333/v1/chat/completions \ -H "Content-Type: application/json" \ -d '{ "model": "internlm2-chat-1_8b", "messages": [ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "Introduce deep learning to me."} ] }' ``` Find more details in the [LMDeploy documentation](https://lmdeploy.readthedocs.io/en/latest/) ### vLLM Launch OpenAI compatible server with `vLLM>=0.3.2`: ```bash pip install vllm ``` ```bash python -m vllm.entrypoints.openai.api_server --model internlm/internlm2-chat-1_8b --served-model-name internlm2-chat-1_8b --trust-remote-code ``` Then you can send a chat request to the server: ```bash curl http://localhost:8000/v1/chat/completions \ -H "Content-Type: application/json" \ -d '{ "model": "internlm2-chat-1_8b", "messages": [ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "Introduce deep learning to me."} ] }' ``` Find more details in the [vLLM documentation](https://docs.vllm.ai/en/latest/index.html) ## Open Source License The code is licensed under Apache-2.0, while model weights are fully open for academic research and also allow **free** commercial usage. To apply for a commercial license, please fill in the [application form (English)](https://wj.qq.com/s2/12727483/5dba/)/[申请表(中文)](https://wj.qq.com/s2/12725412/f7c1/). For other questions or collaborations, please contact <internlm@pjlab.org.cn>. 
## Citation

```
@misc{cai2024internlm2,
      title={InternLM2 Technical Report},
      author={Zheng Cai and Maosong Cao and Haojiong Chen and Kai Chen and Keyu Chen and Xin Chen and Xun Chen and Zehui Chen and Zhi Chen and Pei Chu and Xiaoyi Dong and Haodong Duan and Qi Fan and Zhaoye Fei and Yang Gao and Jiaye Ge and Chenya Gu and Yuzhe Gu and Tao Gui and Aijia Guo and Qipeng Guo and Conghui He and Yingfan Hu and Ting Huang and Tao Jiang and Penglong Jiao and Zhenjiang Jin and Zhikai Lei and Jiaxing Li and Jingwen Li and Linyang Li and Shuaibin Li and Wei Li and Yining Li and Hongwei Liu and Jiangning Liu and Jiawei Hong and Kaiwen Liu and Kuikun Liu and Xiaoran Liu and Chengqi Lv and Haijun Lv and Kai Lv and Li Ma and Runyuan Ma and Zerun Ma and Wenchang Ning and Linke Ouyang and Jiantao Qiu and Yuan Qu and Fukai Shang and Yunfan Shao and Demin Song and Zifan Song and Zhihao Sui and Peng Sun and Yu Sun and Huanze Tang and Bin Wang and Guoteng Wang and Jiaqi Wang and Jiayu Wang and Rui Wang and Yudong Wang and Ziyi Wang and Xingjian Wei and Qizhen Weng and Fan Wu and Yingtong Xiong and Chao Xu and Ruiliang Xu and Hang Yan and Yirong Yan and Xiaogui Yang and Haochen Ye and Huaiyuan Ying and Jia Yu and Jing Yu and Yuhang Zang and Chuyu Zhang and Li Zhang and Pan Zhang and Peng Zhang and Ruijie Zhang and Shuo Zhang and Songyang Zhang and Wenjian Zhang and Wenwei Zhang and Xingcheng Zhang and Xinyue Zhang and Hui Zhao and Qian Zhao and Xiaomeng Zhao and Fengzhe Zhou and Zaida Zhou and Jingming Zhuo and Yicheng Zou and Xipeng Qiu and Yu Qiao and Dahua Lin},
      year={2024},
      eprint={2403.17297},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

## Introduction

InternLM2-1.8B (书生·浦语-1.8B) is the 1.8 billion parameter version of the second-generation InternLM model series. To facilitate usage and research, InternLM2-1.8B is released as three open-source variants:

- InternLM2-1.8B: a foundation model with high quality and high adaptation flexibility, serving as a good starting point for downstream deep adaptation.
- InternLM2-Chat-1.8B-SFT: a chat model obtained by supervised fine-tuning (SFT) on top of InternLM2-1.8B.
- InternLM2-Chat-1.8B: further aligned on top of InternLM2-Chat-1.8B-SFT through online RLHF. InternLM2-Chat-1.8B shows better instruction following, chat experience, and function calling, and is recommended for downstream applications.

The InternLM2 models have the following technical features:

- Effective support for ultra-long contexts of up to 200K characters: the model almost perfectly locates the "needle in a haystack" in 200K-character inputs, and its performance on long-text tasks such as LongBench and L-Eval is also at a leading level among open-source models.
- Comprehensive performance improvements: all capability dimensions advance over the previous generation, with particularly significant gains in reasoning, mathematics, and code.

## InternLM2-1.8B

### Performance Evaluation

We evaluated InternLM2 on several important benchmarks using the open-source evaluation tool [OpenCompass](https://github.com/internLM/OpenCompass/). Some of the results are shown in the table below; please visit the [OpenCompass leaderboard](https://rank.opencompass.org.cn/leaderboard-llm) for more evaluation results.

| Benchmark | InternLM2-1.8B | InternLM2-Chat-1.8B-SFT | InternLM2-7B | InternLM2-Chat-7B |
| :---: | :---: | :---: | :---: | :---: |
| MMLU | 46.9 | 47.1 | 65.8 | 63.7 |
| AGIEval | 33.4 | 38.8 | 49.9 | 47.2 |
| BBH | 37.5 | 35.2 | 65.0 | 61.2 |
| GSM8K | 31.2 | 39.7 | 70.8 | 70.7 |
| MATH | 5.6 | 11.8 | 20.2 | 23.0 |
| HumanEval | 25.0 | 32.9 | 43.3 | 59.8 |
| MBPP(Sanitized) | 22.2 | 23.2 | 51.8 | 51.4 |

- The evaluation results above were obtained with [OpenCompass](https://github.com/open-compass/opencompass) (data marked with `*` are taken from the original papers); detailed testing settings can be found in the configuration files provided in [OpenCompass](https://github.com/open-compass/opencompass).
- Evaluation numbers may differ across versions of [OpenCompass](https://github.com/open-compass/opencompass), so please refer to the results from the latest version of [OpenCompass](https://github.com/open-compass/opencompass).

**Limitations:** Although we have made efforts to ensure the safety of the model during the training process and to encourage the model to generate text that complies with ethical and legal requirements, the model may still produce unexpected outputs due to its size and probabilistic generation paradigm. For example, the generated responses may contain biases, discrimination, or other harmful content. Please do not propagate such content. We are not responsible for any consequences resulting from the dissemination of harmful information.

### Import from Transformers

Load the InternLM2 1.8B Chat model with the following code:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("internlm/internlm2-chat-1_8b", trust_remote_code=True)
# Setting `torch_dtype=torch.float16` loads the model in float16; otherwise transformers will load it as float32, which can exhaust GPU memory.
model = AutoModelForCausalLM.from_pretrained("internlm/internlm2-chat-1_8b", torch_dtype=torch.float16, trust_remote_code=True).cuda()
model = model.eval()
response, history = model.chat(tokenizer, "你好", history=[])
print(response)
# 你好!有什么我可以帮助你的吗? (Hello! How can I help you?)
response, history = model.chat(tokenizer, "请提供三个管理时间的建议。", history=history)
print(response)
```

If you want streaming generation, you can use the `stream_chat` interface:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "internlm/internlm2-chat-1_8b"
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.float16, trust_remote_code=True).cuda()
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

model = model.eval()
length = 0
for response, history in model.stream_chat(tokenizer, "你好", history=[]):
    print(response[length:], flush=True, end="")
    length = len(response)
```

## Deployment

### LMDeploy

LMDeploy, developed jointly by the MMDeploy and MMRazor teams, is a full suite of lightweight compression, deployment, and serving solutions for LLM tasks.

```bash
pip install lmdeploy
```

You can run batch inference locally with the following Python code:

```python
import lmdeploy
pipe = lmdeploy.pipeline("internlm/internlm2-chat-1_8b")
response = pipe(["Hi, pls intro yourself", "Shanghai is"])
print(response)
```

Or you can launch an OpenAI-compatible server with the following command:

```bash
lmdeploy serve api_server internlm/internlm2-chat-1_8b --server-port 23333
```

Then you can send a chat request to the server:

```bash
curl http://localhost:23333/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
    "model": "internlm2-chat-1_8b",
    "messages": [
    {"role": "system", "content": "你是个友善的AI助手。"},
    {"role": "user", "content": "介绍一下深度学习。"}
    ]
    }'
```

Find more details in the [LMDeploy documentation](https://lmdeploy.readthedocs.io/en/latest/)

### vLLM

Launch an OpenAI-compatible server with `vLLM>=0.3.2`:

```bash
pip install vllm
```

```bash
python -m vllm.entrypoints.openai.api_server --model internlm/internlm2-chat-1_8b --trust-remote-code
```

Then you can send a chat request to the server:

```bash
curl http://localhost:8000/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
    "model": "internlm2-chat-1_8b",
    "messages": [
    {"role": "system", "content": "你是个友善的AI助手。"},
    {"role": "user", "content": "介绍一下深度学习。"}
    ]
    }'
```

Find more details in the [vLLM documentation](https://docs.vllm.ai/en/latest/index.html)

## Open Source License

The code in this repository is open-sourced under the Apache-2.0 license. The model weights are fully open for academic research, and free commercial usage can also be applied for by filling in the [application form](https://wj.qq.com/s2/12725412/f7c1/). For other questions or collaborations, please contact <internlm@pjlab.org.cn>.

## Citation

```
@misc{cai2024internlm2,
      title={InternLM2 Technical Report},
      author={Zheng Cai and Maosong Cao and Haojiong Chen and Kai Chen and Keyu Chen and Xin Chen and Xun Chen and Zehui Chen and Zhi Chen and Pei Chu and Xiaoyi Dong and Haodong Duan and Qi Fan and Zhaoye Fei and Yang Gao and Jiaye Ge and Chenya Gu and Yuzhe Gu and Tao Gui and Aijia Guo and Qipeng Guo and Conghui He and Yingfan Hu and Ting Huang and Tao Jiang and Penglong Jiao and Zhenjiang Jin and Zhikai Lei and Jiaxing Li and Jingwen Li and Linyang Li and Shuaibin Li and Wei Li and Yining Li and Hongwei Liu and Jiangning Liu and Jiawei Hong and Kaiwen Liu and Kuikun Liu and Xiaoran Liu and Chengqi Lv and Haijun Lv and Kai Lv and Li Ma and Runyuan Ma and Zerun Ma and Wenchang Ning and Linke Ouyang and Jiantao Qiu and Yuan Qu and Fukai Shang and Yunfan Shao and Demin Song and Zifan Song and Zhihao Sui and Peng Sun and Yu Sun and Huanze Tang and Bin Wang and Guoteng Wang and Jiaqi Wang and Jiayu Wang and Rui Wang and Yudong Wang and Ziyi Wang and Xingjian Wei and Qizhen Weng and Fan Wu and Yingtong Xiong and Chao Xu and Ruiliang Xu and Hang Yan and Yirong Yan and Xiaogui Yang and Haochen Ye and Huaiyuan Ying and Jia Yu and Jing Yu and Yuhang Zang and
Chuyu Zhang and Li Zhang and Pan Zhang and Peng Zhang and Ruijie Zhang and Shuo Zhang and Songyang Zhang and Wenjian Zhang and Wenwei Zhang and Xingcheng Zhang and Xinyue Zhang and Hui Zhao and Qian Zhao and Xiaomeng Zhao and Fengzhe Zhou and Zaida Zhou and Jingming Zhuo and Yicheng Zou and Xipeng Qiu and Yu Qiao and Dahua Lin}, year={2024}, eprint={2403.17297}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
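For the OpenAI-compatible servers launched in the Deployment sections above (LMDeploy on port 23333, vLLM on port 8000), the same chat request can also be issued from Python. The following is a minimal sketch that assumes the official `openai` client (v1+) is installed and the LMDeploy server from the example above is running; the base URL and model name are taken from those examples.

```python
from openai import OpenAI

# Point the client at the local OpenAI-compatible endpoint (use port 8000 for the vLLM server).
# The api_key value is required by the client but is not checked by the local server.
client = OpenAI(base_url="http://localhost:23333/v1", api_key="none")

completion = client.chat.completions.create(
    model="internlm2-chat-1_8b",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Introduce deep learning to me."},
    ],
)
print(completion.choices[0].message.content)
```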
protectai/xlm-roberta-base-language-detection-onnx
protectai
"2024-03-25T12:05:26Z"
40,523
2
transformers
[ "transformers", "onnx", "xlm-roberta", "text-classification", "language", "language-detection", "multilingual", "ar", "bg", "de", "el", "en", "es", "fr", "hi", "it", "ja", "nl", "pl", "pt", "ru", "sw", "th", "tr", "ur", "vi", "zh", "arxiv:1911.02116", "base_model:papluca/xlm-roberta-base-language-detection", "license:mit", "autotrain_compatible", "region:us" ]
text-classification
"2023-11-14T13:45:09Z"
---
language:
- multilingual
- ar
- bg
- de
- el
- en
- es
- fr
- hi
- it
- ja
- nl
- pl
- pt
- ru
- sw
- th
- tr
- ur
- vi
- zh
license: mit
inference: false
tags:
- language
- language-detection
metrics:
- accuracy
- f1
base_model: papluca/xlm-roberta-base-language-detection
model-index:
- name: xlm-roberta-base-language-detection
  results: []
pipeline_tag: text-classification
---

# ONNX version of papluca/xlm-roberta-base-language-detection

**This model is a conversion of [papluca/xlm-roberta-base-language-detection](https://huggingface.co/papluca/xlm-roberta-base-language-detection) to ONNX** format using the [🤗 Optimum](https://huggingface.co/docs/optimum/index) library.

## Model description

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the [Language Identification](https://huggingface.co/datasets/papluca/language-identification#additional-information) dataset. This model is an XLM-RoBERTa transformer model with a classification head on top (i.e. a linear layer on top of the pooled output). For additional information please refer to the [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) model card or to the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Conneau et al.

## Intended uses & limitations

You can directly use this model as a language detector, i.e. for sequence classification tasks. Currently, it supports the following 20 languages: `arabic (ar), bulgarian (bg), german (de), modern greek (el), english (en), spanish (es), french (fr), hindi (hi), italian (it), japanese (ja), dutch (nl), polish (pl), portuguese (pt), russian (ru), swahili (sw), thai (th), turkish (tr), urdu (ur), vietnamese (vi), and chinese (zh)`

## Usage

### Optimum

Loading the model requires the [🤗 Optimum](https://huggingface.co/docs/optimum/index) library to be installed.

```python
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

tokenizer = AutoTokenizer.from_pretrained("laiyer/xlm-roberta-base-language-detection-onnx")
model = ORTModelForSequenceClassification.from_pretrained("laiyer/xlm-roberta-base-language-detection-onnx")
classifier = pipeline(
    task="text-classification",
    model=model,
    tokenizer=tokenizer,
    top_k=None,
)

classifier_output = classifier("It's not toxic comment")
print(classifier_output)
```

### LLM Guard

[Language scanner](https://llm-guard.com/input_scanners/language/)

## Community

Join our Slack to give us feedback, connect with the maintainers and fellow users, ask questions, or engage in discussions about LLM security!

<a href="https://join.slack.com/t/laiyerai/shared_invite/zt-28jv3ci39-sVxXrLs3rQdaN3mIl9IT~w"><img src="https://github.com/laiyer-ai/llm-guard/blob/main/docs/assets/join-our-slack-community.png?raw=true" width="200"></a>
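As a usage note for the Optimum pipeline above: with `top_k=None`, the text-classification pipeline returns, for each input, a list of `{"label", "score"}` dictionaries covering all 20 supported language labels. The sketch below is a minimal example that reuses the `classifier` object defined in the Usage section; the sample sentences are illustrative only.

```python
# Assumes `classifier` was created as in the Usage section above.
texts = ["Ceci est un exemple en français.", "Dies ist ein Beispiel auf Deutsch."]
outputs = classifier(texts)  # one list of label/score dicts per input text

for text, scores in zip(texts, outputs):
    best = max(scores, key=lambda s: s["score"])  # highest-probability language code
    print(f"{text!r} -> {best['label']} ({best['score']:.3f})")
```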
PulseWave/INVOICE-DISPUTE
PulseWave
"2024-03-01T19:13:56Z"
40,436
0
setfit
[ "setfit", "safetensors", "mpnet", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "region:us" ]
text-classification
"2024-03-01T19:11:13Z"
--- library_name: setfit tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer metrics: - accuracy widget: [] pipeline_tag: text-classification inference: true --- # SetFit This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit <!-- - **Sentence Transformer:** [Unknown](https://huggingface.co/unknown) --> - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 512 tokens - **Number of Classes:** 2 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("setfit_model_id") # Run inference preds = model("I loved the spiderman movie!") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Framework Versions - Python: 3.11.7 - SetFit: 1.0.3 - Sentence Transformers: 2.3.1 - Transformers: 4.37.2 - PyTorch: 2.2.0 - Datasets: 2.16.1 - Tokenizers: 0.15.1 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
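As a concrete usage sketch for this checkpoint, the snippet below loads the model under its repository id (`PulseWave/INVOICE-DISPUTE`, taken from the model metadata) instead of the `setfit_model_id` placeholder shown above. The example texts are hypothetical, and the meaning of the two output classes is not documented in this card, so treat the returned labels as checkpoint-specific.

```python
from setfit import SetFitModel

# Load this specific checkpoint from the 🤗 Hub.
model = SetFitModel.from_pretrained("PulseWave/INVOICE-DISPUTE")

# Hypothetical invoice-related inputs; the class meanings are an assumption.
preds = model.predict([
    "We are disputing invoice #4512 because the quantities do not match the purchase order.",
    "Payment for invoice #4512 has been scheduled for Friday.",
])
print(preds)
```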
tiiuae/falcon-11B
tiiuae
"2024-05-14T08:12:37Z"
40,371
187
transformers
[ "transformers", "safetensors", "falcon", "text-generation", "conversational", "custom_code", "en", "de", "es", "fr", "it", "nl", "pl", "pt", "ro", "cs", "dataset:tiiuae/falcon-refinedweb", "arxiv:2005.14165", "arxiv:2104.09864", "arxiv:1911.02150", "arxiv:2307.08691", "arxiv:2311.16867", "license:unknown", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-05-09T08:11:59Z"
---
datasets:
- tiiuae/falcon-refinedweb
language:
- en
- de
- es
- fr
- it
- nl
- pl
- pt
- ro
- cs
inference: false
license: unknown
---

# 🚀 Falcon2-11B

**Falcon2-11B is an 11B-parameter causal decoder-only model built by [TII](https://www.tii.ae) and trained on over 5,000B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) enhanced with curated corpora. The model is made available under the [TII Falcon License 2.0](https://falconllm-staging.tii.ae/falcon-2-terms-and-conditions.html), the permissive Apache 2.0-based software license which includes an [acceptable use policy](https://falconllm-staging.tii.ae/falcon-2-acceptable-use-policy.html) that promotes the responsible use of AI.**

*Paper coming soon 😊.*

🤗 To get started with Falcon (inference, finetuning, quantization, etc.), we recommend reading [this great blogpost from HF](https://huggingface.co/blog/falcon)!

⚠️ **This is a raw, pretrained model, which should be further finetuned for most use cases.**

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch

model = "tiiuae/falcon-11B"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
)
sequences = pipeline(
    "Can you explain the concepts of Quantum Computing?",
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```

💥 **Falcon LLMs require PyTorch 2.0 for use with `transformers`!**

For fast inference with Falcon, check out [Text Generation Inference](https://github.com/huggingface/text-generation-inference)! Read more in this [blogpost](https://huggingface.co/blog/falcon).

# Model Card for Falcon2-11B

## Model Details

### Model Description

- **Developed by:** [https://www.tii.ae](https://www.tii.ae)
- **Model type:** Causal decoder-only
- **Language(s) (NLP):** English, German, Spanish, French, Italian, Portuguese, Polish, Dutch, Romanian, Czech, Swedish
- **License:** [TII Falcon License 2.0](https://falconllm-staging.tii.ae/falcon-2-terms-and-conditions.html)

### Model Source

- **Paper:** *coming soon*.

## Uses

### Direct Use

Research on large language models; as a foundation for further specialization and finetuning for specific use cases (e.g., summarization, text generation, chatbot, etc.)

### Out-of-Scope Use

Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.

## Bias, Risks, and Limitations

Falcon2-11B is trained mostly on English, but also on German, Spanish, French, Italian, Portuguese, Polish, Dutch, Romanian, Czech, and Swedish. It will not generalize appropriately to other languages. Furthermore, as it is trained on large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.

### Recommendations

We recommend that users of Falcon2-11B consider finetuning it for the specific set of tasks of interest, and that guardrails and appropriate precautions be taken for any production use.
## How to Get Started with the Model

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch

model = "tiiuae/falcon-11B"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
sequences = pipeline(
    "Can you explain the concepts of Quantum Computing?",
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```

## Training Details

### Training Data

Falcon2-11B was trained over 5,000B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), a high-quality filtered and deduplicated web dataset which we enhanced with curated corpora. It followed a four-stage training strategy. The first three stages were focused on increasing the context length, from 2048 to 4096 and finally to 8192 tokens. The last stage aimed to further enhance performance using only high-quality data.

Overall, the data sources included RefinedWeb-English, RefinedWeb-Europe (cs, de, es, fr, it, nl, pl, pt, ro, sv), high quality technical data, code data, and conversational data extracted from public sources.

The training stages were as follows:

| **Stage** | **Context length** | **Tokens** |
|--------------|-----------------|-------------|
| Stage 1 | 2048 | 4500 B |
| Stage 2 | 4096 | 250 B |
| Stage 3 | 8192 | 250 B |
| Stage 4 | 8192 | 500 B |

The data was tokenized with the Falcon-[7B](https://huggingface.co/tiiuae/falcon-7b)/[11B](https://huggingface.co/tiiuae/falcon-11B) tokenizer.

### Training Procedure

Falcon2-11B was trained on 1024 A100 40GB GPUs for the majority of the training, using a 3D parallelism strategy (TP=8, PP=1, DP=128) combined with ZeRO and Flash-Attention 2.

#### Training Hyperparameters

| **Hyperparameter** | **Value** | **Comment** |
|--------------------|------------|-------------------------------------------|
| Precision | `bfloat16` | |
| Optimizer | AdamW | |
| Max learning rate | 3.7e-4 | Following a linear warm-up, then cosine decay to 1.89e-5 across 4500 B tokens. |
| Weight decay | 1e-1 | |
| Z-loss | 1e-4 | |
| Batch size | Variable | Batch size was gradually increased during the training |

#### Speeds, Sizes, Times

The model training took roughly two months.

## Evaluation

| English Benchmark | **Value** |
|--------------------|------------|
| ARC-Challenge-25shots | 59.73 |
| HellaSwag-10shots | 82.91 |
| MMLU-5shots | 58.37 |
| Winogrande-5shots | 78.30 |
| TruthfulQA-0shot | 52.56 |
| GSM8k-5shots | 53.83 |
| ARC-Challenge-0shot | 50.17 |
| ARC-Easy-0shot | 77.78 |
| Hellaswag-0shot | 82.07 |

We thank the leaderboard team from HuggingFace for providing an official evaluation of our model on the leaderboard tasks.

## Technical Specifications

### Model Architecture and Objective

Falcon2-11B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token).

The architecture is broadly adapted from the GPT-3 paper ([Brown et al., 2020](https://arxiv.org/abs/2005.14165)), with the following differences:

* **Positional embeddings:** rotary ([Su et al., 2021](https://arxiv.org/abs/2104.09864));
* **Attention:** multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)) and FlashAttention-2 ([Dao, 2023](https://arxiv.org/abs/2307.08691));
* **Decoder-block:** parallel attention/MLP.
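To make the decoder-block difference concrete, below is a minimal PyTorch sketch of a parallel attention/MLP layout: both branches read the same layer-normalized hidden states and their outputs are summed into the residual, in contrast to GPT-3's sequential blocks. This is an illustrative sketch only, not the Falcon2-11B implementation; it omits rotary embeddings, multiquery attention, and FlashAttention-2.

```python
import torch
import torch.nn as nn

class ParallelDecoderBlock(nn.Module):
    """Illustrative sketch: attention and MLP branches share one normalized input
    and are added to the residual in parallel (x + attn(ln(x)) + mlp(ln(x)))."""

    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.ln = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )

    def forward(self, x: torch.Tensor, attn_mask=None) -> torch.Tensor:
        h = self.ln(x)
        attn_out, _ = self.attn(h, h, h, attn_mask=attn_mask, need_weights=False)
        return x + attn_out + self.mlp(h)  # parallel residual update
```

Because the two branches depend only on the shared normalized input, they can be computed concurrently, which is the main motivation for this layout in large decoder-only models.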
| **Hyperparameter** | **Value** | **Comment** |
|--------------------|-----------|----------------------------------------|
| Layers | 60 | |
| `d_model` | 4096 | |
| `head_dim` | 128 | |
| Vocabulary | 65024 | |
| Sequence length | 8192 | During stages 3 and 4 |

### Compute Infrastructure

#### Hardware

Falcon2-11B was trained on AWS SageMaker, using on average 1024 A100 40GB GPUs in 128 p4d instances.

#### Software

Falcon2-11B was trained using a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO, high-performance Triton kernels and FlashAttention-2. More details about the distributed training strategy can be found in [Almazrouei et al.](https://arxiv.org/abs/2311.16867).

## Citation

*Paper coming soon* 😊.

## License

Falcon2-11B is licensed under the [TII Falcon License 2.0](https://falconllm-staging.tii.ae/falcon-2-terms-and-conditions.html), the permissive Apache 2.0-based software license which includes an [acceptable use policy](https://falconllm-staging.tii.ae/falcon-2-acceptable-use-policy.html) that promotes the responsible use of AI.

## Contact

falconllm@tii.ae
ptx0/sd3-reality-mix
ptx0
"2024-06-17T05:33:06Z"
40,364
13
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "full", "base_model:stabilityai/stable-diffusion-3-medium-diffusers", "license:creativeml-openrail-m", "diffusers:StableDiffusion3Pipeline", "region:us" ]
text-to-image
"2024-06-14T20:55:44Z"
--- license: creativeml-openrail-m base_model: "stabilityai/stable-diffusion-3-medium-diffusers" tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - full inference: true widget: - text: 'unconditional (blank prompt)' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_0_0.png - text: 'unconditional (blank prompt)' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_1_1.png - text: 'unconditional (blank prompt)' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_2_2.png - text: 'unconditional (blank prompt)' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_3_3.png - text: 'Alien planet, strange rock formations, glowing plants, bizarre creatures, surreal atmosphere' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_4_0.png - text: 'Alien planet, strange rock formations, glowing plants, bizarre creatures, surreal atmosphere' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_5_1.png - text: 'Alien planet, strange rock formations, glowing plants, bizarre creatures, surreal atmosphere' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_6_2.png - text: 'Alien planet, strange rock formations, glowing plants, bizarre creatures, surreal atmosphere' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_7_3.png - text: 'Alien marketplace, bizarre creatures, exotic goods, vibrant colors, otherworldly atmosphere' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_8_0.png - text: 'Alien marketplace, bizarre creatures, exotic goods, vibrant colors, otherworldly atmosphere' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_9_1.png - text: 'Alien marketplace, bizarre creatures, exotic goods, vibrant colors, otherworldly atmosphere' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_10_2.png - text: 'Alien marketplace, bizarre creatures, exotic goods, vibrant colors, otherworldly atmosphere' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_11_3.png - text: 'Child holding a balloon, happy expression, colorful balloons, sunny day, high detail' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_12_0.png - text: 'Child holding a balloon, happy expression, colorful balloons, sunny day, high detail' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_13_1.png - text: 'Child holding a balloon, happy expression, colorful balloons, sunny day, high detail' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_14_2.png - text: 'Child holding a balloon, happy expression, colorful balloons, sunny day, high detail' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_15_3.png - text: 'a 4-panel comic strip showing an orange cat saying the words ''HELP'' and ''LASAGNA''' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_16_0.png - text: 'a 4-panel comic strip showing an orange cat saying the words ''HELP'' and ''LASAGNA''' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_17_1.png - text: 'a 4-panel comic strip showing an orange cat saying the words ''HELP'' and ''LASAGNA''' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_18_2.png 
- text: 'a 4-panel comic strip showing an orange cat saying the words ''HELP'' and ''LASAGNA''' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_19_3.png - text: 'a hand is holding a comic book with a cover that reads ''The Adventures of Superhero''' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_20_0.png - text: 'a hand is holding a comic book with a cover that reads ''The Adventures of Superhero''' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_21_1.png - text: 'a hand is holding a comic book with a cover that reads ''The Adventures of Superhero''' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_22_2.png - text: 'a hand is holding a comic book with a cover that reads ''The Adventures of Superhero''' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_23_3.png - text: 'Underground cave filled with crystals, glowing lights, reflective surfaces, fantasy environment, high detail' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_24_0.png - text: 'Underground cave filled with crystals, glowing lights, reflective surfaces, fantasy environment, high detail' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_25_1.png - text: 'Underground cave filled with crystals, glowing lights, reflective surfaces, fantasy environment, high detail' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_26_2.png - text: 'Underground cave filled with crystals, glowing lights, reflective surfaces, fantasy environment, high detail' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_27_3.png - text: 'Bustling cyberpunk bazaar, vendors, neon signs, advanced tech, crowded, high detail' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_28_0.png - text: 'Bustling cyberpunk bazaar, vendors, neon signs, advanced tech, crowded, high detail' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_29_1.png - text: 'Bustling cyberpunk bazaar, vendors, neon signs, advanced tech, crowded, high detail' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_30_2.png - text: 'Bustling cyberpunk bazaar, vendors, neon signs, advanced tech, crowded, high detail' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_31_3.png - text: 'Cyberpunk hacker in a dark room, neon glow, multiple screens, intense focus, high detail' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_32_0.png - text: 'Cyberpunk hacker in a dark room, neon glow, multiple screens, intense focus, high detail' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_33_1.png - text: 'Cyberpunk hacker in a dark room, neon glow, multiple screens, intense focus, high detail' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_34_2.png - text: 'Cyberpunk hacker in a dark room, neon glow, multiple screens, intense focus, high detail' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_35_3.png - text: 'a cybernetic anne of green gables with neural implant and bio mech augmentations' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_36_0.png - text: 'a cybernetic anne of green gables with neural implant and bio mech augmentations' parameters: 
negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_37_1.png - text: 'a cybernetic anne of green gables with neural implant and bio mech augmentations' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_38_2.png - text: 'a cybernetic anne of green gables with neural implant and bio mech augmentations' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_39_3.png - text: 'Post-apocalyptic cityscape, ruined buildings, overgrown vegetation, dark and gritty, high detail' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_40_0.png - text: 'Post-apocalyptic cityscape, ruined buildings, overgrown vegetation, dark and gritty, high detail' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_41_1.png - text: 'Post-apocalyptic cityscape, ruined buildings, overgrown vegetation, dark and gritty, high detail' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_42_2.png - text: 'Post-apocalyptic cityscape, ruined buildings, overgrown vegetation, dark and gritty, high detail' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_43_3.png - text: 'Magical castle in a lush forest, glowing windows, fantasy architecture, high resolution, detailed textures' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_44_0.png - text: 'Magical castle in a lush forest, glowing windows, fantasy architecture, high resolution, detailed textures' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_45_1.png - text: 'Magical castle in a lush forest, glowing windows, fantasy architecture, high resolution, detailed textures' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_46_2.png - text: 'Magical castle in a lush forest, glowing windows, fantasy architecture, high resolution, detailed textures' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_47_3.png - text: 'Ruins of an ancient temple in an enchanted forest, glowing runes, mystical creatures, high detail' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_48_0.png - text: 'Ruins of an ancient temple in an enchanted forest, glowing runes, mystical creatures, high detail' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_49_1.png - text: 'Ruins of an ancient temple in an enchanted forest, glowing runes, mystical creatures, high detail' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_50_2.png - text: 'Ruins of an ancient temple in an enchanted forest, glowing runes, mystical creatures, high detail' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_51_3.png - text: 'Mystical forest, glowing plants, fairies, magical creatures, fantasy art, high detail' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_52_0.png - text: 'Mystical forest, glowing plants, fairies, magical creatures, fantasy art, high detail' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_53_1.png - text: 'Mystical forest, glowing plants, fairies, magical creatures, fantasy art, high detail' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_54_2.png - text: 'Mystical forest, glowing plants, fairies, magical creatures, fantasy art, high detail' parameters: negative_prompt: 'blurry, cropped, ugly' 
output: url: ./assets/image_55_3.png - text: 'Magical garden with glowing flowers, fairies, serene atmosphere, detailed plants, high resolution' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_56_0.png - text: 'Magical garden with glowing flowers, fairies, serene atmosphere, detailed plants, high resolution' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_57_1.png - text: 'Magical garden with glowing flowers, fairies, serene atmosphere, detailed plants, high resolution' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_58_2.png - text: 'Magical garden with glowing flowers, fairies, serene atmosphere, detailed plants, high resolution' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_59_3.png - text: 'Whimsical garden filled with fairies, magical plants, sparkling lights, serene atmosphere, high detail' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_60_0.png - text: 'Whimsical garden filled with fairies, magical plants, sparkling lights, serene atmosphere, high detail' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_61_1.png - text: 'Whimsical garden filled with fairies, magical plants, sparkling lights, serene atmosphere, high detail' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_62_2.png - text: 'Whimsical garden filled with fairies, magical plants, sparkling lights, serene atmosphere, high detail' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_63_3.png - text: 'Majestic dragon soaring through the sky, detailed scales, dynamic pose, fantasy art, high resolution' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_64_0.png - text: 'Majestic dragon soaring through the sky, detailed scales, dynamic pose, fantasy art, high resolution' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_65_1.png - text: 'Majestic dragon soaring through the sky, detailed scales, dynamic pose, fantasy art, high resolution' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_66_2.png - text: 'Majestic dragon soaring through the sky, detailed scales, dynamic pose, fantasy art, high resolution' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_67_3.png - text: 'Fantasy world, floating islands in the sky, waterfalls, lush vegetation, detailed landscape, high resolution' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_68_0.png - text: 'Fantasy world, floating islands in the sky, waterfalls, lush vegetation, detailed landscape, high resolution' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_69_1.png - text: 'Fantasy world, floating islands in the sky, waterfalls, lush vegetation, detailed landscape, high resolution' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_70_2.png - text: 'Fantasy world, floating islands in the sky, waterfalls, lush vegetation, detailed landscape, high resolution' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_71_3.png - text: 'Futuristic city skyline at night, neon lights, cyberpunk style, high contrast, sharp focus' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_72_0.png - text: 'Futuristic city skyline at night, neon lights, cyberpunk style, high 
contrast, sharp focus' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_73_1.png - text: 'Futuristic city skyline at night, neon lights, cyberpunk style, high contrast, sharp focus' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_74_2.png - text: 'Futuristic city skyline at night, neon lights, cyberpunk style, high contrast, sharp focus' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_75_3.png - text: 'Space battle scene, starships fighting, laser beams, explosions, cosmic background' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_76_0.png - text: 'Space battle scene, starships fighting, laser beams, explosions, cosmic background' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_77_1.png - text: 'Space battle scene, starships fighting, laser beams, explosions, cosmic background' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_78_2.png - text: 'Space battle scene, starships fighting, laser beams, explosions, cosmic background' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_79_3.png - text: 'Abandoned fairground at night, eerie rides, ghostly figures, fog, dark atmosphere, high detail' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_80_0.png - text: 'Abandoned fairground at night, eerie rides, ghostly figures, fog, dark atmosphere, high detail' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_81_1.png - text: 'Abandoned fairground at night, eerie rides, ghostly figures, fog, dark atmosphere, high detail' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_82_2.png - text: 'Abandoned fairground at night, eerie rides, ghostly figures, fog, dark atmosphere, high detail' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_83_3.png - text: 'Spooky haunted mansion on a hill, dark and eerie, glowing windows, ghostly atmosphere, high detail' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_84_0.png - text: 'Spooky haunted mansion on a hill, dark and eerie, glowing windows, ghostly atmosphere, high detail' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_85_1.png - text: 'Spooky haunted mansion on a hill, dark and eerie, glowing windows, ghostly atmosphere, high detail' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_86_2.png - text: 'Spooky haunted mansion on a hill, dark and eerie, glowing windows, ghostly atmosphere, high detail' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_87_3.png - text: 'a hardcover physics textbook that is called PHYSICS FOR DUMMIES' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_88_0.png - text: 'a hardcover physics textbook that is called PHYSICS FOR DUMMIES' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_89_1.png - text: 'a hardcover physics textbook that is called PHYSICS FOR DUMMIES' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_90_2.png - text: 'a hardcover physics textbook that is called PHYSICS FOR DUMMIES' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_91_3.png - text: 'Epic medieval battle, knights in armor, dynamic action, detailed landscape, high 
resolution' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_92_0.png - text: 'Epic medieval battle, knights in armor, dynamic action, detailed landscape, high resolution' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_93_1.png - text: 'Epic medieval battle, knights in armor, dynamic action, detailed landscape, high resolution' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_94_2.png - text: 'Epic medieval battle, knights in armor, dynamic action, detailed landscape, high resolution' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_95_3.png - text: 'Bustling medieval market with merchants, knights, and jesters, vibrant colors, detailed' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_96_0.png - text: 'Bustling medieval market with merchants, knights, and jesters, vibrant colors, detailed' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_97_1.png - text: 'Bustling medieval market with merchants, knights, and jesters, vibrant colors, detailed' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_98_2.png - text: 'Bustling medieval market with merchants, knights, and jesters, vibrant colors, detailed' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_99_3.png - text: 'Cozy medieval tavern, warm firelight, adventurers drinking, detailed interior, rustic atmosphere' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_100_0.png - text: 'Cozy medieval tavern, warm firelight, adventurers drinking, detailed interior, rustic atmosphere' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_101_1.png - text: 'Cozy medieval tavern, warm firelight, adventurers drinking, detailed interior, rustic atmosphere' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_102_2.png - text: 'Cozy medieval tavern, warm firelight, adventurers drinking, detailed interior, rustic atmosphere' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_103_3.png - text: 'Futuristic city skyline at night, neon lights, cyberpunk style, high contrast, sharp focus' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_104_0.png - text: 'Futuristic city skyline at night, neon lights, cyberpunk style, high contrast, sharp focus' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_105_1.png - text: 'Futuristic city skyline at night, neon lights, cyberpunk style, high contrast, sharp focus' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_106_2.png - text: 'Futuristic city skyline at night, neon lights, cyberpunk style, high contrast, sharp focus' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_107_3.png - text: 'Forest with neon-lit trees, glowing plants, bioluminescence, surreal atmosphere, high detail' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_108_0.png - text: 'Forest with neon-lit trees, glowing plants, bioluminescence, surreal atmosphere, high detail' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_109_1.png - text: 'Forest with neon-lit trees, glowing plants, bioluminescence, surreal atmosphere, high detail' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: 
./assets/image_110_2.png - text: 'Forest with neon-lit trees, glowing plants, bioluminescence, surreal atmosphere, high detail' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_111_3.png - text: 'Bright neon sign in a busy city street, ''Open 24 Hours'', bold typography, glowing lights' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_112_0.png - text: 'Bright neon sign in a busy city street, ''Open 24 Hours'', bold typography, glowing lights' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_113_1.png - text: 'Bright neon sign in a busy city street, ''Open 24 Hours'', bold typography, glowing lights' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_114_2.png - text: 'Bright neon sign in a busy city street, ''Open 24 Hours'', bold typography, glowing lights' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_115_3.png - text: 'Vibrant neon sign, ''Bar'', bold typography, dark background, glowing lights, detailed design' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_116_0.png - text: 'Vibrant neon sign, ''Bar'', bold typography, dark background, glowing lights, detailed design' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_117_1.png - text: 'Vibrant neon sign, ''Bar'', bold typography, dark background, glowing lights, detailed design' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_118_2.png - text: 'Vibrant neon sign, ''Bar'', bold typography, dark background, glowing lights, detailed design' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_119_3.png - text: 'Pirate ship on the high seas, stormy weather, detailed sails, dramatic waves, photorealistic' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_120_0.png - text: 'Pirate ship on the high seas, stormy weather, detailed sails, dramatic waves, photorealistic' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_121_1.png - text: 'Pirate ship on the high seas, stormy weather, detailed sails, dramatic waves, photorealistic' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_122_2.png - text: 'Pirate ship on the high seas, stormy weather, detailed sails, dramatic waves, photorealistic' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_123_3.png - text: 'Pirate discovering a treasure chest, detailed gold coins, tropical island, dramatic lighting' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_124_0.png - text: 'Pirate discovering a treasure chest, detailed gold coins, tropical island, dramatic lighting' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_125_1.png - text: 'Pirate discovering a treasure chest, detailed gold coins, tropical island, dramatic lighting' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_126_2.png - text: 'Pirate discovering a treasure chest, detailed gold coins, tropical island, dramatic lighting' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_127_3.png - text: 'a photograph of a woman experiencing a psychedelic trip. 
trippy, 8k, uhd, fractal' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_128_0.png - text: 'a photograph of a woman experiencing a psychedelic trip. trippy, 8k, uhd, fractal' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_129_1.png - text: 'a photograph of a woman experiencing a psychedelic trip. trippy, 8k, uhd, fractal' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_130_2.png - text: 'a photograph of a woman experiencing a psychedelic trip. trippy, 8k, uhd, fractal' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_131_3.png - text: 'Cozy cafe on a rainy day, people sipping coffee, warm lights, reflections on wet pavement, photorealistic' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_132_0.png - text: 'Cozy cafe on a rainy day, people sipping coffee, warm lights, reflections on wet pavement, photorealistic' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_133_1.png - text: 'Cozy cafe on a rainy day, people sipping coffee, warm lights, reflections on wet pavement, photorealistic' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_134_2.png - text: 'Cozy cafe on a rainy day, people sipping coffee, warm lights, reflections on wet pavement, photorealistic' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_135_3.png - text: '1980s arcade, neon lights, vintage game machines, kids playing, vibrant colors, nostalgic atmosphere' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_136_0.png - text: '1980s arcade, neon lights, vintage game machines, kids playing, vibrant colors, nostalgic atmosphere' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_137_1.png - text: '1980s arcade, neon lights, vintage game machines, kids playing, vibrant colors, nostalgic atmosphere' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_138_2.png - text: '1980s arcade, neon lights, vintage game machines, kids playing, vibrant colors, nostalgic atmosphere' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_139_3.png - text: '1980s game room with vintage arcade machines, neon lights, vibrant colors, nostalgic feel' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_140_0.png - text: '1980s game room with vintage arcade machines, neon lights, vibrant colors, nostalgic feel' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_141_1.png - text: '1980s game room with vintage arcade machines, neon lights, vibrant colors, nostalgic feel' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_142_2.png - text: '1980s game room with vintage arcade machines, neon lights, vibrant colors, nostalgic feel' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_143_3.png - text: 'Robot blacksmith forging metal, sparks flying, detailed workshop, futuristic and medieval blend' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_144_0.png - text: 'Robot blacksmith forging metal, sparks flying, detailed workshop, futuristic and medieval blend' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_145_1.png - text: 'Robot blacksmith forging metal, sparks flying, detailed workshop, futuristic and 
medieval blend' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_146_2.png - text: 'Robot blacksmith forging metal, sparks flying, detailed workshop, futuristic and medieval blend' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_147_3.png - text: 'Sleek robot performing a dance, futuristic theater, holographic effects, detailed, high resolution' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_148_0.png - text: 'Sleek robot performing a dance, futuristic theater, holographic effects, detailed, high resolution' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_149_1.png - text: 'Sleek robot performing a dance, futuristic theater, holographic effects, detailed, high resolution' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_150_2.png - text: 'Sleek robot performing a dance, futuristic theater, holographic effects, detailed, high resolution' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_151_3.png - text: 'High-tech factory where robots are assembled, detailed machinery, futuristic setting, high detail' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_152_0.png - text: 'High-tech factory where robots are assembled, detailed machinery, futuristic setting, high detail' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_153_1.png - text: 'High-tech factory where robots are assembled, detailed machinery, futuristic setting, high detail' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_154_2.png - text: 'High-tech factory where robots are assembled, detailed machinery, futuristic setting, high detail' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_155_3.png - text: 'Garden tended by robots, mechanical plants, colorful flowers, futuristic setting, high detail' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_156_0.png - text: 'Garden tended by robots, mechanical plants, colorful flowers, futuristic setting, high detail' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_157_1.png - text: 'Garden tended by robots, mechanical plants, colorful flowers, futuristic setting, high detail' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_158_2.png - text: 'Garden tended by robots, mechanical plants, colorful flowers, futuristic setting, high detail' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_159_3.png - text: 'Cute robotic pet, futuristic home, sleek design, detailed features, friendly and animated' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_160_0.png - text: 'Cute robotic pet, futuristic home, sleek design, detailed features, friendly and animated' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_161_1.png - text: 'Cute robotic pet, futuristic home, sleek design, detailed features, friendly and animated' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_162_2.png - text: 'Cute robotic pet, futuristic home, sleek design, detailed features, friendly and animated' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_163_3.png - text: 'cctv trail camera night time security picture of a wendigo in the woods' parameters: negative_prompt: 
'blurry, cropped, ugly' output: url: ./assets/image_164_0.png - text: 'cctv trail camera night time security picture of a wendigo in the woods' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_165_1.png - text: 'cctv trail camera night time security picture of a wendigo in the woods' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_166_2.png - text: 'cctv trail camera night time security picture of a wendigo in the woods' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_167_3.png - text: 'Astronaut exploring an alien planet, detailed landscape, futuristic suit, cosmic background' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_168_0.png - text: 'Astronaut exploring an alien planet, detailed landscape, futuristic suit, cosmic background' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_169_1.png - text: 'Astronaut exploring an alien planet, detailed landscape, futuristic suit, cosmic background' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_170_2.png - text: 'Astronaut exploring an alien planet, detailed landscape, futuristic suit, cosmic background' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_171_3.png - text: 'Futuristic space station orbiting a distant exoplanet, sleek design, detailed structures, cosmic backdrop' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_172_0.png - text: 'Futuristic space station orbiting a distant exoplanet, sleek design, detailed structures, cosmic backdrop' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_173_1.png - text: 'Futuristic space station orbiting a distant exoplanet, sleek design, detailed structures, cosmic backdrop' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_174_2.png - text: 'Futuristic space station orbiting a distant exoplanet, sleek design, detailed structures, cosmic backdrop' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_175_3.png - text: 'a person holding a sign that reads ''SOON''' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_176_0.png - text: 'a person holding a sign that reads ''SOON''' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_177_1.png - text: 'a person holding a sign that reads ''SOON''' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_178_2.png - text: 'a person holding a sign that reads ''SOON''' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_179_3.png - text: 'Steampunk airship in the sky, intricate design, Victorian aesthetics, dynamic scene, high detail' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_180_0.png - text: 'Steampunk airship in the sky, intricate design, Victorian aesthetics, dynamic scene, high detail' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_181_1.png - text: 'Steampunk airship in the sky, intricate design, Victorian aesthetics, dynamic scene, high detail' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_182_2.png - text: 'Steampunk airship in the sky, intricate design, Victorian aesthetics, dynamic scene, high detail' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_183_3.png - 
text: 'Steampunk inventor in a workshop, intricate gadgets, Victorian attire, mechanical arm, goggles' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_184_0.png - text: 'Steampunk inventor in a workshop, intricate gadgets, Victorian attire, mechanical arm, goggles' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_185_1.png - text: 'Steampunk inventor in a workshop, intricate gadgets, Victorian attire, mechanical arm, goggles' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_186_2.png - text: 'Steampunk inventor in a workshop, intricate gadgets, Victorian attire, mechanical arm, goggles' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_187_3.png - text: 'Stormy ocean with towering waves, dramatic skies, detailed water, intense atmosphere, high resolution' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_188_0.png - text: 'Stormy ocean with towering waves, dramatic skies, detailed water, intense atmosphere, high resolution' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_189_1.png - text: 'Stormy ocean with towering waves, dramatic skies, detailed water, intense atmosphere, high resolution' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_190_2.png - text: 'Stormy ocean with towering waves, dramatic skies, detailed water, intense atmosphere, high resolution' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_191_3.png - text: 'Dramatic stormy sea, lighthouse in the distance, lightning striking, dark clouds, high detail' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_192_0.png - text: 'Dramatic stormy sea, lighthouse in the distance, lightning striking, dark clouds, high detail' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_193_1.png - text: 'Dramatic stormy sea, lighthouse in the distance, lightning striking, dark clouds, high detail' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_194_2.png - text: 'Dramatic stormy sea, lighthouse in the distance, lightning striking, dark clouds, high detail' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_195_3.png - text: 'Graffiti artist creating a mural, vibrant colors, urban setting, dynamic action, high resolution' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_196_0.png - text: 'Graffiti artist creating a mural, vibrant colors, urban setting, dynamic action, high resolution' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_197_1.png - text: 'Graffiti artist creating a mural, vibrant colors, urban setting, dynamic action, high resolution' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_198_2.png - text: 'Graffiti artist creating a mural, vibrant colors, urban setting, dynamic action, high resolution' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_199_3.png - text: 'Urban alleyway filled with vibrant graffiti art, tags and murals, realistic textures' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_200_0.png - text: 'Urban alleyway filled with vibrant graffiti art, tags and murals, realistic textures' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_201_1.png - text: 'Urban 
alleyway filled with vibrant graffiti art, tags and murals, realistic textures' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_202_2.png - text: 'Urban alleyway filled with vibrant graffiti art, tags and murals, realistic textures' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_203_3.png - text: 'Urban street sign, ''Main Street'', bold typography, realistic textures, weathered look' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_204_0.png - text: 'Urban street sign, ''Main Street'', bold typography, realistic textures, weathered look' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_205_1.png - text: 'Urban street sign, ''Main Street'', bold typography, realistic textures, weathered look' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_206_2.png - text: 'Urban street sign, ''Main Street'', bold typography, realistic textures, weathered look' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_207_3.png - text: 'Classic car show with vintage vehicles, vibrant colors, nostalgic atmosphere, high detail' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_208_0.png - text: 'Classic car show with vintage vehicles, vibrant colors, nostalgic atmosphere, high detail' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_209_1.png - text: 'Classic car show with vintage vehicles, vibrant colors, nostalgic atmosphere, high detail' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_210_2.png - text: 'Classic car show with vintage vehicles, vibrant colors, nostalgic atmosphere, high detail' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_211_3.png - text: 'Retro diner sign, ''Joe''s Diner'', classic 1950s design, neon lights, weathered look' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_212_0.png - text: 'Retro diner sign, ''Joe''s Diner'', classic 1950s design, neon lights, weathered look' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_213_1.png - text: 'Retro diner sign, ''Joe''s Diner'', classic 1950s design, neon lights, weathered look' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_214_2.png - text: 'Retro diner sign, ''Joe''s Diner'', classic 1950s design, neon lights, weathered look' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_215_3.png - text: 'Vintage store sign with elaborate typography, ''Antique Shop'', hand-painted, weathered look' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_216_0.png - text: 'Vintage store sign with elaborate typography, ''Antique Shop'', hand-painted, weathered look' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_217_1.png - text: 'Vintage store sign with elaborate typography, ''Antique Shop'', hand-painted, weathered look' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_218_2.png - text: 'Vintage store sign with elaborate typography, ''Antique Shop'', hand-painted, weathered look' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_219_3.png - text: 'ethnographic photography of teddy bear at a picnic holding a sign that says SOON, sitting next to a red sphere which is inside a capsule' 
parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_220_0.png - text: 'ethnographic photography of teddy bear at a picnic holding a sign that says SOON, sitting next to a red sphere which is inside a capsule' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_221_1.png - text: 'ethnographic photography of teddy bear at a picnic holding a sign that says SOON, sitting next to a red sphere which is inside a capsule' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_222_2.png - text: 'ethnographic photography of teddy bear at a picnic holding a sign that says SOON, sitting next to a red sphere which is inside a capsule' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_223_3.png --- # sd3-reality-mix This is a full rank finetune derived from [stabilityai/stable-diffusion-3-medium-diffusers](https://huggingface.co/stabilityai/stable-diffusion-3-medium-diffusers). The main validation prompt used during training was: ``` ethnographic photography of teddy bear at a picnic holding a sign that says SOON, sitting next to a red sphere which is inside a capsule ``` ## Validation settings - CFG: `5.5` - CFG Rescale: `0.0` - Steps: `30` - Sampler: `euler` - Seed: `42` - Resolutions: `512x512,1024x1024,1280x768,960x1152` Note: The validation settings are not necessarily the same as the [training settings](#training-settings). You can find some example images in the following gallery: <Gallery /> The text encoder **was not** trained. You may reuse the base model text encoder for inference. ## Training settings - Training epochs: 0 - Training steps: 7000 - Learning rate: 1e-06 - Effective batch size: 96 - Micro-batch size: 6 - Gradient accumulation steps: 2 - Number of GPUs: 8 - Prediction type: epsilon - Rescaled betas zero SNR: False - Optimizer: AdamW, stochastic bf16 - Precision: Pure BF16 - Xformers: Not used ## Datasets ### photo-concept-bucket - Repeats: 0 - Total number of images: ~557568 - Total number of aspect buckets: 38 - Resolution: 1.0 megapixels - Cropped: False - Crop style: None - Crop aspect: None ### dalle3 - Repeats: 0 - Total number of images: ~984960 - Total number of aspect buckets: 44 - Resolution: 1.0 megapixels - Cropped: False - Crop style: None - Crop aspect: None ## Inference ```python import torch from diffusers import StableDiffusion3Pipeline model_id = "sd3-reality-mix" prompt = "ethnographic photography of teddy bear at a picnic holding a sign that says SOON, sitting next to a red sphere which is inside a capsule" negative_prompt = "malformed, disgusting, overexposed, washed-out" pipeline = StableDiffusion3Pipeline.from_pretrained(model_id) pipeline.to('cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu') image = pipeline( prompt=prompt, negative_prompt='blurry, cropped, ugly', num_inference_steps=30, generator=torch.Generator(device='cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu').manual_seed(1641421826), width=1152, height=768, guidance_scale=5.5, guidance_rescale=0.0, ).images[0] image.save("output.png", format="PNG") ```
mradermacher/MultiPL-T-CodeLlama_34b-GGUF
mradermacher
"2024-06-25T14:21:03Z"
40,315
0
transformers
[ "transformers", "gguf", "en", "dataset:nuprl/MultiPL-T", "base_model:nuprl/MultiPL-T-CodeLlama_34b", "license:openrail", "endpoints_compatible", "region:us" ]
null
"2024-06-24T20:58:55Z"
--- base_model: nuprl/MultiPL-T-CodeLlama_34b datasets: - nuprl/MultiPL-T language: - en library_name: transformers license: openrail quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/nuprl/MultiPL-T-CodeLlama_34b <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/MultiPL-T-CodeLlama_34b-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/MultiPL-T-CodeLlama_34b-GGUF/resolve/main/MultiPL-T-CodeLlama_34b.Q2_K.gguf) | Q2_K | 12.6 | | | [GGUF](https://huggingface.co/mradermacher/MultiPL-T-CodeLlama_34b-GGUF/resolve/main/MultiPL-T-CodeLlama_34b.IQ3_XS.gguf) | IQ3_XS | 14.0 | | | [GGUF](https://huggingface.co/mradermacher/MultiPL-T-CodeLlama_34b-GGUF/resolve/main/MultiPL-T-CodeLlama_34b.Q3_K_S.gguf) | Q3_K_S | 14.7 | | | [GGUF](https://huggingface.co/mradermacher/MultiPL-T-CodeLlama_34b-GGUF/resolve/main/MultiPL-T-CodeLlama_34b.IQ3_S.gguf) | IQ3_S | 14.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/MultiPL-T-CodeLlama_34b-GGUF/resolve/main/MultiPL-T-CodeLlama_34b.IQ3_M.gguf) | IQ3_M | 15.3 | | | [GGUF](https://huggingface.co/mradermacher/MultiPL-T-CodeLlama_34b-GGUF/resolve/main/MultiPL-T-CodeLlama_34b.Q3_K_M.gguf) | Q3_K_M | 16.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/MultiPL-T-CodeLlama_34b-GGUF/resolve/main/MultiPL-T-CodeLlama_34b.Q3_K_L.gguf) | Q3_K_L | 17.9 | | | [GGUF](https://huggingface.co/mradermacher/MultiPL-T-CodeLlama_34b-GGUF/resolve/main/MultiPL-T-CodeLlama_34b.IQ4_XS.gguf) | IQ4_XS | 18.3 | | | [GGUF](https://huggingface.co/mradermacher/MultiPL-T-CodeLlama_34b-GGUF/resolve/main/MultiPL-T-CodeLlama_34b.Q4_K_S.gguf) | Q4_K_S | 19.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MultiPL-T-CodeLlama_34b-GGUF/resolve/main/MultiPL-T-CodeLlama_34b.Q4_K_M.gguf) | Q4_K_M | 20.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MultiPL-T-CodeLlama_34b-GGUF/resolve/main/MultiPL-T-CodeLlama_34b.Q5_K_S.gguf) | Q5_K_S | 23.3 | | | [GGUF](https://huggingface.co/mradermacher/MultiPL-T-CodeLlama_34b-GGUF/resolve/main/MultiPL-T-CodeLlama_34b.Q5_K_M.gguf) | Q5_K_M | 23.9 | | | [GGUF](https://huggingface.co/mradermacher/MultiPL-T-CodeLlama_34b-GGUF/resolve/main/MultiPL-T-CodeLlama_34b.Q6_K.gguf) | Q6_K | 27.8 | very good quality | | [GGUF](https://huggingface.co/mradermacher/MultiPL-T-CodeLlama_34b-GGUF/resolve/main/MultiPL-T-CodeLlama_34b.Q8_0.gguf) | Q8_0 | 36.0 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
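## Example (illustrative)

If you want a concrete starting point beyond the Usage pointer above, the sketch below shows one common way to run a downloaded quant with `llama-cpp-python`. It is a minimal, hypothetical example: the file name, context size, and sampling settings are assumptions for illustration and are not part of this repository.

```python
# Illustrative sketch (not from the original card): run a quant from the table
# above with llama-cpp-python, assuming the Q4_K_M file was downloaded locally.
from llama_cpp import Llama

llm = Llama(
    model_path="MultiPL-T-CodeLlama_34b.Q4_K_M.gguf",  # local path to the downloaded quant
    n_ctx=4096,       # context window; adjust to available memory
    n_gpu_layers=-1,  # offload all layers if llama.cpp was built with GPU support
)

out = llm("def fibonacci(n):", max_tokens=128, temperature=0.2)
print(out["choices"][0]["text"])
```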
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
timm/resnet18.tv_in1k
timm
"2024-02-10T23:38:42Z"
40,240
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "arxiv:1512.03385", "license:bsd-3-clause", "region:us" ]
image-classification
"2023-04-05T18:04:15Z"
--- license: bsd-3-clause library_name: timm tags: - image-classification - timm --- # Model card for resnet18.tv_in1k A ResNet-B image classification model. This model features: * ReLU activations * single layer 7x7 convolution with pooling * 1x1 convolution shortcut downsample Trained on ImageNet-1k, original torchvision model weight. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 11.7 - GMACs: 1.8 - Activations (M): 2.5 - Image size: 224 x 224 - **Papers:** - Deep Residual Learning for Image Recognition: https://arxiv.org/abs/1512.03385 - **Original:** https://github.com/pytorch/vision ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm import torch img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('resnet18.tv_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'resnet18.tv_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 64, 112, 112]) # torch.Size([1, 64, 56, 56]) # torch.Size([1, 128, 28, 28]) # torch.Size([1, 256, 14, 14]) # torch.Size([1, 512, 7, 7]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'resnet18.tv_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 512, 7, 7) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
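For a quick, programmatic view of related checkpoints before reading the table, you can query timm's model registry; this is a small illustrative sketch (the `'resnet*'` filter is an assumption and only matches names that start with `resnet`, not every family listed below):

```python
import timm

# List checkpoints matching 'resnet*' that ship pretrained weights in timm;
# the returned tags correspond to entries in the `model` column below.
resnet_variants = timm.list_models('resnet*', pretrained=True)
print(len(resnet_variants))
print(resnet_variants[:5])
```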
|model |img_size|top1 |top5 |param_count|gmacs|macts|img/sec| |------------------------------------------|--------|-----|-----|-----------|-----|-----|-------| |[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|320 |86.72|98.17|93.6 |35.2 |69.7 |451 | |[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|288 |86.51|98.08|93.6 |28.5 |56.4 |560 | |[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|288 |86.49|98.03|93.6 |28.5 |56.4 |557 | |[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|224 |85.96|97.82|93.6 |17.2 |34.2 |923 | |[resnext101_32x32d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x32d.fb_wsl_ig1b_ft_in1k)|224 |85.11|97.44|468.5 |87.3 |91.1 |254 | |[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|416 |85.0 |97.12|191.9 |108.4|213.8|134 | |[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|352 |84.96|97.22|102.1 |50.2 |101.2|291 | |[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|320 |84.73|97.18|102.1 |41.5 |83.7 |353 | |[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|384 |84.71|96.99|164.0 |77.6 |154.7|183 | |[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|288 |84.57|97.08|93.6 |28.5 |56.4 |557 | |[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|320 |84.45|97.08|93.2 |31.5 |67.8 |446 | |[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|352 |84.43|96.97|129.9 |51.1 |105.5|280 | |[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|288 |84.36|96.92|93.6 |27.6 |53.0 |595 | |[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|320 |84.35|97.04|66.8 |24.1 |47.7 |610 | |[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|288 |84.3 |96.94|164.0 |43.7 |87.1 |333 | |[resnext101_32x8d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_swsl_ig1b_ft_in1k)|224 |84.28|97.17|88.8 |16.5 |31.2 |1100 | |[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|320 |84.24|96.86|191.9 |64.2 |126.6|228 | |[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|288 |84.19|96.87|93.6 |27.2 |51.6 |613 | |[resnext101_32x16d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_wsl_ig1b_ft_in1k)|224 |84.18|97.19|194.0 |36.3 |51.2 |581 | |[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|288 |84.11|97.11|44.6 |15.1 |29.0 |1144 | |[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|320 |83.97|96.82|64.7 |31.2 |67.3 |518 | |[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|256 |83.87|96.75|93.2 |20.2 |43.4 |692 | |[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|224 |83.86|96.65|93.6 |17.2 |34.2 |923 | |[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|320 |83.72|96.61|86.6 |24.3 |48.1 |617 | |[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|256 |83.69|96.78|66.8 |15.4 |30.6 |943 | |[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|224 |83.68|96.61|93.6 |16.7 |32.0 |986 | 
|[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|320 |83.67|96.74|60.2 |24.1 |47.7 |706 | |[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|256 |83.59|96.61|129.9 |27.1 |55.8 |526 | |[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|224 |83.58|96.4 |93.6 |16.5 |31.2 |1013 | |[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|224 |83.54|96.83|44.6 |9.1 |17.6 |1864 | |[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|288 |83.46|96.54|60.2 |19.1 |37.3 |904 | |[resnext101_32x16d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_swsl_ig1b_ft_in1k)|224 |83.35|96.85|194.0 |36.3 |51.2 |582 | |[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|256 |83.23|96.53|64.7 |20.0 |43.1 |809 | |[resnext101_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_swsl_ig1b_ft_in1k)|224 |83.22|96.75|44.2 |8.0 |21.2 |1814 | |[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|288 |83.16|96.38|83.5 |25.7 |51.6 |590 | |[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|256 |83.14|96.38|60.2 |15.4 |30.5 |1096 | |[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|320 |83.02|96.45|44.6 |16.5 |34.8 |992 | |[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|288 |82.98|96.54|44.6 |13.4 |28.2 |1077 | |[resnext101_64x4d.tv_in1k](https://huggingface.co/timm/resnext101_64x4d.tv_in1k)|224 |82.98|96.25|83.5 |15.5 |31.2 |989 | |[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|256 |82.86|96.28|86.6 |15.6 |30.8 |951 | |[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|224 |82.83|96.22|88.8 |16.5 |31.2 |1099 | |[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|224 |82.8 |96.13|60.2 |11.6 |22.6 |1486 | |[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|288 |82.8 |96.32|44.6 |13.0 |26.8 |1291 | |[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|288 |82.74|95.71|60.2 |19.1 |37.3 |905 | |[resnext101_32x8d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_wsl_ig1b_ft_in1k)|224 |82.69|96.63|88.8 |16.5 |31.2 |1100 | |[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|288 |82.62|95.75|60.2 |19.1 |37.3 |904 | |[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|288 |82.61|96.49|25.6 |8.9 |20.6 |1729 | |[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|288 |82.53|96.13|36.8 |9.9 |21.5 |1773 | |[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|224 |82.5 |96.02|126.9 |22.8 |21.2 |1078 | |[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|224 |82.46|95.92|83.5 |15.5 |31.2 |987 | |[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|288 |82.36|96.18|35.7 |8.1 |20.9 |1964 | |[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|320 |82.35|96.14|25.6 |8.8 |24.1 |1386 | |[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|288 |82.31|95.63|44.6 |13.0 |26.8 |1291 | |[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|288 |82.29|96.01|63.6 |13.6 |28.5 |1078 | |[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|224 |82.29|96.0 |60.2 |11.6 |22.6 |1484 | 
|[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|288 |82.27|96.06|68.9 |18.9 |23.8 |1176 | |[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|256 |82.26|96.07|44.6 |10.6 |22.2 |1542 | |[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|288 |82.24|95.73|44.6 |13.0 |26.8 |1290 | |[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|288 |82.2 |96.14|27.6 |7.0 |23.8 |1547 | |[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|224 |82.18|96.05|44.6 |8.1 |17.1 |1771 | |[resnext50_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_swsl_ig1b_ft_in1k)|224 |82.17|96.22|25.0 |4.3 |14.4 |2943 | |[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|288 |82.12|95.65|25.6 |7.1 |19.6 |1704 | |[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|288 |82.03|95.94|25.0 |7.0 |23.8 |1745 | |[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|288 |82.0 |96.15|24.9 |5.8 |12.7 |1787 | |[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|256 |81.99|95.85|36.8 |7.8 |17.0 |2230 | |[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|176 |81.98|95.72|88.8 |10.3 |19.4 |1768 | |[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|224 |81.97|95.24|60.2 |11.6 |22.6 |1486 | |[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|224 |81.93|95.75|44.6 |7.8 |16.2 |2122 | |[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|224 |81.9 |95.77|44.6 |7.8 |16.2 |2118 | |[resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k)|224 |81.84|96.1 |194.0 |36.3 |51.2 |583 | |[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|256 |81.78|95.94|35.7 |6.4 |16.6 |2471 | |[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|224 |81.77|95.22|60.2 |11.6 |22.6 |1485 | |[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|224 |81.74|96.06|25.6 |5.4 |12.4 |2813 | |[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|288 |81.65|95.54|25.6 |7.1 |19.6 |1703 | |[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|288 |81.64|95.88|25.6 |7.2 |19.7 |1694 | |[resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k)|224 |81.62|96.04|88.8 |16.5 |31.2 |1101 | |[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|224 |81.61|95.76|68.9 |11.4 |14.4 |1930 | |[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|288 |81.61|95.83|25.6 |8.5 |19.2 |1868 | |[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|224 |81.5 |95.16|44.6 |7.8 |16.2 |2125 | |[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|288 |81.48|95.16|25.0 |7.0 |23.8 |1745 | |[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|288 |81.47|95.71|25.9 |6.9 |18.6 |2071 | |[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|224 |81.45|95.53|68.9 |11.4 |14.4 |1929 | |[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|288 |81.44|95.22|25.6 |7.2 |19.7 |1908 | |[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|256 |81.44|95.67|25.6 |5.6 |15.4 |2168 | 
|[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|288 |81.4 |95.82|30.2 |6.8 |13.9 |2132 | |[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|288 |81.37|95.74|25.6 |7.2 |19.7 |1910 | |[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|224 |81.32|95.19|44.6 |7.8 |16.2 |2125 | |[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|288 |81.3 |95.65|28.1 |6.8 |18.4 |1803 | |[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|288 |81.3 |95.11|25.0 |7.0 |23.8 |1746 | |[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|224 |81.27|95.62|27.6 |4.3 |14.4 |2591 | |[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|224 |81.26|95.16|25.6 |4.3 |11.8 |2823 | |[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|288 |81.23|95.54|15.7 |4.8 |19.6 |2117 | |[senet154.gluon_in1k](https://huggingface.co/timm/senet154.gluon_in1k)|224 |81.23|95.35|115.1 |20.8 |38.7 |545 | |[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|288 |81.22|95.11|25.6 |6.8 |18.4 |2089 | |[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|288 |81.22|95.63|25.6 |6.8 |18.4 |676 | |[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|288 |81.18|95.09|25.6 |7.2 |19.7 |1908 | |[resnet50.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet50.fb_swsl_ig1b_ft_in1k)|224 |81.18|95.98|25.6 |4.1 |11.1 |3455 | |[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|224 |81.17|95.34|25.0 |4.3 |14.4 |2933 | |[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|224 |81.1 |95.33|25.0 |4.3 |14.4 |2934 | |[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|288 |81.1 |95.23|28.1 |6.8 |18.4 |1801 | |[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|288 |81.1 |95.12|28.1 |6.8 |18.4 |1799 | |[resnet152s.gluon_in1k](https://huggingface.co/timm/resnet152s.gluon_in1k)|224 |81.02|95.41|60.3 |12.9 |25.0 |1347 | |[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|288 |80.97|95.44|25.6 |6.8 |18.4 |2085 | |[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|256 |80.94|95.45|25.9 |5.4 |14.7 |2571 | |[resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.93|95.73|44.2 |8.0 |21.2 |1814 | |[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|288 |80.91|95.55|25.6 |6.8 |18.4 |2084 | |[seresnext101_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_32x4d.gluon_in1k)|224 |80.9 |95.31|49.0 |8.0 |21.3 |1585 | |[seresnext101_64x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_64x4d.gluon_in1k)|224 |80.9 |95.3 |88.2 |15.5 |31.2 |918 | |[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|288 |80.86|95.52|25.6 |6.8 |18.4 |2085 | |[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|224 |80.85|95.43|25.6 |4.1 |11.1 |3450 | |[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|224 |80.84|95.02|25.6 |4.3 |11.8 |2821 | |[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|224 |80.79|95.62|24.9 |3.5 |7.7 |2961 | |[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|288 |80.79|95.36|19.8 |6.0 |14.8 |2506 | |[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|288 |80.79|95.58|19.9 |4.2 
|10.6 |2349 | |[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|288 |80.78|94.99|25.6 |6.8 |18.4 |2088 | |[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|288 |80.71|95.43|25.6 |6.8 |18.4 |2087 | |[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|288 |80.7 |95.39|25.0 |7.0 |23.8 |1749 | |[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|192 |80.69|95.24|63.6 |6.0 |12.7 |2270 | |[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|224 |80.68|94.71|25.6 |4.4 |11.9 |3162 | |[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|288 |80.68|95.36|19.7 |6.0 |14.8 |2637 | |[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|224 |80.67|95.3 |25.6 |4.1 |11.1 |3452 | |[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|288 |80.67|95.42|25.0 |7.4 |25.1 |1626 | |[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|224 |80.63|95.21|25.6 |5.2 |11.6 |3034 | |[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|224 |80.61|95.32|25.6 |4.4 |11.9 |2813 | |[resnext101_64x4d.gluon_in1k](https://huggingface.co/timm/resnext101_64x4d.gluon_in1k)|224 |80.61|94.99|83.5 |15.5 |31.2 |989 | |[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|288 |80.6 |95.31|19.9 |6.0 |14.8 |2578 | |[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|256 |80.57|95.17|15.7 |3.8 |15.5 |2710 | |[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|224 |80.56|95.0 |60.2 |11.6 |22.6 |1483 | |[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|224 |80.53|95.16|25.6 |4.4 |11.9 |3164 | |[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|224 |80.53|94.46|25.0 |4.3 |14.4 |2930 | |[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|176 |80.48|94.98|126.9 |14.3 |13.2 |1719 | |[resnet152d.gluon_in1k](https://huggingface.co/timm/resnet152d.gluon_in1k)|224 |80.47|95.2 |60.2 |11.8 |23.4 |1428 | |[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|288 |80.45|95.32|25.6 |6.8 |18.4 |2086 | |[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|224 |80.45|95.24|30.2 |4.1 |8.4 |3530 | |[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|224 |80.45|94.63|25.0 |4.3 |14.4 |2936 | |[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|176 |80.43|95.09|68.9 |7.3 |9.0 |3015 | |[resnet101d.gluon_in1k](https://huggingface.co/timm/resnet101d.gluon_in1k)|224 |80.42|95.01|44.6 |8.1 |17.0 |2007 | |[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|224 |80.38|94.6 |25.6 |4.1 |11.1 |3461 | |[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|256 |80.36|95.1 |19.8 |4.8 |11.7 |3267 | |[resnext101_32x4d.gluon_in1k](https://huggingface.co/timm/resnext101_32x4d.gluon_in1k)|224 |80.34|94.93|44.2 |8.0 |21.2 |1814 | |[resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.32|95.4 |25.0 |4.3 |14.4 |2941 | |[resnet101s.gluon_in1k](https://huggingface.co/timm/resnet101s.gluon_in1k)|224 |80.28|95.16|44.7 |9.2 |18.6 |1851 | |[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|224 |80.26|95.08|28.1 |4.1 |11.1 |2972 | |[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|288 |80.24|95.24|25.6 |8.5 |19.9 |1523 | 
|[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|224 |80.22|94.63|25.6 |4.4 |11.9 |3162 | |[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|176 |80.2 |94.64|60.2 |7.2 |14.0 |2346 | |[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|224 |80.08|94.74|28.1 |4.1 |11.1 |2969 | |[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|256 |80.08|94.97|19.7 |4.8 |11.7 |3284 | |[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|256 |80.06|94.99|19.9 |4.8 |11.7 |3216 | |[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|224 |80.06|94.95|25.6 |4.1 |11.1 |1109 | |[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|224 |80.02|94.71|28.1 |4.1 |11.1 |2962 | |[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|288 |79.97|95.05|25.6 |6.8 |18.4 |2086 | |[resnet152c.gluon_in1k](https://huggingface.co/timm/resnet152c.gluon_in1k)|224 |79.92|94.84|60.2 |11.8 |23.4 |1455 | |[seresnext50_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext50_32x4d.gluon_in1k)|224 |79.91|94.82|27.6 |4.3 |14.4 |2591 | |[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|224 |79.91|94.67|25.6 |4.1 |11.1 |3456 | |[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|176 |79.9 |94.6 |44.6 |4.9 |10.1 |3341 | |[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|224 |79.89|94.97|35.7 |4.5 |12.1 |2774 | |[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|224 |79.88|94.87|25.6 |4.1 |11.1 |3455 | |[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|320 |79.86|95.07|16.0 |5.2 |16.4 |2168 | |[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|224 |79.85|94.56|25.6 |4.1 |11.1 |3460 | |[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|288 |79.83|94.97|25.6 |6.8 |18.4 |2087 | |[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|224 |79.82|94.62|44.6 |7.8 |16.2 |2114 | |[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|224 |79.76|94.6 |25.0 |4.3 |14.4 |2943 | |[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|224 |79.74|94.95|25.6 |4.1 |11.1 |3455 | |[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|224 |79.74|94.87|19.9 |2.5 |6.4 |3929 | |[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|288 |79.71|94.83|19.7 |6.0 |14.8 |2710 | |[resnet152.gluon_in1k](https://huggingface.co/timm/resnet152.gluon_in1k)|224 |79.68|94.74|60.2 |11.6 |22.6 |1486 | |[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|224 |79.67|94.87|25.0 |4.5 |15.2 |2729 | |[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|288 |79.63|94.91|25.6 |6.8 |18.4 |2086 | |[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|224 |79.56|94.72|25.6 |4.3 |11.8 |2805 | |[resnet101c.gluon_in1k](https://huggingface.co/timm/resnet101c.gluon_in1k)|224 |79.53|94.58|44.6 |8.1 |17.0 |2062 | |[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|224 |79.52|94.61|25.6 |4.1 |11.1 |3459 | |[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|176 |79.42|94.64|25.6 |2.6 |6.9 |5397 | |[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|288 |79.4 |94.66|18.0 |5.9 |14.6 |2752 | |[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|224 |79.38|94.57|25.6 |4.1 |11.1 |3459 | 
|[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|176 |79.37|94.3 |25.0 |2.7 |9.0 |4577 | |[resnext50_32x4d.gluon_in1k](https://huggingface.co/timm/resnext50_32x4d.gluon_in1k)|224 |79.36|94.43|25.0 |4.3 |14.4 |2942 | |[resnext101_32x8d.tv_in1k](https://huggingface.co/timm/resnext101_32x8d.tv_in1k)|224 |79.31|94.52|88.8 |16.5 |31.2 |1100 | |[resnet101.gluon_in1k](https://huggingface.co/timm/resnet101.gluon_in1k)|224 |79.31|94.53|44.6 |7.8 |16.2 |2125 | |[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|224 |79.31|94.63|25.6 |5.2 |12.0 |2524 | |[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|176 |79.27|94.49|25.6 |2.6 |6.9 |5404 | |[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|224 |79.25|94.31|25.0 |4.3 |14.4 |2931 | |[resnet50.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet50.fb_ssl_yfcc100m_ft_in1k)|224 |79.22|94.84|25.6 |4.1 |11.1 |3451 | |[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|256 |79.21|94.56|19.7 |4.8 |11.7 |3392 | |[resnet50d.gluon_in1k](https://huggingface.co/timm/resnet50d.gluon_in1k)|224 |79.07|94.48|25.6 |4.4 |11.9 |3162 | |[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|224 |79.03|94.38|25.6 |4.1 |11.1 |3453 | |[resnet50.am_in1k](https://huggingface.co/timm/resnet50.am_in1k)|224 |79.01|94.39|25.6 |4.1 |11.1 |3461 | |[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|256 |79.01|94.37|18.0 |4.6 |11.6 |3440 | |[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|256 |78.9 |94.54|16.0 |3.4 |10.5 |3421 | |[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|160 |78.89|94.11|60.2 |5.9 |11.5 |2745 | |[wide_resnet101_2.tv_in1k](https://huggingface.co/timm/wide_resnet101_2.tv_in1k)|224 |78.84|94.28|126.9 |22.8 |21.2 |1079 | |[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|288 |78.83|94.24|16.8 |4.5 |16.8 |2251 | |[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|224 |78.81|94.32|25.6 |4.1 |11.1 |3454 | |[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|288 |78.74|94.33|16.8 |4.5 |16.7 |2264 | |[resnet50s.gluon_in1k](https://huggingface.co/timm/resnet50s.gluon_in1k)|224 |78.72|94.23|25.7 |5.5 |13.5 |2796 | |[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|224 |78.71|94.24|25.6 |4.4 |11.9 |3154 | |[wide_resnet50_2.tv_in1k](https://huggingface.co/timm/wide_resnet50_2.tv_in1k)|224 |78.47|94.09|68.9 |11.4 |14.4 |1934 | |[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|224 |78.46|94.27|25.6 |4.1 |11.1 |3454 | |[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|288 |78.43|94.35|21.8 |6.5 |7.5 |3291 | |[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|288 |78.42|94.04|10.5 |3.1 |13.3 |3226 | |[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|320 |78.33|94.13|16.0 |5.2 |16.4 |2391 | |[resnet152.tv_in1k](https://huggingface.co/timm/resnet152.tv_in1k)|224 |78.32|94.04|60.2 |11.6 |22.6 |1487 | |[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|288 |78.28|94.1 |10.4 |3.1 |13.3 |3062 | |[bat_resnext26ts.ch_in1k](https://huggingface.co/timm/bat_resnext26ts.ch_in1k)|256 |78.25|94.1 |10.7 |2.5 |12.5 |3393 | |[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|224 |78.06|93.78|25.6 |4.1 |11.1 |3450 | 
|[resnet50c.gluon_in1k](https://huggingface.co/timm/resnet50c.gluon_in1k)|224 |78.0 |93.99|25.6 |4.4 |11.9 |3286 | |[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|288 |78.0 |93.91|10.3 |3.1 |13.3 |3297 | |[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|224 |77.98|93.75|16.8 |2.7 |10.1 |3841 | |[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|288 |77.92|93.77|21.8 |6.1 |6.2 |3609 | |[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|160 |77.88|93.71|44.6 |4.0 |8.3 |3926 | |[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|256 |77.87|93.84|16.0 |3.4 |10.5 |3772 | |[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|256 |77.86|93.79|10.4 |2.4 |10.5 |4263 | |[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|160 |77.82|93.81|35.7 |2.3 |6.2 |5238 | |[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|256 |77.81|93.82|10.5 |2.4 |10.5 |4183 | |[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|160 |77.79|93.6 |25.6 |2.2 |6.0 |5329 | |[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|160 |77.73|93.32|25.0 |2.2 |7.4 |5576 | |[resnext50_32x4d.tv_in1k](https://huggingface.co/timm/resnext50_32x4d.tv_in1k)|224 |77.61|93.7 |25.0 |4.3 |14.4 |2944 | |[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|224 |77.59|93.61|16.8 |2.7 |10.2 |3807 | |[resnet50.gluon_in1k](https://huggingface.co/timm/resnet50.gluon_in1k)|224 |77.58|93.72|25.6 |4.1 |11.1 |3455 | |[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|256 |77.44|93.56|10.3 |2.4 |10.5 |4284 | |[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|288 |77.41|93.63|16.0 |4.3 |13.5 |2907 | |[resnet101.tv_in1k](https://huggingface.co/timm/resnet101.tv_in1k)|224 |77.38|93.54|44.6 |7.8 |16.2 |2125 | |[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|160 |77.22|93.27|25.6 |2.2 |6.1 |5982 | |[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|288 |77.17|93.47|10.3 |3.1 |13.3 |3392 | |[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|288 |77.15|93.27|21.8 |6.1 |6.2 |3615 | |[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|224 |77.1 |93.37|21.8 |3.9 |4.5 |5436 | |[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|224 |77.02|93.07|28.1 |4.1 |11.1 |2952 | |[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|256 |76.78|93.13|10.3 |2.4 |10.5 |4410 | |[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|224 |76.7 |93.17|16.0 |2.6 |8.2 |4859 | |[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|288 |76.5 |93.35|21.8 |6.1 |6.2 |3617 | |[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|224 |76.42|92.87|21.8 |3.7 |3.7 |5984 | |[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|288 |76.35|93.18|16.0 |3.9 |12.2 |3331 | |[resnet50.tv_in1k](https://huggingface.co/timm/resnet50.tv_in1k)|224 |76.13|92.86|25.6 |4.1 |11.1 |3457 | |[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|160 |75.96|92.5 |25.6 |2.1 |5.7 |6490 | |[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|224 |75.52|92.44|21.8 |3.7 |3.7 |5991 | |[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|224 |75.3 |92.58|16.0 |2.4 |7.4 |5583 | |[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|224 
|75.16|92.18|21.8 |3.7 |3.7 |5994 | |[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|160 |75.1 |92.08|28.1 |2.1 |5.7 |5513 | |[resnet34.gluon_in1k](https://huggingface.co/timm/resnet34.gluon_in1k)|224 |74.57|91.98|21.8 |3.7 |3.7 |5984 | |[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|288 |73.81|91.83|11.7 |3.4 |5.4 |5196 | |[resnet34.tv_in1k](https://huggingface.co/timm/resnet34.tv_in1k)|224 |73.32|91.42|21.8 |3.7 |3.7 |5979 | |[resnet18.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet18.fb_swsl_ig1b_ft_in1k)|224 |73.28|91.73|11.7 |1.8 |2.5 |10213 | |[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|288 |73.16|91.03|11.7 |3.0 |4.1 |6050 | |[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|224 |72.98|91.11|21.8 |3.7 |3.7 |5967 | |[resnet18.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet18.fb_ssl_yfcc100m_ft_in1k)|224 |72.6 |91.42|11.7 |1.8 |2.5 |10213 | |[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|288 |72.37|90.59|11.7 |3.0 |4.1 |6051 | |[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|224 |72.26|90.31|10.1 |1.7 |5.8 |7026 | |[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|224 |72.26|90.68|11.7 |2.1 |3.3 |8707 | |[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|224 |71.49|90.07|11.7 |1.8 |2.5 |10187 | |[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|176 |71.31|89.69|10.1 |1.1 |3.6 |10970 | |[resnet18.gluon_in1k](https://huggingface.co/timm/resnet18.gluon_in1k)|224 |70.84|89.76|11.7 |1.8 |2.5 |10210 | |[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|224 |70.64|89.47|11.7 |1.8 |2.5 |10194 | |[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|160 |70.56|89.52|21.8 |1.9 |1.9 |10737 | |[resnet18.tv_in1k](https://huggingface.co/timm/resnet18.tv_in1k)|224 |69.76|89.07|11.7 |1.8 |2.5 |10205 | |[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|224 |68.34|88.03|5.4 |1.1 |2.4 |13079 | |[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|224 |68.25|88.17|11.7 |1.8 |2.5 |10167 | |[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|176 |66.71|86.96|5.4 |0.7 |1.5 |20327 | |[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|160 |65.66|86.26|11.7 |0.9 |1.3 |18229 | ## Citation ```bibtex @article{He2015, author = {Kaiming He and Xiangyu Zhang and Shaoqing Ren and Jian Sun}, title = {Deep Residual Learning for Image Recognition}, journal = {arXiv preprint arXiv:1512.03385}, year = {2015} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
apple/OpenELM-450M-Instruct
apple
"2024-05-02T00:55:48Z"
40,139
36
transformers
[ "transformers", "safetensors", "openelm", "text-generation", "custom_code", "arxiv:2404.14619", "license:other", "autotrain_compatible", "region:us" ]
text-generation
"2024-04-12T21:51:56Z"
--- license: other license_name: apple-sample-code-license license_link: LICENSE --- # OpenELM *Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari* We introduce **OpenELM**, a family of **Open** **E**fficient **L**anguage **M**odels. OpenELM uses a layer-wise scaling strategy to efficiently allocate parameters within each layer of the transformer model, leading to enhanced accuracy. We pretrained OpenELM models using the [CoreNet](https://github.com/apple/corenet) library. We release both pretrained and instruction tuned models with 270M, 450M, 1.1B and 3B parameters. Our pre-training dataset contains RefinedWeb, deduplicated PILE, a subset of RedPajama, and a subset of Dolma v1.6, totaling approximately 1.8 trillion tokens. Please check license agreements and terms of these datasets before using them. ## Usage We have provided an example function to generate output from OpenELM models loaded via [HuggingFace Hub](https://huggingface.co/docs/hub/) in `generate_openelm.py`. You can try the model by running the following command: ``` python generate_openelm.py --model apple/OpenELM-450M-Instruct --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2 ``` Please refer to [this link](https://huggingface.co/docs/hub/security-tokens) to obtain your hugging face access token. Additional arguments to the hugging face generate function can be passed via `generate_kwargs`. As an example, to speedup the inference, you can try [lookup token speculative generation](https://huggingface.co/docs/transformers/generation_strategies) by passing the `prompt_lookup_num_tokens` argument as follows: ``` python generate_openelm.py --model apple/OpenELM-450M-Instruct --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2 prompt_lookup_num_tokens=10 ``` Alternatively, try model-wise speculative generation with an [assistive model](https://huggingface.co/blog/assisted-generation) by passing a smaller model through the `assistant_model` argument, for example: ``` python generate_openelm.py --model apple/OpenELM-450M-Instruct --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2 --assistant_model [SMALLER_MODEL] ``` ## Main Results ### Zero-Shot | **Model Size** | **ARC-c** | **ARC-e** | **BoolQ** | **HellaSwag** | **PIQA** | **SciQ** | **WinoGrande** | **Average** | |-----------------------------------------------------------------------------|-----------|-----------|-----------|---------------|-----------|-----------|----------------|-------------| | [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) | 26.45 | 45.08 | **53.98** | 46.71 | 69.75 | **84.70** | **53.91** | 54.37 | | [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **30.55** | **46.68** | 48.56 | **52.07** | **70.78** | 84.40 | 52.72 | **55.11** | | [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) | 27.56 | 48.06 | 55.78 | 53.97 | 72.31 | 87.20 | 58.01 | 57.56 | | [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **30.38** | **50.00** | **60.37** | **59.34** | **72.63** | **88.00** | **58.96** | **59.95** | | [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B) | 32.34 | **55.43** | 63.58 | 64.81 | **75.57** | **90.60** | 61.72 | 63.44 | | 
[OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **37.97** | 52.23 | **70.00** | **71.20** | 75.03 | 89.30 | **62.75** | **65.50** | | [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) | 35.58 | 59.89 | 67.40 | 72.44 | 78.24 | **92.70** | 65.51 | 67.39 | | [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct) | **39.42** | **61.74** | **68.17** | **76.36** | **79.00** | 92.50 | **66.85** | **69.15** | ### LLM360 | **Model Size** | **ARC-c** | **HellaSwag** | **MMLU** | **TruthfulQA** | **WinoGrande** | **Average** | |-----------------------------------------------------------------------------|-----------|---------------|-----------|----------------|----------------|-------------| | [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) | 27.65 | 47.15 | 25.72 | **39.24** | **53.83** | 38.72 | | [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **32.51** | **51.58** | **26.70** | 38.72 | 53.20 | **40.54** | | [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) | 30.20 | 53.86 | **26.01** | 40.18 | 57.22 | 41.50 | | [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **33.53** | **59.31** | 25.41 | **40.48** | **58.33** | **43.41** | | [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B) | 36.69 | 65.71 | **27.05** | 36.98 | 63.22 | 45.93 | | [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **41.55** | **71.83** | 25.65 | **45.95** | **64.72** | **49.94** | | [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) | 42.24 | 73.28 | **26.76** | 34.98 | 67.25 | 48.90 | | [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct) | **47.70** | **76.87** | 24.80 | **38.76** | **67.96** | **51.22** | ### OpenLLM Leaderboard | **Model Size** | **ARC-c** | **CrowS-Pairs** | **HellaSwag** | **MMLU** | **PIQA** | **RACE** | **TruthfulQA** | **WinoGrande** | **Average** | |-----------------------------------------------------------------------------|-----------|-----------------|---------------|-----------|-----------|-----------|----------------|----------------|-------------| | [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) | 27.65 | **66.79** | 47.15 | 25.72 | 69.75 | 30.91 | **39.24** | **53.83** | 45.13 | | [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **32.51** | 66.01 | **51.58** | **26.70** | **70.78** | 33.78 | 38.72 | 53.20 | **46.66** | | [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) | 30.20 | **68.63** | 53.86 | **26.01** | 72.31 | 33.11 | 40.18 | 57.22 | 47.69 | | [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **33.53** | 67.44 | **59.31** | 25.41 | **72.63** | **36.84** | **40.48** | **58.33** | **49.25** | | [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B) | 36.69 | **71.74** | 65.71 | **27.05** | **75.57** | 36.46 | 36.98 | 63.22 | 51.68 | | [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **41.55** | 71.02 | **71.83** | 25.65 | 75.03 | **39.43** | **45.95** | **64.72** | **54.40** | | [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) | 42.24 | **73.29** | 73.28 | **26.76** | 78.24 | **38.76** | 34.98 | 67.25 | 54.35 | | [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct) | **47.70** | 72.33 | **76.87** | 24.80 | **79.00** | 38.47 | **38.76** | **67.96** | **55.73** | See the technical report for more results and comparison. 
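In addition to the `generate_openelm.py` helper described in the Usage section, the checkpoint can also be loaded directly with `transformers`. The following is a minimal sketch, not an official recipe: it assumes `trust_remote_code=True` and reuses the gated Llama-2 tokenizer noted in the Evaluation section below, so a Hugging Face access token may be required.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# OpenELM ships custom modeling code on the Hub, so trust_remote_code is required.
model = AutoModelForCausalLM.from_pretrained(
    "apple/OpenELM-450M-Instruct", trust_remote_code=True
)
# OpenELM reuses the LLaMA tokenizer (see the add_bos_token note in the Evaluation section).
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

inputs = tokenizer("Once upon a time there was", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64, repetition_penalty=1.2)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Generation options such as `repetition_penalty` or `prompt_lookup_num_tokens` (for prompt-lookup speculative decoding) can be passed to `generate` directly in recent `transformers` releases, mirroring the `generate_kwargs` shown in the Usage section.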
## Evaluation ### Setup Install the following dependencies: ```bash # install public lm-eval-harness harness_repo="public-lm-eval-harness" git clone https://github.com/EleutherAI/lm-evaluation-harness ${harness_repo} cd ${harness_repo} # use main branch on 03-15-2024, SHA is dc90fec git checkout dc90fec pip install -e . cd .. # 66d6242 is the main branch on 2024-04-01 pip install datasets@git+https://github.com/huggingface/datasets.git@66d6242 pip install "tokenizers>=0.15.2" "transformers>=4.38.2" "sentencepiece>=0.2.0" ``` ### Evaluate OpenELM ```bash # OpenELM-450M-Instruct hf_model=apple/OpenELM-450M-Instruct # this flag is needed because lm-eval-harness sets add_bos_token to False by default, but OpenELM uses the LLaMA tokenizer which requires add_bos_token to be True tokenizer=meta-llama/Llama-2-7b-hf add_bos_token=True batch_size=1 mkdir lm_eval_output shot=0 task=arc_challenge,arc_easy,boolq,hellaswag,piqa,race,winogrande,sciq,truthfulqa_mc2 lm_eval --model hf \ --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \ --tasks ${task} \ --device cuda:0 \ --num_fewshot ${shot} \ --output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \ --batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log shot=5 task=mmlu,winogrande lm_eval --model hf \ --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \ --tasks ${task} \ --device cuda:0 \ --num_fewshot ${shot} \ --output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \ --batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log shot=25 task=arc_challenge,crows_pairs_english lm_eval --model hf \ --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \ --tasks ${task} \ --device cuda:0 \ --num_fewshot ${shot} \ --output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \ --batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log shot=10 task=hellaswag lm_eval --model hf \ --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \ --tasks ${task} \ --device cuda:0 \ --num_fewshot ${shot} \ --output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \ --batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log ``` ## Bias, Risks, and Limitations The release of OpenELM models aims to empower and enrich the open research community by providing access to state-of-the-art language models. Trained on publicly available datasets, these models are made available without any safety guarantees. Consequently, there exists the possibility of these models producing outputs that are inaccurate, harmful, biased, or objectionable in response to user prompts. Thus, it is imperative for users and developers to undertake thorough safety testing and implement appropriate filtering mechanisms tailored to their specific requirements.
## Citation If you find our work useful, please cite: ```BibTex @article{mehtaOpenELMEfficientLanguage2024, title = {{OpenELM}: {An} {Efficient} {Language} {Model} {Family} with {Open} {Training} and {Inference} {Framework}}, shorttitle = {{OpenELM}}, url = {https://arxiv.org/abs/2404.14619v1}, language = {en}, urldate = {2024-04-24}, journal = {arXiv.org}, author = {Mehta, Sachin and Sekhavat, Mohammad Hossein and Cao, Qingqing and Horton, Maxwell and Jin, Yanzi and Sun, Chenfan and Mirzadeh, Iman and Najibi, Mahyar and Belenko, Dmitry and Zatloukal, Peter and Rastegari, Mohammad}, month = apr, year = {2024}, } @inproceedings{mehta2022cvnets, author = {Mehta, Sachin and Abdolhosseini, Farzad and Rastegari, Mohammad}, title = {CVNets: High Performance Library for Computer Vision}, year = {2022}, booktitle = {Proceedings of the 30th ACM International Conference on Multimedia}, series = {MM '22} } ```
asapp/sew-d-tiny-100k-ft-ls100h
asapp
"2023-06-15T19:07:05Z"
40,078
2
transformers
[ "transformers", "pytorch", "safetensors", "sew-d", "automatic-speech-recognition", "audio", "speech", "hf-asr-leaderboard", "en", "dataset:librispeech_asr", "arxiv:2109.06870", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2022-03-02T23:29:05Z"
--- language: en datasets: - librispeech_asr tags: - audio - speech - automatic-speech-recognition - hf-asr-leaderboard license: apache-2.0 widget: - example_title: Librispeech sample 1 src: https://cdn-media.huggingface.co/speech_samples/sample1.flac - example_title: Librispeech sample 2 src: https://cdn-media.huggingface.co/speech_samples/sample2.flac model-index: - name: sew-d-tiny-100k-ft-ls100h results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: LibriSpeech (clean) type: librispeech_asr config: clean split: test args: language: en metrics: - name: Test WER type: wer value: 10.47 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: LibriSpeech (other) type: librispeech_asr config: other split: test args: language: en metrics: - name: Test WER type: wer value: 22.73 --- # SEW-D-tiny [SEW-D by ASAPP Research](https://github.com/asappresearch/sew) The base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc... Paper: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi **Abstract** This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes. The original model can be found under https://github.com/asappresearch/sew#model-checkpoints . # Usage To transcribe audio files the model can be used as a standalone acoustic model as follows: ```python from transformers import Wav2Vec2Processor, SEWDForCTC from datasets import load_dataset import soundfile as sf import torch # load the model and preprocessor processor = Wav2Vec2Processor.from_pretrained("asapp/sew-d-tiny-100k-ft-ls100h") model = SEWDForCTC.from_pretrained("asapp/sew-d-tiny-100k-ft-ls100h") # load the dummy dataset with speech samples ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation") # preprocess input_values = processor(ds[0]["audio"]["array"], return_tensors="pt").input_values # Batch size 1 # retrieve logits logits = model(input_values).logits # take argmax and decode predicted_ids = torch.argmax(logits, dim=-1) transcription = processor.batch_decode(predicted_ids) ``` ## Evaluation This code snippet shows how to evaluate **asapp/sew-d-tiny-100k-ft-ls100h** on LibriSpeech's "clean" and "other" test data. 
```python
from datasets import load_dataset
from transformers import SEWDForCTC, Wav2Vec2Processor
import torch
from jiwer import wer

librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")

model = SEWDForCTC.from_pretrained("asapp/sew-d-tiny-100k-ft-ls100h").to("cuda")
processor = Wav2Vec2Processor.from_pretrained("asapp/sew-d-tiny-100k-ft-ls100h")

def map_to_pred(batch):
    input_values = processor(batch["audio"][0]["array"], sampling_rate=16000,
                             return_tensors="pt", padding="longest").input_values
    with torch.no_grad():
        logits = model(input_values.to("cuda")).logits

    predicted_ids = torch.argmax(logits, dim=-1)
    transcription = processor.batch_decode(predicted_ids)
    batch["transcription"] = transcription
    return batch

result = librispeech_eval.map(map_to_pred, batched=True, batch_size=1, remove_columns=["audio"])

print("WER:", wer(result["text"], result["transcription"]))
```

*Result (WER)*:

| "clean" | "other" |
| --- | --- |
| 10.47 | 22.73 |
mradermacher/UNA-34Beagles-32K-bf16-v1-GGUF
mradermacher
"2024-06-26T17:38:03Z"
40,023
0
transformers
[ "transformers", "gguf", "en", "dataset:allenai/ai2_arc", "dataset:unalignment/spicy-3.1", "dataset:codeparrot/apps", "dataset:facebook/belebele", "dataset:boolq", "dataset:jondurbin/cinematika-v0.1", "dataset:drop", "dataset:lmsys/lmsys-chat-1m", "dataset:TIGER-Lab/MathInstruct", "dataset:cais/mmlu", "dataset:Muennighoff/natural-instructions", "dataset:openbookqa", "dataset:piqa", "dataset:Vezora/Tested-22k-Python-Alpaca", "dataset:cakiki/rosetta-code", "dataset:Open-Orca/SlimOrca", "dataset:spider", "dataset:squad_v2", "dataset:migtissera/Synthia-v1.3", "dataset:datasets/winogrande", "dataset:nvidia/HelpSteer", "dataset:Intel/orca_dpo_pairs", "dataset:unalignment/toxic-dpo-v0.1", "dataset:jondurbin/truthy-dpo-v0.1", "dataset:allenai/ultrafeedback_binarized_cleaned", "dataset:Squish42/bluemoon-fandom-1-1-rp-cleaned", "dataset:LDJnr/Capybara", "dataset:JULIELab/EmoBank", "dataset:kingbri/PIPPA-shareGPT", "base_model:one-man-army/UNA-34Beagles-32K-bf16-v1", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-26T06:16:56Z"
--- base_model: one-man-army/UNA-34Beagles-32K-bf16-v1 datasets: - allenai/ai2_arc - unalignment/spicy-3.1 - codeparrot/apps - facebook/belebele - boolq - jondurbin/cinematika-v0.1 - drop - lmsys/lmsys-chat-1m - TIGER-Lab/MathInstruct - cais/mmlu - Muennighoff/natural-instructions - openbookqa - piqa - Vezora/Tested-22k-Python-Alpaca - cakiki/rosetta-code - Open-Orca/SlimOrca - spider - squad_v2 - migtissera/Synthia-v1.3 - datasets/winogrande - nvidia/HelpSteer - Intel/orca_dpo_pairs - unalignment/toxic-dpo-v0.1 - jondurbin/truthy-dpo-v0.1 - allenai/ultrafeedback_binarized_cleaned - Squish42/bluemoon-fandom-1-1-rp-cleaned - LDJnr/Capybara - JULIELab/EmoBank - kingbri/PIPPA-shareGPT language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/one-man-army/UNA-34Beagles-32K-bf16-v1 <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/UNA-34Beagles-32K-bf16-v1-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/UNA-34Beagles-32K-bf16-v1-GGUF/resolve/main/UNA-34Beagles-32K-bf16-v1.Q2_K.gguf) | Q2_K | 12.9 | | | [GGUF](https://huggingface.co/mradermacher/UNA-34Beagles-32K-bf16-v1-GGUF/resolve/main/UNA-34Beagles-32K-bf16-v1.IQ3_XS.gguf) | IQ3_XS | 14.3 | | | [GGUF](https://huggingface.co/mradermacher/UNA-34Beagles-32K-bf16-v1-GGUF/resolve/main/UNA-34Beagles-32K-bf16-v1.Q3_K_S.gguf) | Q3_K_S | 15.1 | | | [GGUF](https://huggingface.co/mradermacher/UNA-34Beagles-32K-bf16-v1-GGUF/resolve/main/UNA-34Beagles-32K-bf16-v1.IQ3_S.gguf) | IQ3_S | 15.1 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/UNA-34Beagles-32K-bf16-v1-GGUF/resolve/main/UNA-34Beagles-32K-bf16-v1.IQ3_M.gguf) | IQ3_M | 15.7 | | | [GGUF](https://huggingface.co/mradermacher/UNA-34Beagles-32K-bf16-v1-GGUF/resolve/main/UNA-34Beagles-32K-bf16-v1.Q3_K_M.gguf) | Q3_K_M | 16.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/UNA-34Beagles-32K-bf16-v1-GGUF/resolve/main/UNA-34Beagles-32K-bf16-v1.Q3_K_L.gguf) | Q3_K_L | 18.2 | | | [GGUF](https://huggingface.co/mradermacher/UNA-34Beagles-32K-bf16-v1-GGUF/resolve/main/UNA-34Beagles-32K-bf16-v1.IQ4_XS.gguf) | IQ4_XS | 18.7 | | | [GGUF](https://huggingface.co/mradermacher/UNA-34Beagles-32K-bf16-v1-GGUF/resolve/main/UNA-34Beagles-32K-bf16-v1.Q4_K_S.gguf) | Q4_K_S | 19.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/UNA-34Beagles-32K-bf16-v1-GGUF/resolve/main/UNA-34Beagles-32K-bf16-v1.Q4_K_M.gguf) | Q4_K_M | 20.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/UNA-34Beagles-32K-bf16-v1-GGUF/resolve/main/UNA-34Beagles-32K-bf16-v1.Q5_K_S.gguf) | Q5_K_S | 23.8 | | | [GGUF](https://huggingface.co/mradermacher/UNA-34Beagles-32K-bf16-v1-GGUF/resolve/main/UNA-34Beagles-32K-bf16-v1.Q5_K_M.gguf) | Q5_K_M | 24.4 | | | [GGUF](https://huggingface.co/mradermacher/UNA-34Beagles-32K-bf16-v1-GGUF/resolve/main/UNA-34Beagles-32K-bf16-v1.Q6_K.gguf) | Q6_K | 28.3 | very good quality 
| | [GGUF](https://huggingface.co/mradermacher/UNA-34Beagles-32K-bf16-v1-GGUF/resolve/main/UNA-34Beagles-32K-bf16-v1.Q8_0.gguf) | Q8_0 | 36.6 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
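As a supplement to the usage note above, here is a minimal, illustrative sketch of downloading one of the quants listed in the table and running it locally with `llama-cpp-python`. The chosen filename, context size and prompt are examples only and are not part of the original release.

```python
# Minimal sketch (not from the original card): fetch a single-file quant from the
# table above and run it with llama-cpp-python. Filename, n_ctx and the prompt
# are illustrative choices.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="mradermacher/UNA-34Beagles-32K-bf16-v1-GGUF",
    filename="UNA-34Beagles-32K-bf16-v1.Q4_K_S.gguf",  # "fast, recommended" in the table above
)

llm = Llama(model_path=model_path, n_ctx=4096)
out = llm("Briefly explain what a GGUF quantization is.", max_tokens=128)
print(out["choices"][0]["text"])
```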
EleutherAI/polyglot-ko-1.3b
EleutherAI
"2023-06-07T05:03:00Z"
40,001
71
transformers
[ "transformers", "pytorch", "safetensors", "gpt_neox", "text-generation", "causal-lm", "ko", "arxiv:2104.09864", "arxiv:2204.04541", "arxiv:2306.02254", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2022-09-15T06:10:18Z"
--- language: - ko tags: - pytorch - causal-lm license: apache-2.0 --- # Polyglot-Ko-1.3B ## Model Description Polyglot-Ko is a series of large-scale Korean autoregressive language models made by the EleutherAI polyglot team. | Hyperparameter | Value | |----------------------|----------------------------------------------------------------------------------------------------------------------------------------| | \\(n_{parameters}\\) | 1,331,810,304 | | \\(n_{layers}\\) | 24 | | \\(d_{model}\\) | 2,048 | | \\(d_{ff}\\) | 8,192 | | \\(n_{heads}\\) | 16 | | \\(d_{head}\\) | 128 | | \\(n_{ctx}\\) | 2,048 | | \\(n_{vocab}\\) | 30,003 / 30,080 | | Positional Encoding | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) | | RoPE Dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) | The model consists of 24 transformer layers with a model dimension of 2048, and a feedforward dimension of 8192. The model dimension is split into 16 heads, each with a dimension of 128. Rotary Position Embedding (RoPE) is applied to 64 dimensions of each head. The model is trained with a tokenization vocabulary of 30003. ## Training data Polyglot-Ko-1.3B was trained on 863 GB of Korean language data (1.2TB before processing), a large-scale dataset curated by [TUNiB](https://tunib.ai/). The data collection process has abided by South Korean laws. This dataset was collected for the purpose of training Polyglot-Ko models, so it will not be released for public use. | Source |Size (GB) | Link | |-------------------------------------|---------|------------------------------------------| | Korean blog posts | 682.3 | - | | Korean news dataset | 87.0 | - | | Modu corpus | 26.4 |corpus.korean.go.kr | | Korean patent dataset | 19.0 | - | | Korean Q & A dataset | 18.1 | - | | KcBert dataset | 12.7 | github.com/Beomi/KcBERT | | Korean fiction dataset | 6.1 | - | | Korean online comments | 4.2 | - | | Korean wikipedia | 1.4 | ko.wikipedia.org | | Clova call | < 1.0 | github.com/clovaai/ClovaCall | | Naver sentiment movie corpus | < 1.0 | github.com/e9t/nsmc | | Korean hate speech dataset | < 1.0 | - | | Open subtitles | < 1.0 | opus.nlpl.eu/OpenSubtitles.php | | AIHub various tasks datasets | < 1.0 |aihub.or.kr | | Standard Korean language dictionary | < 1.0 | stdict.korean.go.kr/main/main.do | Furthermore, in order to avoid the model memorizing and generating personally identifiable information (PII) in the training data, we masked out the following sensitive information in the pre-processing stage: * `<|acc|>` : bank account number * `<|rrn|>` : resident registration number * `<|tell|>` : phone number ## Training procedure Polyglot-Ko-1.3B was trained on 213 billion tokens over 102,000 steps on 256 A100 GPUs with the [GPT-NeoX framework](https://github.com/EleutherAI/gpt-neox). It was trained as an autoregressive language model, using cross-entropy loss to maximize the likelihood of predicting the next token. 
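As a purely illustrative aside on the PII masking described in the Training data section above, the idea of replacing sensitive spans with the special tokens can be sketched as follows; the regular expressions are assumptions for demonstration and are not the preprocessing rules actually used to build the corpus.

```python
import re

# Illustrative only: the patterns below are assumed formats, not the actual
# preprocessing rules used for the Polyglot-Ko training corpus.
PII_PATTERNS = {
    r"\b\d{6}-\d{7}\b": "<|rrn|>",                   # resident registration number (assumed format)
    r"\b01[016789]-?\d{3,4}-?\d{4}\b": "<|tell|>",   # phone number (assumed format)
    r"\b\d{10,14}\b": "<|acc|>",                     # bank account number (assumed format)
}

def mask_pii(text: str) -> str:
    for pattern, token in PII_PATTERNS.items():
        text = re.sub(pattern, token, text)
    return text

print(mask_pii("Contact: 010-1234-5678"))  # -> "Contact: <|tell|>"
```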
## How to use This model can be easily loaded using the `AutoModelForCausalLM` class: ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("EleutherAI/polyglot-ko-1.3b") model = AutoModelForCausalLM.from_pretrained("EleutherAI/polyglot-ko-1.3b") ``` ## Evaluation results We evaluate Polyglot-Ko-1.3B on [KOBEST dataset](https://arxiv.org/abs/2204.04541), a benchmark with 5 downstream tasks, against comparable models such as skt/ko-gpt-trinity-1.2B-v0.5, kakaobrain/kogpt and facebook/xglm-7.5B, using the prompts provided in the paper. The following tables show the results when the number of few-shot examples differ. You can reproduce these results using the [polyglot branch of lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/polyglot) and the following scripts. For a fair comparison, all models were run under the same conditions and using the same prompts. In the tables, `n` refers to the number of few-shot examples. In case of WiC dataset, all models show random performance. ```console python main.py \ --model gpt2 \ --model_args pretrained='EleutherAI/polyglot-ko-1.3b' \ --tasks kobest_copa,kobest_hellaswag,kobest_boolq,kobest_sentineg,kobest_wic \ --num_fewshot $YOUR_NUM_FEWSHOT \ --batch_size $YOUR_BATCH_SIZE \ --device $YOUR_DEVICE \ --output_path $/path/to/output/ ``` ### COPA (F1) | Model | params | 0-shot | 5-shot | 10-shot | 50-shot | |----------------------------------------------------------------------------------------------|--------|--------|--------|---------|---------| | [skt/ko-gpt-trinity-1.2B-v0.5](https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5) | 1.2B | 0.6696 | 0.6477 | 0.6419 | 0.6514 | | [kakaobrain/kogpt](https://huggingface.co/kakaobrain/kogpt) | 6.0B | 0.7345 | 0.7287 | 0.7277 | 0.7479 | | [facebook/xglm-7.5B](https://huggingface.co/facebook/xglm-7.5B) | 7.5B | 0.6723 | 0.6731 | 0.6769 | 0.7119 | | **[EleutherAI/polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) (this)** | **1.3B** | **0.7196** | **0.7193** | **0.7204** | **0.7206** | | [EleutherAI/polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) | 3.8B | 0.7595 | 0.7608 | 0.7638 | 0.7788 | | [EleutherAI/polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 5.8B | 0.7745 | 0.7676 | 0.7775 | 0.7887 | | [EleutherAI/polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) | 12.8B | 0.7937 | 0.8108 | 0.8037 | 0.8369 | <img src="https://github.com/EleutherAI/polyglot/assets/19511788/d5b49364-aed5-4467-bae2-5a322c8e2ceb" width="800px"> ### HellaSwag (F1) | Model | params | 0-shot | 5-shot | 10-shot | 50-shot | |----------------------------------------------------------------------------------------------|--------|--------|--------|---------|---------| | [skt/ko-gpt-trinity-1.2B-v0.5](https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5) | 1.2B | 0.5243 | 0.5272 | 0.5166 | 0.5352 | | [kakaobrain/kogpt](https://huggingface.co/kakaobrain/kogpt) | 6.0B | 0.5590 | 0.5833 | 0.5828 | 0.5907 | | [facebook/xglm-7.5B](https://huggingface.co/facebook/xglm-7.5B) | 7.5B | 0.5665 | 0.5689 | 0.5565 | 0.5622 | | **[EleutherAI/polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) (this)** | **1.3B** | **0.5247** | **0.5260** | **0.5278** | **0.5427** | | [EleutherAI/polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) | 3.8B | 0.5707 | 0.5830 | 0.5670 | 0.5787 | | 
[EleutherAI/polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 5.8B | 0.5976 | 0.5998 | 0.5979 | 0.6208 | | [EleutherAI/polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) | 12.8B | 0.5954 | 0.6306 | 0.6098 | 0.6118 | <img src="https://github.com/EleutherAI/polyglot/assets/19511788/5acb60ac-161a-4ab3-a296-db4442e08b7f" width="800px"> ### BoolQ (F1) | Model | params | 0-shot | 5-shot | 10-shot | 50-shot | |----------------------------------------------------------------------------------------------|--------|--------|--------|---------|---------| | [skt/ko-gpt-trinity-1.2B-v0.5](https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5) | 1.2B | 0.3356 | 0.4014 | 0.3640 | 0.3560 | | [kakaobrain/kogpt](https://huggingface.co/kakaobrain/kogpt) | 6.0B | 0.4514 | 0.5981 | 0.5499 | 0.5202 | | [facebook/xglm-7.5B](https://huggingface.co/facebook/xglm-7.5B) | 7.5B | 0.4464 | 0.3324 | 0.3324 | 0.3324 | | **[EleutherAI/polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) (this)** | **1.3B** | **0.3552** | **0.4751** | **0.4109** | **0.4038** | | [EleutherAI/polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) | 3.8B | 0.4320 | 0.5263 | 0.4930 | 0.4038 | | [EleutherAI/polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 5.8B | 0.4356 | 0.5698 | 0.5187 | 0.5236 | | [EleutherAI/polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) | 12.8B | 0.4818 | 0.6041 | 0.6289 | 0.6448 | <img src="https://github.com/EleutherAI/polyglot/assets/19511788/b74c23c0-01f3-4b68-9e10-a48e9aa052ab" width="800px"> ### SentiNeg (F1) | Model | params | 0-shot | 5-shot | 10-shot | 50-shot | |----------------------------------------------------------------------------------------------|--------|--------|--------|---------|---------| | [skt/ko-gpt-trinity-1.2B-v0.5](https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5) | 1.2B | 0.6065 | 0.6878 | 0.7280 | 0.8413 | | [kakaobrain/kogpt](https://huggingface.co/kakaobrain/kogpt) | 6.0B | 0.3747 | 0.8942 | 0.9294 | 0.9698 | | [facebook/xglm-7.5B](https://huggingface.co/facebook/xglm-7.5B) | 7.5B | 0.3578 | 0.4471 | 0.3964 | 0.5271 | | **[EleutherAI/polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) (this)** | **1.3B** | **0.6790** | **0.6257** | **0.5514** | **0.7851** | | [EleutherAI/polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) | 3.8B | 0.4858 | 0.7950 | 0.7320 | 0.7851 | | [EleutherAI/polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 5.8B | 0.3394 | 0.8841 | 0.8808 | 0.9521 | | [EleutherAI/polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) | 12.8B | 0.9117 | 0.9015 | 0.9345 | 0.9723 | <img src="https://github.com/EleutherAI/polyglot/assets/19511788/95b56b19-d349-4b70-9ff9-94a5560f89ee" width="800px"> ### WiC (F1) | Model | params | 0-shot | 5-shot | 10-shot | 50-shot | |----------------------------------------------------------------------------------------------|--------|--------|--------|---------|---------| | [skt/ko-gpt-trinity-1.2B-v0.5](https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5) | 1.2B | 0.3290 | 0.4313 | 0.4001 | 0.3621 | | [kakaobrain/kogpt](https://huggingface.co/kakaobrain/kogpt) | 6.0B | 0.3526 | 0.4775 | 0.4358 | 0.4061 | | [facebook/xglm-7.5B](https://huggingface.co/facebook/xglm-7.5B) | 7.5B | 0.3280 | 0.4903 | 0.4945 | 0.3656 | | **[EleutherAI/polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) (this)** | **1.3B** | **0.3297** | **0.4850** | **0.465** | 
**0.3290** | | [EleutherAI/polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) | 3.8B | 0.3390 | 0.4944 | 0.4203 | 0.3835 | | [EleutherAI/polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 5.8B | 0.3913 | 0.4688 | 0.4189 | 0.3910 | | [EleutherAI/polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) | 12.8B | 0.3985 | 0.3683 | 0.3307 | 0.3273 | <img src="https://github.com/EleutherAI/polyglot/assets/19511788/4de4a4c3-d7ac-4e04-8b0c-0d533fe88294" width="800px"> ## Limitations and Biases Polyglot-Ko has been trained to optimize next token prediction. Language models such as this are often used for a wide variety of tasks and it is important to be aware of possible unexpected outcomes. For instance, Polyglot-Ko will not always return the most factual or accurate response but the most statistically likely one. In addition, Polyglot may produce socially unacceptable or offensive content. We recommend having a human curator or other filtering mechanism to censor sensitive content. ## Citation and Related Information ### BibTeX entry If you find our work useful, please consider citing: ```bibtex @misc{ko2023technical, title={A Technical Report for Polyglot-Ko: Open-Source Large-Scale Korean Language Models}, author={Hyunwoong Ko and Kichang Yang and Minho Ryu and Taekyoon Choi and Seungmu Yang and jiwung Hyun and Sungho Park}, year={2023}, eprint={2306.02254}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Licensing All our models are licensed under the terms of the Apache License 2.0. ``` Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ``` ### Acknowledgement This project was made possible thanks to the computing resources from [Stability.ai](https://stability.ai), and thanks to [TUNiB](https://tunib.ai) for providing a large-scale Korean dataset for this work.
google/deplot
google
"2023-09-06T19:53:17Z"
39,915
211
transformers
[ "transformers", "pytorch", "safetensors", "pix2struct", "text2text-generation", "visual-question-answering", "en", "fr", "ro", "de", "multilingual", "arxiv:2212.10505", "license:apache-2.0", "autotrain_compatible", "region:us" ]
visual-question-answering
"2023-04-03T11:05:38Z"
--- language: - en - fr - ro - de - multilingual inference: false pipeline_tag: visual-question-answering license: apache-2.0 --- # Model card for DePlot <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/deplot_architecture.png" alt="drawing" width="600"/> # Table of Contents 0. [TL;DR](#TL;DR) 1. [Using the model](#using-the-model) 2. [Contribution](#contribution) 3. [Citation](#citation) # TL;DR The abstract of the paper states that: > Visual language such as charts and plots is ubiquitous in the human world. Comprehending plots and charts requires strong reasoning skills. Prior state-of-the-art (SOTA) models require at least tens of thousands of training examples and their reasoning capabilities are still much limited, especially on complex human-written queries. This paper presents the first one-shot solution to visual language reasoning. We decompose the challenge of visual language reasoning into two steps: (1) plot-to-text translation, and (2) reasoning over the translated text. The key in this method is a modality conversion module, named as DePlot, which translates the image of a plot or chart to a linearized table. The output of DePlot can then be directly used to prompt a pretrained large language model (LLM), exploiting the few-shot reasoning capabilities of LLMs. To obtain DePlot, we standardize the plot-to-table task by establishing unified task formats and metrics, and train DePlot end-to-end on this task. DePlot can then be used off-the-shelf together with LLMs in a plug-and-play fashion. Compared with a SOTA model finetuned on more than >28k data points, DePlot+LLM with just one-shot prompting achieves a 24.0% improvement over finetuned SOTA on human-written queries from the task of chart QA. 
# Using the model

You can run a prediction by querying an input image together with a question as follows:

```python
from transformers import Pix2StructProcessor, Pix2StructForConditionalGeneration
import requests
from PIL import Image

processor = Pix2StructProcessor.from_pretrained('google/deplot')
model = Pix2StructForConditionalGeneration.from_pretrained('google/deplot')

url = "https://raw.githubusercontent.com/vis-nlp/ChartQA/main/ChartQA%20Dataset/val/png/5090.png"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, text="Generate underlying data table of the figure below:", return_tensors="pt")
predictions = model.generate(**inputs, max_new_tokens=512)
print(processor.decode(predictions[0], skip_special_tokens=True))
```

# Converting from T5x to huggingface

You can use the [`convert_pix2struct_checkpoint_to_pytorch.py`](https://github.com/huggingface/transformers/blob/main/src/transformers/models/pix2struct/convert_pix2struct_original_pytorch_to_hf.py) script as follows:

```bash
python convert_pix2struct_checkpoint_to_pytorch.py --t5x_checkpoint_path PATH_TO_T5X_CHECKPOINTS --pytorch_dump_path PATH_TO_SAVE --is_vqa
```

If you are converting a large model, run:

```bash
python convert_pix2struct_checkpoint_to_pytorch.py --t5x_checkpoint_path PATH_TO_T5X_CHECKPOINTS --pytorch_dump_path PATH_TO_SAVE --use-large --is_vqa
```

Once saved, you can push your converted model with the following snippet:

```python
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

model = Pix2StructForConditionalGeneration.from_pretrained(PATH_TO_SAVE)
processor = Pix2StructProcessor.from_pretrained(PATH_TO_SAVE)
model.push_to_hub("USERNAME/MODEL_NAME")
processor.push_to_hub("USERNAME/MODEL_NAME")
```

# Contribution

This model was originally contributed by Fangyu Liu, Julian Martin Eisenschlos et al. and added to the Hugging Face ecosystem by [Younes Belkada](https://huggingface.co/ybelkada).

# Citation

If you want to cite this work, please consider citing the original paper:

```
@misc{liu2022deplot,
  title={DePlot: One-shot visual language reasoning by plot-to-table translation},
  author={Liu, Fangyu and Eisenschlos, Julian Martin and Piccinno, Francesco and Krichene, Syrine and Pang, Chenxi and Lee, Kenton and Joshi, Mandar and Chen, Wenhu and Collier, Nigel and Altun, Yasemin},
  year={2022},
  eprint={2212.10505},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
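As a complement to the usage example above, the TL;DR describes a second step in which the linearized table is handed to an LLM for reasoning. The following is a hedged, self-contained sketch of that step; the table string is a made-up example (not necessarily DePlot's exact output format) and `gpt2` is only a stand-in for a stronger instruction-tuned LLM.

```python
# Illustrative sketch of the plug-and-play "table + LLM" step from the TL;DR.
# The table string and the stand-in model are assumptions for demonstration.
from transformers import pipeline

linearized_table = "Year | Sales\n2020 | 40\n2021 | 55\n2022 | 70"

llm = pipeline("text-generation", model="gpt2")  # replace with a capable instruction-tuned LLM
prompt = (
    "Here is a table extracted from a chart:\n"
    f"{linearized_table}\n\n"
    "Question: In which year were sales highest? Answer:"
)
print(llm(prompt, max_new_tokens=20)[0]["generated_text"])
```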
alpindale/WizardLM-2-8x22B
alpindale
"2024-04-16T12:56:06Z"
39,907
351
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "conversational", "arxiv:2304.12244", "arxiv:2306.08568", "arxiv:2308.09583", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-16T02:36:59Z"
--- license: apache-2.0 --- <p style="font-size:20px;" align="center"> 🏠 <a href="https://wizardlm.github.io/WizardLM2" target="_blank">WizardLM-2 Release Blog</a> </p> <p align="center"> 🤗 <a href="https://huggingface.co/collections/microsoft/wizardlm-2-661d403f71e6c8257dbd598a" target="_blank">HF Repo</a> •🐱 <a href="https://github.com/victorsungo/WizardLM/tree/main/WizardLM-2" target="_blank">Github Repo</a> • 🐦 <a href="https://twitter.com/WizardLM_AI" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> • 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> • 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> <br> </p> <p align="center"> 👋 Join our <a href="https://discord.gg/VZjjHtWrKs" target="_blank">Discord</a> </p> ## See [here](https://huggingface.co/lucyknada/microsoft_WizardLM-2-7B) for the WizardLM-2-7B re-upload. ## News 🔥🔥🔥 [2024/04/15] We introduce and opensource WizardLM-2, our next generation state-of-the-art large language models, which have improved performance on complex chat, multilingual, reasoning and agent. New family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B. - WizardLM-2 8x22B is our most advanced model, demonstrates highly competitive performance compared to those leading proprietary works and consistently outperforms all the existing state-of-the-art opensource models. - WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice in the same size. - WizardLM-2 7B is the fastest and achieves comparable performance with existing 10x larger opensource leading models. For more details of WizardLM-2 please read our [release blog post](https://web.archive.org/web/20240415221214/https://wizardlm.github.io/WizardLM2/) and upcoming paper. ## Model Details * **Model name**: WizardLM-2 8x22B * **Developed by**: WizardLM@Microsoft AI * **Model type**: Mixture of Experts (MoE) * **Base model**: [mistral-community/Mixtral-8x22B-v0.1](https://huggingface.co/mistral-community/Mixtral-8x22B-v0.1) * **Parameters**: 141B * **Language(s)**: Multilingual * **Blog**: [Introducing WizardLM-2](https://web.archive.org/web/20240415221214/https://wizardlm.github.io/WizardLM2/) * **Repository**: [https://github.com/nlpxucan/WizardLM](https://github.com/nlpxucan/WizardLM) * **Paper**: WizardLM-2 (Upcoming) * **License**: Apache2.0 ## Model Capacities **MT-Bench** We also adopt the automatic MT-Bench evaluation framework based on GPT-4 proposed by lmsys to assess the performance of models. The WizardLM-2 8x22B even demonstrates highly competitive performance compared to the most advanced proprietary models. Meanwhile, WizardLM-2 7B and WizardLM-2 70B are all the top-performing models among the other leading baselines at 7B to 70B model scales. <p align="center" width="100%"> <a ><img src="https://web.archive.org/web/20240415175608im_/https://wizardlm.github.io/WizardLM2/static/images/mtbench.png" alt="MTBench" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a> </p> **Human Preferences Evaluation** We carefully collected a complex and challenging set consisting of real-world instructions, which includes main requirements of humanity, such as writing, coding, math, reasoning, agent, and multilingual. We report the win:loss rate without tie: - WizardLM-2 8x22B is just slightly falling behind GPT-4-1106-preview, and significantly stronger than Command R Plus and GPT4-0314. 
- WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat.
- WizardLM-2 7B is comparable with Qwen1.5-32B-Chat, and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta.

<p align="center" width="100%">
<a ><img src="https://web.archive.org/web/20240415163303im_/https://wizardlm.github.io/WizardLM2/static/images/winall.png" alt="Win" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>

## Method Overview

We built a **fully AI powered synthetic training system** to train WizardLM-2 models; please refer to our [blog](https://web.archive.org/web/20240415221214/https://wizardlm.github.io/WizardLM2/) for more details of this system.

<p align="center" width="100%">
<a ><img src="https://web.archive.org/web/20240415163303im_/https://wizardlm.github.io/WizardLM2/static/images/exp_1.png" alt="Method" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>

## Usage

❗<b>Note for model system prompts usage:</b>

<b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports **multi-turn** conversation. The prompt should be as follows:

```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: Hi ASSISTANT: Hello.</s>
USER: Who are you? ASSISTANT: I am WizardLM.</s>......
```

<b>Inference WizardLM-2 Demo Script</b>

We provide a WizardLM-2 inference demo [code](https://github.com/nlpxucan/WizardLM/tree/main/demo) on our GitHub.
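Below is an unofficial, hedged sketch of building the Vicuna-style multi-turn prompt shown above and generating with `transformers`; it is not the official demo script, and running the full 8x22B MoE requires multiple high-memory GPUs (adjust `device_map`/`torch_dtype` for your hardware).

```python
# Unofficial sketch: construct the Vicuna-style prompt described above and generate.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "alpindale/WizardLM-2-8x22B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

system = ("A chat between a curious user and an artificial intelligence assistant. "
          "The assistant gives helpful, detailed, and polite answers to the user's questions.")
history = [("Hi", "Hello.")]        # (user, assistant) turns so far
user_msg = "Who are you?"

prompt = system + " "
for user, assistant in history:
    prompt += f"USER: {user} ASSISTANT: {assistant}</s>"
prompt += f"USER: {user_msg} ASSISTANT:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```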
Sandrro/text_to_function_v2
Sandrro
"2023-07-03T11:50:51Z"
39,900
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2023-07-03T10:31:44Z"
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: text_to_function_v2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # text_to_function_v2 This model is a fine-tuned version of [cointegrated/rubert-tiny2](https://huggingface.co/cointegrated/rubert-tiny2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.0580 - F1: 0.7937 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 0.9053 | 1.0 | 2925 | 0.8585 | 0.7410 | | 0.6403 | 2.0 | 5850 | 0.8756 | 0.7693 | | 0.4261 | 3.0 | 8775 | 0.9378 | 0.7872 | | 0.3379 | 4.0 | 11700 | 1.0294 | 0.7925 | | 0.2362 | 5.0 | 14625 | 1.0580 | 0.7937 | ### Framework versions - Transformers 4.27.1 - Pytorch 2.1.0.dev20230414+cu117 - Datasets 2.9.0 - Tokenizers 0.13.3
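The card above does not include an inference snippet; the following is a minimal, hedged sketch of querying the fine-tuned classifier through the `transformers` pipeline. The example sentence is illustrative, and the meaning of the returned label names is not documented in this card.

```python
# Minimal usage sketch (not from the original card). The returned label names
# come from the model's config and are not documented here.
from transformers import pipeline

classifier = pipeline("text-classification", model="Sandrro/text_to_function_v2")
# Example input ("a kindergarten and a clinic are planned in the neighbourhood" in Russian,
# since the base model is a Russian BERT); replace with your own text.
print(classifier("В квартале планируется построить детский сад и поликлинику"))
```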
ml6team/keyphrase-extraction-kbir-inspec
ml6team
"2023-05-06T08:46:52Z"
39,898
117
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "keyphrase-extraction", "en", "dataset:midas/inspec", "arxiv:2112.08547", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2022-03-29T13:14:21Z"
--- language: en license: mit tags: - keyphrase-extraction datasets: - midas/inspec metrics: - seqeval widget: - text: "Keyphrase extraction is a technique in text analysis where you extract the important keyphrases from a document. Thanks to these keyphrases humans can understand the content of a text very quickly and easily without reading it completely. Keyphrase extraction was first done primarily by human annotators, who read the text in detail and then wrote down the most important keyphrases. The disadvantage is that if you work with a lot of documents, this process can take a lot of time. Here is where Artificial Intelligence comes in. Currently, classical machine learning methods, that use statistical and linguistic features, are widely used for the extraction process. Now with deep learning, it is possible to capture the semantic meaning of a text even better than these classical methods. Classical methods look at the frequency, occurrence and order of words in the text, whereas these neural approaches can capture long-term semantic dependencies and context of words in a text." example_title: "Example 1" - text: "In this work, we explore how to learn task specific language models aimed towards learning rich representation of keyphrases from text documents. We experiment with different masking strategies for pre-training transformer language models (LMs) in discriminative as well as generative settings. In the discriminative setting, we introduce a new pre-training objective - Keyphrase Boundary Infilling with Replacement (KBIR), showing large gains in performance (up to 9.26 points in F1) over SOTA, when LM pre-trained using KBIR is fine-tuned for the task of keyphrase extraction. In the generative setting, we introduce a new pre-training setup for BART - KeyBART, that reproduces the keyphrases related to the input text in the CatSeq format, instead of the denoised original input. This also led to gains in performance (up to 4.33 points inF1@M) over SOTA for keyphrase generation. Additionally, we also fine-tune the pre-trained language models on named entity recognition(NER), question answering (QA), relation extraction (RE), abstractive summarization and achieve comparable performance with that of the SOTA, showing that learning rich representation of keyphrases is indeed beneficial for many other fundamental NLP tasks." example_title: "Example 2" model-index: - name: DeDeckerThomas/keyphrase-extraction-kbir-inspec results: - task: type: keyphrase-extraction name: Keyphrase Extraction dataset: type: midas/inspec name: inspec metrics: - type: F1 (Seqeval) value: 0.588 name: F1 (Seqeval) - type: F1@M value: 0.564 name: F1@M --- # 🔑 Keyphrase Extraction Model: KBIR-inspec Keyphrase extraction is a technique in text analysis where you extract the important keyphrases from a document. Thanks to these keyphrases humans can understand the content of a text very quickly and easily without reading it completely. Keyphrase extraction was first done primarily by human annotators, who read the text in detail and then wrote down the most important keyphrases. The disadvantage is that if you work with a lot of documents, this process can take a lot of time ⏳. Here is where Artificial Intelligence 🤖 comes in. Currently, classical machine learning methods, that use statistical and linguistic features, are widely used for the extraction process. Now with deep learning, it is possible to capture the semantic meaning of a text even better than these classical methods. 
Classical methods look at the frequency, occurrence and order of words in the text, whereas these neural approaches can capture long-term semantic dependencies and context of words in a text. ## 📓 Model Description This model uses [KBIR](https://huggingface.co/bloomberg/KBIR) as its base model and fine-tunes it on the [Inspec dataset](https://huggingface.co/datasets/midas/inspec). KBIR or Keyphrase Boundary Infilling with Replacement is a pre-trained model which utilizes a multi-task learning setup for optimizing a combined loss of Masked Language Modeling (MLM), Keyphrase Boundary Infilling (KBI) and Keyphrase Replacement Classification (KRC). You can find more information about the architecture in this [paper](https://arxiv.org/abs/2112.08547). Keyphrase extraction models are transformer models fine-tuned as a token classification problem where each word in the document is classified as being part of a keyphrase or not. | Label | Description | | ----- | ------------------------------- | | B-KEY | At the beginning of a keyphrase | | I-KEY | Inside a keyphrase | | O | Outside a keyphrase | Kulkarni, Mayank, Debanjan Mahata, Ravneet Arora, and Rajarshi Bhowmik. "Learning Rich Representation of Keyphrases from Text." arXiv preprint arXiv:2112.08547 (2021). Sahrawat, Dhruva, Debanjan Mahata, Haimin Zhang, Mayank Kulkarni, Agniv Sharma, Rakesh Gosangi, Amanda Stent, Yaman Kumar, Rajiv Ratn Shah, and Roger Zimmermann. "Keyphrase extraction as sequence labeling using contextualized embeddings." In European Conference on Information Retrieval, pp. 328-335. Springer, Cham, 2020. ## ✋ Intended Uses & Limitations ### 🛑 Limitations * This keyphrase extraction model is very domain-specific and will perform very well on abstracts of scientific papers. It's not recommended to use this model for other domains, but you are free to test it out. * Only works for English documents. ### ❓ How To Use ```python from transformers import ( TokenClassificationPipeline, AutoModelForTokenClassification, AutoTokenizer, ) from transformers.pipelines import AggregationStrategy import numpy as np # Define keyphrase extraction pipeline class KeyphraseExtractionPipeline(TokenClassificationPipeline): def __init__(self, model, *args, **kwargs): super().__init__( model=AutoModelForTokenClassification.from_pretrained(model), tokenizer=AutoTokenizer.from_pretrained(model), *args, **kwargs ) def postprocess(self, all_outputs): results = super().postprocess( all_outputs=all_outputs, aggregation_strategy=AggregationStrategy.SIMPLE, ) return np.unique([result.get("word").strip() for result in results]) ``` ```python # Load pipeline model_name = "ml6team/keyphrase-extraction-kbir-inspec" extractor = KeyphraseExtractionPipeline(model=model_name) ``` ```python # Inference text = """ Keyphrase extraction is a technique in text analysis where you extract the important keyphrases from a document. Thanks to these keyphrases humans can understand the content of a text very quickly and easily without reading it completely. Keyphrase extraction was first done primarily by human annotators, who read the text in detail and then wrote down the most important keyphrases. The disadvantage is that if you work with a lot of documents, this process can take a lot of time. Here is where Artificial Intelligence comes in. Currently, classical machine learning methods, that use statistical and linguistic features, are widely used for the extraction process. 
Now with deep learning, it is possible to capture the semantic meaning of a text even better than these classical methods. Classical methods look at the frequency, occurrence and order of words in the text, whereas these neural approaches can capture long-term semantic dependencies and context of words in a text. """.replace("\n", " ") keyphrases = extractor(text) print(keyphrases) ``` ``` # Output ['Artificial Intelligence' 'Keyphrase extraction' 'deep learning' 'linguistic features' 'machine learning' 'semantic meaning' 'text analysis'] ``` ## 📚 Training Dataset [Inspec](https://huggingface.co/datasets/midas/inspec) is a keyphrase extraction/generation dataset consisting of 2000 English scientific papers from the scientific domains of Computers and Control and Information Technology published between 1998 to 2002. The keyphrases are annotated by professional indexers or editors. You can find more information in the [paper](https://dl.acm.org/doi/10.3115/1119355.1119383). ## 👷‍♂️ Training Procedure ### Training Parameters | Parameter | Value | | --------- | ------| | Learning Rate | 1e-4 | | Epochs | 50 | | Early Stopping Patience | 3 | ### Preprocessing The documents in the dataset are already preprocessed into list of words with the corresponding labels. The only thing that must be done is tokenization and the realignment of the labels so that they correspond with the right subword tokens. ```python from datasets import load_dataset from transformers import AutoTokenizer # Labels label_list = ["B", "I", "O"] lbl2idx = {"B": 0, "I": 1, "O": 2} idx2label = {0: "B", 1: "I", 2: "O"} # Tokenizer tokenizer = AutoTokenizer.from_pretrained("bloomberg/KBIR", add_prefix_space=True) max_length = 512 # Dataset parameters dataset_full_name = "midas/inspec" dataset_subset = "raw" dataset_document_column = "document" dataset_biotags_column = "doc_bio_tags" def preprocess_fuction(all_samples_per_split): tokenized_samples = tokenizer.batch_encode_plus( all_samples_per_split[dataset_document_column], padding="max_length", truncation=True, is_split_into_words=True, max_length=max_length, ) total_adjusted_labels = [] for k in range(0, len(tokenized_samples["input_ids"])): prev_wid = -1 word_ids_list = tokenized_samples.word_ids(batch_index=k) existing_label_ids = all_samples_per_split[dataset_biotags_column][k] i = -1 adjusted_label_ids = [] for wid in word_ids_list: if wid is None: adjusted_label_ids.append(lbl2idx["O"]) elif wid != prev_wid: i = i + 1 adjusted_label_ids.append(lbl2idx[existing_label_ids[i]]) prev_wid = wid else: adjusted_label_ids.append( lbl2idx[ f"{'I' if existing_label_ids[i] == 'B' else existing_label_ids[i]}" ] ) total_adjusted_labels.append(adjusted_label_ids) tokenized_samples["labels"] = total_adjusted_labels return tokenized_samples # Load dataset dataset = load_dataset(dataset_full_name, dataset_subset) # Preprocess dataset tokenized_dataset = dataset.map(preprocess_fuction, batched=True) ``` ### Postprocessing (Without Pipeline Function) If you do not use the pipeline function, you must filter out the B and I labeled tokens. Each B and I will then be merged into a keyphrase. Finally, you need to strip the keyphrases to make sure all unnecessary spaces have been removed. 
```python
# Define post_process functions
def concat_tokens_by_tag(keyphrases):
    keyphrase_tokens = []
    for id, label in keyphrases:
        if label == "B":
            keyphrase_tokens.append([id])
        elif label == "I":
            if len(keyphrase_tokens) > 0:
                keyphrase_tokens[len(keyphrase_tokens) - 1].append(id)
    return keyphrase_tokens

def extract_keyphrases(example, predictions, tokenizer, index=0):
    keyphrases_list = [
        (id, idx2label[label])
        for id, label in zip(
            np.array(example["input_ids"]).squeeze().tolist(), predictions[index]
        )
        if idx2label[label] in ["B", "I"]
    ]

    processed_keyphrases = concat_tokens_by_tag(keyphrases_list)
    extracted_kps = tokenizer.batch_decode(
        processed_keyphrases,
        skip_special_tokens=True,
        clean_up_tokenization_spaces=True,
    )
    return np.unique([kp.strip() for kp in extracted_kps])
```

## 📝 Evaluation Results

Traditional evaluation methods are the precision, recall and F1-score @k,m where k is the number that stands for the first k predicted keyphrases and m for the average amount of predicted keyphrases. The model achieves the following results on the Inspec test set:

| Dataset         | P@5  | R@5  | F1@5 | P@10 | R@10 | F1@10 | P@M  | R@M  | F1@M |
|:---------------:|:----:|:----:|:----:|:----:|:----:|:-----:|:----:|:----:|:----:|
| Inspec Test Set | 0.53 | 0.47 | 0.46 | 0.36 | 0.58 | 0.41  | 0.58 | 0.60 | 0.56 |

## 🚨 Issues

Please feel free to start discussions in the Community Tab.
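For readers who want to sanity-check the @k numbers in the Evaluation Results above, here is an illustrative sketch of how precision/recall/F1@k can be computed from a list of predicted and gold keyphrases; it is not the exact evaluation script used for the table.

```python
# Illustrative @k metric sketch (not the exact script behind the table above).
def precision_recall_f1_at_k(predicted, gold, k=5):
    predicted_k = [kp.lower() for kp in predicted[:k]]
    gold_set = {kp.lower() for kp in gold}
    matches = sum(kp in gold_set for kp in predicted_k)
    precision = matches / max(len(predicted_k), 1)
    recall = matches / max(len(gold_set), 1)
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

print(precision_recall_f1_at_k(
    ["keyphrase extraction", "deep learning", "text analysis"],
    ["keyphrase extraction", "semantic meaning", "text analysis"],
    k=5,
))  # -> (0.666..., 0.666..., 0.666...)
```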
katuni4ka/tiny-random-olmo-hf
katuni4ka
"2024-04-23T07:52:43Z"
39,886
0
transformers
[ "transformers", "safetensors", "olmo", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-04-23T07:52:14Z"
Entry not found
timm/vit_base_patch8_224.augreg2_in21k_ft_in1k
timm
"2023-05-06T00:00:01Z"
39,801
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "dataset:imagenet-21k", "arxiv:2106.10270", "arxiv:2010.11929", "license:apache-2.0", "region:us" ]
image-classification
"2022-12-22T07:22:31Z"
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-1k - imagenet-21k --- # Model card for vit_base_patch8_224.augreg2_in21k_ft_in1k A Vision Transformer (ViT) image classification model. Trained on ImageNet-21k by paper authors and (re) fine-tuned on ImageNet-1k with additional augmentation and regularization by Ross Wightman. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 86.6 - GMACs: 66.9 - Activations (M): 65.7 - Image size: 224 x 224 - **Papers:** - How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers: https://arxiv.org/abs/2106.10270 - An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2 - **Dataset:** ImageNet-1k - **Pretrain Dataset:** ImageNet-21k - **Original:** https://github.com/google-research/vision_transformer ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('vit_base_patch8_224.augreg2_in21k_ft_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'vit_base_patch8_224.augreg2_in21k_ft_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 785, 768) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). ## Citation ```bibtex @article{steiner2021augreg, title={How to train your ViT? 
Data, Augmentation, and Regularization in Vision Transformers}, author={Steiner, Andreas and Kolesnikov, Alexander and and Zhai, Xiaohua and Wightman, Ross and Uszkoreit, Jakob and Beyer, Lucas}, journal={arXiv preprint arXiv:2106.10270}, year={2021} } ``` ```bibtex @article{dosovitskiy2020vit, title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale}, author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil}, journal={ICLR}, year={2021} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition
ehcalabres
"2024-04-11T09:27:15Z"
39,718
154
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "audio-classification", "generated_from_trainer", "doi:10.57967/hf/2045", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
"2022-03-02T23:29:05Z"
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model_index: name: wav2vec2-lg-xlsr-en-speech-emotion-recognition --- # Speech Emotion Recognition By Fine-Tuning Wav2Vec 2.0 The model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-english](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-english) for a Speech Emotion Recognition (SER) task. The dataset used to fine-tune the original pre-trained model is the [RAVDESS dataset](https://zenodo.org/record/1188976#.YO6yI-gzaUk). This dataset provides 1440 samples of recordings from actors performing on 8 different emotions in English, which are: ```python emotions = ['angry', 'calm', 'disgust', 'fearful', 'happy', 'neutral', 'sad', 'surprised'] ``` It achieves the following results on the evaluation set: - Loss: 0.5023 - Accuracy: 0.8223 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.0752 | 0.21 | 30 | 2.0505 | 0.1359 | | 2.0119 | 0.42 | 60 | 1.9340 | 0.2474 | | 1.8073 | 0.63 | 90 | 1.5169 | 0.3902 | | 1.5418 | 0.84 | 120 | 1.2373 | 0.5610 | | 1.1432 | 1.05 | 150 | 1.1579 | 0.5610 | | 0.9645 | 1.26 | 180 | 0.9610 | 0.6167 | | 0.8811 | 1.47 | 210 | 0.8063 | 0.7178 | | 0.8756 | 1.68 | 240 | 0.7379 | 0.7352 | | 0.8208 | 1.89 | 270 | 0.6839 | 0.7596 | | 0.7118 | 2.1 | 300 | 0.6664 | 0.7735 | | 0.4261 | 2.31 | 330 | 0.6058 | 0.8014 | | 0.4394 | 2.52 | 360 | 0.5754 | 0.8223 | | 0.4581 | 2.72 | 390 | 0.4719 | 0.8467 | | 0.3967 | 2.93 | 420 | 0.5023 | 0.8223 | ## Citation ```bibtex @misc {enrique_hernández_calabrés_2024, author = { {Enrique Hernández Calabrés} }, title = { wav2vec2-lg-xlsr-en-speech-emotion-recognition (Revision 17cf17c) }, year = 2024, url = { https://huggingface.co/ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition }, doi = { 10.57967/hf/2045 }, publisher = { Hugging Face } } ``` ## Contact Any doubt, contact me on [Twitter](https://twitter.com/ehcalabres). ### Framework versions - Transformers 4.8.2 - Pytorch 1.9.0+cu102 - Datasets 1.9.0 - Tokenizers 0.10.3
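The card above does not include an inference example; the following is a minimal, hedged sketch using the `transformers` audio-classification pipeline. `speech.wav` is a placeholder path, and input audio should be resampled to 16 kHz mono to match the wav2vec 2.0 base model.

```python
# Minimal inference sketch (not from the original card). "speech.wav" is a placeholder;
# decoding a local file path requires ffmpeg, or pass a NumPy array sampled at 16 kHz.
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition",
)
print(classifier("speech.wav", top_k=3))  # e.g. [{'label': 'happy', 'score': ...}, ...]
```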
law-ai/InLegalBERT
law-ai
"2023-05-11T14:06:48Z"
39,713
40
transformers
[ "transformers", "pytorch", "bert", "pretraining", "legal", "fill-mask", "en", "arxiv:2209.06049", "arxiv:2112.14731", "arxiv:1911.05405", "arxiv:2105.13562", "license:mit", "endpoints_compatible", "region:us" ]
fill-mask
"2022-09-11T12:30:25Z"
--- language: en pipeline_tag: fill-mask tags: - legal license: mit --- ### InLegalBERT Model and tokenizer files for the InLegalBERT model from the paper [Pre-training Transformers on Indian Legal Text](https://arxiv.org/abs/2209.06049). ### Training Data For building the pre-training corpus of Indian legal text, we collected a large corpus of case documents from the Indian Supreme Court and many High Courts of India. The court cases in our dataset range from 1950 to 2019, and belong to all legal domains, such as Civil, Criminal, Constitutional, and so on. In total, our dataset contains around 5.4 million Indian legal documents (all in the English language). The raw text corpus size is around 27 GB. ### Training Setup This model is initialized with the [LEGAL-BERT-SC model](https://huggingface.co/nlpaueb/legal-bert-base-uncased) from the paper [LEGAL-BERT: The Muppets straight out of Law School](https://aclanthology.org/2020.findings-emnlp.261/). In our work, we refer to this model as LegalBERT, and our re-trained model as InLegalBERT. We further train this model on our data for 300K steps on the Masked Language Modeling (MLM) and Next Sentence Prediction (NSP) tasks. ### Model Overview This model uses the same tokenizer as [LegalBERT](https://huggingface.co/nlpaueb/legal-bert-base-uncased). This model has the same configuration as the [bert-base-uncased model](https://huggingface.co/bert-base-uncased): 12 hidden layers, 768 hidden dimensionality, 12 attention heads, ~110M parameters. ### Usage Using the model to get embeddings/representations for a piece of text ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("law-ai/InLegalBERT") text = "Replace this string with yours" encoded_input = tokenizer(text, return_tensors="pt") model = AutoModel.from_pretrained("law-ai/InLegalBERT") output = model(**encoded_input) last_hidden_state = output.last_hidden_state ``` ### Fine-tuning Results We have fine-tuned all pre-trained models on 3 legal tasks with Indian datasets: * Legal Statute Identification ([ILSI Dataset](https://arxiv.org/abs/2112.14731))[Multi-label Text Classification]: Identifying relevant statutes (law articles) based on the facts of a court case * Semantic Segmentation ([ISS Dataset](https://arxiv.org/abs/1911.05405))[Sentence Tagging]: Segmenting the document into 7 functional parts (semantic segments) such as Facts, Arguments, etc. * Court Judgment Prediction ([ILDC Dataset](https://arxiv.org/abs/2105.13562))[Binary Text Classification]: Predicting whether the claims/petitions of a court case will be accepted/rejected InLegalBERT beats LegalBERT as well as all other baselines/variants we have used, across all three tasks. For details, see our [paper](https://arxiv.org/abs/2209.06049). ### Citation ``` @inproceedings{paul-2022-pretraining, url = {https://arxiv.org/abs/2209.06049}, author = {Paul, Shounak and Mandal, Arpan and Goyal, Pawan and Ghosh, Saptarshi}, title = {Pre-trained Language Models for the Legal Domain: A Case Study on Indian Law}, booktitle = {Proceedings of 19th International Conference on Artificial Intelligence and Law - ICAIL 2023} year = {2023}, } ``` ### About Us We are a group of researchers from the Department of Computer Science and Technology, Indian Insitute of Technology, Kharagpur. Our research interests are primarily ML and NLP applications for the legal domain, with a special focus on the challenges and oppurtunites for the Indian legal scenario. 
We have worked on, and are currently working on, several legal tasks, such as:
* named entity recognition and summarization of legal documents
* semantic segmentation of legal documents
* legal statute identification from facts and court judgment prediction
* legal document matching

You can find our publicly available codes and datasets [here](https://github.com/Law-AI).
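Complementing the Usage section above, the checkpoint can also be queried through the standard `fill-mask` pipeline, since it is published for the fill-mask task. The sketch below is an illustrative addition rather than part of the authors' documentation: the masked sentence is made up, and the predictions you see will depend on the model.

```python
from transformers import pipeline

# Fill-mask sketch (illustrative): the input sentence is a made-up example,
# not one taken from the InLegalBERT paper or its datasets.
fill_mask = pipeline("fill-mask", model="law-ai/InLegalBERT")

predictions = fill_mask("The accused filed an [MASK] against the order of the High Court.")
for pred in predictions:
    print(f"{pred['token_str']}\t{pred['score']:.3f}")
```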
jphme/Llama-2-13b-chat-german
jphme
"2023-10-06T12:52:01Z"
39,661
60
transformers
[ "transformers", "pytorch", "llama", "text-generation", "llama-2", "german", "deutsch", "de", "en", "dataset:Christoph911/German-legal-SQuAD", "dataset:philschmid/test_german_squad", "arxiv:2307.09288", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-07-21T23:01:14Z"
---
language:
- de
- en
pipeline_tag: text-generation
inference: false
tags:
- pytorch
- llama
- llama-2
- german
- deutsch
datasets:
- Christoph911/German-legal-SQuAD
- philschmid/test_german_squad
---

**Please check out EM German, our new German-speaking LLM model family with significantly improved capabilities. EM German is available in Llama2 7b, 13b and 70b and Mistral- and LeoLM-based versions! All information and download links can be found [here](https://github.com/jphme/EM_German/blob/main/README.md).**

# Llama 2 13b Chat German

Llama-2-13b-chat-german is a variant of [Meta](https://huggingface.co/meta-llama)'s [Llama 2 13b Chat](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf) model, finetuned on an additional dataset in the German language.

This model is optimized for German text, providing proficiency in understanding, generating, and interacting with German language content. However, the model is not yet fully optimized for the German language, as it has been trained on a small, experimental dataset and has limited capabilities due to the small parameter count.

Some of the finetuning data is also targeted towards factual retrieval (answering only from information given in the context and refusing to hallucinate), and the model should perform better on these tasks than the original Llama 2 Chat. I am working on improving the model's capabilities and will update the model if there is sufficient interest.

A quantized GGML version for use with llama.cpp, kobold.cpp and other GUIs for CPU inference can be found [here](https://huggingface.co/jphme/Llama-2-13b-chat-german-GGML).

Please note the license of the base model, which is contained in the repo under LICENSE.TXT, and see the original model card below for more information.

## Data

* Proprietary German Conversation Dataset
* German SQuAD and German legal SQuAD data (see datasets), augmented with "wrong" contexts, to improve factual RAG

## Prompt Template

Llama2 Chat uses a new prompt format (see the generation sketch at the end of this card):

```
<s>[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. Please answer in the same language as the user.
<</SYS>>

This is a test question[/INST] This is an answer </s><s>
```

See also the original implementation [here](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L213). There is also a (so far undocumented?) method right in transformers to generate the correct tokenization: [LlamaTokenizer._build_conversation_input_ids](https://github.com/huggingface/transformers/blob/b257c46a075419c09e5ce5c5aa39bc346ecdb9a5/src/transformers/models/llama/tokenization_llama.py#L334).

## Results

I only evaluated the output on a small, handcrafted sample of test prompts in German, confirming that the model's ability to understand and generate German text is above that of the base model in many situations. More detailed evaluations and examples will be posted soon.

## Examples

*These examples were not in the training set.
They have been generated with a quantized version (Q4_0 for the German finetuned version, Q4_1 for the original Llama2 Chat version), so FP16 results should be better.* ### Example 1 (General Knowledge) **Question**: > Wie heißt die Hauptstadt von Nordrhein-Westfalen? **Answer Original Llama 2 13b Chat**: > Hello! As your helpful and respectful assistant, I'm here to help you with any questions you may have. The main city of North Rhine-Westphalia is called Düsseldorf. Is there anything else you would like to know about this topic or the region in general? Please let me know if there's anything else I can assist you with! **Answer Llama 2 13b Chat German**: > Die Hauptstadt von Nordrhein-Westfalen ist Düsseldorf. ### Example 2 (RAG/Factual Retrieval): **Question**: > Warum ist die Mongolische Volksrepublik nicht dem Warschauer Pakt beigetreten? > > Kontext zur Beantwortung der Frage: Warschauer_Pakt > > === Bilaterale Verträge über Freundschaft, Zusammenarbeit und gegenseitigen Beistand === Mit den bilateralen Bündnisverträgen wurde die Verpflichtung zur gegenseitigen Hilfeleistung unterzeichnet, um alle gewaltsamen militärischen Handlungen, die sich gegen die territoriale Integrität und Souveränität einer Vertragspartei richteten, zu verhindern. Den ersten dieser Freundschaftsverträge hatte die Sowjetunion schon während des Krieges am 12. Dezember 1943 mit der tschechoslowakischen Exilregierung abgeschlossen, der am 27. November 1963 für die Tschechoslowakei verlängert wurde. Von 1943 bis 1949 gab es bereits 23 bilaterale Verträge über Freundschaft, Zusammenarbeit und gegenseitigen Beistand (VFZ) der ersten Generation in Osteuropa. Neben diesem Vertragssystem bestanden ab 1956/57 auch weitere Abkommen: * Truppenstationierungsabkommen der Sowjetunion mit der DDR (12. März 1957), * Truppenstationierungsabkommen der Sowjetunion mit der Volksrepublik Polen (17. Dezember 1956), * Truppenstationierungsabkommen der Sowjetunion mit Rumänien (15. April 1957) und * Truppenstationierungsabkommen der Sowjetunion mit Ungarn (27. Mai 1957) jeweils mit einer Laufzeit von 20 Jahren. Aber bereits der Vertrag über die Beziehungen zwischen der DDR und der Sowjetunion vom 20. September 1950 zur Grenzregelung enthielt eine Vereinbarung zur Stationierung von sowjetischen Truppen auf dem Gebiet der DDR. Im Juli 1963 bat auch die Mongolische Volksrepublik, dem Warschauer Pakt nach Artikel 9 des Warschauer Vertrages beizutreten. Dazu hätte ein Sonderprotokoll gefasst werden müssen, da der Vertragstext nach Artikel 4 sich nur auf Europa bezog. Aufgrund des aufkommenden Chinesisch-sowjetischen Zerwürfnisses gab es keinen Beitritt, sondern es blieb bei einem Beobachterstatus. Stattdessen wurden ab 1966 sowjetische Truppenstationierungen vereinbart. Nachdem die Sowjetunion mit der DDR am 12. Juni 1964 einen Vertrag über Freundschaft, Zusammenarbeit und gegenseitigen Beistand schloss, der die volle Einbeziehung der DDR in das bilaterale Bündnissystem vorsah, wurden in der Zeit von 1964 bis 1972 insgesamt 20 Bündnisverträge der zweiten Generation unterzeichnet: Die Hegemonie der Sowjetunion wurde durch die bilateralen Bündnisverträge noch verstärkt, da diese eine unmittelbare Beistandsverpflichtung bei einem bewaffneten Angriff vorsahen, die auch in den meisten Verträgen sich nicht nur auf Europa beschränkte. 
In der dritten Generation der Verträge über Freundschaft, Zusammenarbeit und gegenseitigen Beistand erfolgte nach der Unterzeichnung der Schlussakte von Helsinki der Konferenz über Sicherheit und Zusammenarbeit in Europa (KSZE) im August 1975 die Einbeziehung der Breschnew-Doktrin und der ökonomischen Integration. Zudem wurde die Laufzeit der Verträge auf 25 Jahre festgelegt. Die DDR unterzeichnete am 7. Oktober 1975 mit der Sowjetunion diesen neuen Vertrag sowie später weitere mit Ungarn (24. März 1977), Polen (29. Mai 1977), Bulgarien (14. September 1977) und der Tschechoslowakei (3. Oktober 1977). > > Bitte beantworte die Frage mit einem kurzen, wörtlichen Zitat aus dem Kontext. Sollte der Kontext keine Antwort auf die Frage enthalten, bitte antworte dass die Frage mit den gegebenen Informationen nicht zu beantworten ist. **Answer Original Llama 2 13b Chat**: > Die Mongolische Volksrepublik hat nicht dem Warschauer Pakt beigetreten, weil es keinen Beitritt gab, sondern nur einen Beobachterstatus. (Quelle: Der Kontext) > **Answer Llama 2 13b Chat German**: > Aufgrund des aufkommenden Chinesisch-sowjetischen Zerwürfnisses gab es keinen Beitritt, sondern es blieb bei einem Beobachterstatus ### Example 3 (RAG / Factual Retrieval negative): **Question**: > Nach was benannte Spielberg seine Produktionsfirma Anfang der 1980er? > > Kontext zur Beantwortung der Frage: Webbrowser > > == Marktanteile und deren Messung == Bild zeigt die lt. Statistik von StatCounter meistverwendeten Browser nach Ländern 9/2019. Die Statistik für März 2020 ist über folgenden Weblink abrufbar: Die tatsächliche Verbreitung eines Webbrowsers ist nicht zweifelsfrei feststellbar. Verschiedene Anbieter veröffentlichen Statistiken über die Verbreitung von Webbrowsern aufgrund unterschiedlicher häufig recht begrenzter Datenbasen. Da die generelle Verbreitungsrate eines Browsers von verschiedensten Faktoren beeinflusst wird, sind diese Statistiken unterschiedlich aussagekräftig und kommen zu teilweise stark unterschiedlichen, scheinbar widersprüchlichen Ergebnissen. So schwankt die Verbreitung eines Browsers je nach Themengebiet einer aufgerufenen Webseite, Herkunftsregion der aufrufenden Person und dem Zeitpunkt der Messung. Beispielsweise können Benutzer an ihrem Arbeitsplatz an die Verwendung eines vorgegebenen Webbrowsers gebunden sein, privat jedoch einen anderen Browser bevorzugen und verwenden. Auch verschiedene Ereignisse führen zu starken Schwankungen. So steigt der Marktanteil bei der Veröffentlichung einer neuen Hauptversion oder sinkt beim Bekanntwerden einer Sicherheitslücke. Insgesamt geben Browserstatistiken nur die grobe Verteilung und ggf. Trends an, da jede Statistik nur auf der Auswertung der Logdateien ausgewählter Websites beruht und es keine unbestritten repräsentative Auswahl dafür zu verwendender Websites gibt. Die Messungen werden in der Regel anhand des sogenannten ''User-Agent''-Headers durchgeführt, mit der sich der verwendete Browser identifizieren lässt und die dieser bei jeder Anfrage an den Server übermittelt. Bei praktisch allen Browsern kann diese Kennung vom Benutzer verändert werden. Die weltweiten Marktanteile von Desktop-Browsern sehen laut Netmarketshare Stand Juli 2018 und September 2019 prozentual wie folgt aus: Bei Berücksichtigung auch der Browser in den inzwischen sehr häufig verwendeten mobilen Geräten wie Smartphone, Tablet etc. 
ergibt sich laut den veröffentlichten Statistiken von Netmarketshare, für September 2019 folgendes Bild: Chrome 66 %, Safari 19 %, Firefox 4 %, IE 2,6 %, Edge 2,5 %, QQ 1,4 % etc. Die Verbreitung laut Statistik von StatCounter der verschiedenen Browser auf mobilen Geräten ist hier abrufbar. Die Auswahl der Anzeige ist editierbar. > > Bitte beantworte die Frage mit einem kurzen, wörtlichen Zitat aus dem Kontext. Sollte der Kontext keine Antwort auf die Frage enthalten, bitte antworte dass die Frage mit den gegebenen Informationen nicht zu beantworten ist.

**Answer Original Llama 2 13b Chat**:
> Sure! Here's the answer to your question with a direct quote from the context: "According to Netmarketshare, in July 2018 and September 2019, the worldwide market share of desktop browsers looked like this: Chrome 66%, Safari 19%, Firefox 4%, IE 2.6%, Edge 2.5%, QQ 1.4% etc." So, according to Netmarketshare's statistics, in July 2018 and September 2019, Google Chrome had a market share of 66%.

**Answer Llama 2 13b Chat German**:
> Mit den gegebenen Informationen ist diese Frage nicht zu beantworten.

----------------------------
----------------------------

# Original **Llama 2** model card

Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 13B fine-tuned model, optimized for dialogue use cases and converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.

## Model Details

*Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.*

Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.

**Model Developers** Meta

**Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations.

**Input** Models input text only.

**Output** Models generate text only.

**Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.

||Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of publicly available online data*|7B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|13B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|70B|4k|&#10004;|2.0T|1.5 x 10<sup>-4</sup>|

*Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. Bigger models (70B) use Grouped-Query Attention (GQA) for improved inference scalability.

**Model Dates** Llama 2 was trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.

**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)

**Research Paper** ["Llama-2: Open Foundation and Fine-tuned Chat Models"](https://arxiv.org/abs/2307.09288)

## Intended Use

**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.

To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and line breaks in between (we recommend calling `strip()` on inputs to avoid double spaces). See our reference code on GitHub for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212).

**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.

## Hardware and Software

**Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.

**Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta's sustainability program.

||Time (GPU hours)|Power Consumption (W)|Carbon Emitted (tCO<sub>2</sub>eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
|Total|3311616||539.00|

**CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used, adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.

## Training Data

**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.

**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.

## Evaluation Results

In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
|---|---|---|---|---|---|---|---|---|---|
|Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
|Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
|Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
|Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
|Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
|Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
|Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|

**Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1.

|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama 1|7B|27.42|23.00|
|Llama 1|13B|41.74|23.08|
|Llama 1|33B|44.19|22.57|
|Llama 1|65B|48.71|21.77|
|Llama 2|7B|33.29|**21.25**|
|Llama 2|13B|41.86|26.10|
|Llama 2|70B|**50.18**|24.60|

**Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).

|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama-2-Chat|7B|57.04|**0.00**|
|Llama-2-Chat|13B|62.18|**0.00**|
|Llama-2-Chat|70B|**64.14**|0.01|

**Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.

## Ethical Considerations and Limitations

Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide/)

## Reporting Issues

Please report any software "bug", or other problems with the models, through one of the following means:
- Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
- Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)

## Llama Model Index

|Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf|
|---|---|---|---|---|
|7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)|
|13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat-hf)|
|70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat-hf)|
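As referenced in the Prompt Template section above, here is a minimal generation sketch for Llama-2-13b-chat-german using plain `transformers`. It is an illustrative addition rather than part of the original card: the system prompt, question, and generation settings are placeholder assumptions, and the tokenizer is left to add the leading `<s>` (BOS) token itself.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jphme/Llama-2-13b-chat-german"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Assemble the Llama 2 chat format by hand; the system prompt is a shortened
# placeholder, not the exact text used during finetuning. The tokenizer adds
# the leading <s> (BOS) token, so it is not written into the string here.
system_prompt = "Du bist ein hilfreicher Assistent. Bitte antworte auf Deutsch."
question = "Wie heißt die Hauptstadt von Nordrhein-Westfalen?"
prompt = f"[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n{question} [/INST]"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# max_new_tokens, temperature and do_sample are illustrative assumptions.
output_ids = model.generate(**inputs, max_new_tokens=256, temperature=0.7, do_sample=True)
answer = tokenizer.decode(
    output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(answer)
```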
hf-tiny-model-private/tiny-random-BioGptForCausalLM
hf-tiny-model-private
"2023-03-29T18:35:34Z"
39,627
0
transformers
[ "transformers", "pytorch", "biogpt", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2023-03-29T18:35:29Z"
Entry not found