Dataset columns (type and value range / cardinality):

| Column | Type | Range / cardinality |
| :--- | :--- | :--- |
| modelId | string | length 5 to 122 |
| author | string | length 2 to 42 |
| last_modified | unknown | n/a |
| downloads | int64 | 0 to 738M |
| likes | int64 | 0 to 11k |
| library_name | string (categorical) | 245 values |
| tags | sequence | length 1 to 4.05k |
| pipeline_tag | string (categorical) | 48 values |
| createdAt | unknown | n/a |
| card | string | length 1 to 901k |
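The records that follow use the schema above, one model-card row per entry. As a quick orientation, here is a minimal sketch of how a dump with this schema could be loaded and filtered with the `datasets` library; the dataset id used is a placeholder, not the actual repository name.

```python
# Minimal sketch: load a dump with the schema above and rank rows by downloads.
# The dataset id below is a placeholder for illustration only.
from datasets import load_dataset

ds = load_dataset("example-org/model-cards-dump", split="train")  # placeholder id

# Keep transformers models tagged for text generation, then sort by downloads.
subset = ds.filter(
    lambda row: row["library_name"] == "transformers"
    and row["pipeline_tag"] == "text-generation"
)
subset = subset.sort("downloads", reverse=True)

for row in subset.select(range(5)):
    print(row["modelId"], row["downloads"], row["likes"])
```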
Qwen/Qwen2-7B
Qwen
"2024-06-06T14:41:44Z"
57,379
81
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "pretrained", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-04T13:35:27Z"
--- language: - en pipeline_tag: text-generation tags: - pretrained license: apache-2.0 --- # Qwen2-7B ## Introduction Qwen2 is the new series of Qwen large language models. For Qwen2, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters, including a Mixture-of-Experts model. This repository contains the 7B Qwen2 base language model. Compared with state-of-the-art open-source language models, including the previously released Qwen1.5, Qwen2 generally surpasses most open-source models and demonstrates competitiveness against proprietary models across a series of benchmarks covering language understanding, language generation, multilingual capability, coding, mathematics, reasoning, and more. For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2/), [GitHub](https://github.com/QwenLM/Qwen2), and [Documentation](https://qwen.readthedocs.io/en/latest/). <br> ## Model Details Qwen2 is a series of decoder-only language models released in multiple sizes; for each size we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, grouped-query attention, and more. Additionally, it uses an improved tokenizer adapted to multiple natural languages and code. ## Requirements The Qwen2 code has been merged into the latest Hugging Face Transformers, and we advise you to install `transformers>=4.37.0`; otherwise you might encounter the following error: ``` KeyError: 'qwen2' ``` ## Usage We do not advise using base language models directly for text generation. Instead, apply post-training, e.g., SFT, RLHF, or continued pretraining, to this model. ### Performance The evaluation of base models focuses mainly on natural language understanding, general question answering, coding, mathematics, scientific knowledge, reasoning, multilingual capability, etc.
The datasets for evaluation include: **English Tasks**: MMLU (5-shot), MMLU-Pro (5-shot), GPQA (5shot), Theorem QA (5-shot), BBH (3-shot), HellaSwag (10-shot), Winogrande (5-shot), TruthfulQA (0-shot), ARC-C (25-shot) **Coding Tasks**: EvalPlus (0-shot) (HumanEval, MBPP, HumanEval+, MBPP+), MultiPL-E (0-shot) (Python, C++, JAVA, PHP, TypeScript, C#, Bash, JavaScript) **Math Tasks**: GSM8K (4-shot), MATH (4-shot) **Chinese Tasks**: C-Eval(5-shot), CMMLU (5-shot) **Multilingual Tasks**: Multi-Exam (M3Exam 5-shot, IndoMMLU 3-shot, ruMMLU 5-shot, mMMLU 5-shot), Multi-Understanding (BELEBELE 5-shot, XCOPA 5-shot, XWinograd 5-shot, XStoryCloze 0-shot, PAWS-X 5-shot), Multi-Mathematics (MGSM 8-shot), Multi-Translation (Flores-101 5-shot) #### Qwen2-7B performance | Datasets | Mistral-7B | Gemma-7B | Llama-3-8B | Qwen1.5-7B | Qwen2-7B | | :--------| :---------: | :------------: | :------------: | :------------: | :------------: | |# Params | 7.2B | 8.5B | 8.0B | 7.7B | 7.6B | |# Non-emb Params | 7.0B | 7.8B | 7.0B | 6.5B | 6.5B | | ***English*** | | | | | | |MMLU | 64.2 | 64.6 | 66.6 | 61.0 | **70.3** | |MMLU-Pro | 30.9 | 33.7 | 35.4 | 29.9 | **40.0** | |GPQA | 24.7 | 25.7 | 25.8 | 26.7 | **31.8** | |Theorem QA | 19.2 | 21.5 | 22.1 | 14.2 | **31.1** | |BBH | 56.1 | 55.1 | 57.7 | 40.2 | **62.6** | |HellaSwag | **83.2** | 82.2 | 82.1 | 78.5 | 80.7 | |Winogrande | 78.4 | **79.0** | 77.4 | 71.3 | 77.0 | |ARC-C | 60.0 | **61.1** | 59.3 | 54.2 | 60.6 | |TruthfulQA | 42.2 | 44.8 | 44.0 | 51.1 | **54.2** | | ***Coding*** | | | | | | |HumanEval | 29.3 | 37.2 | 33.5 | 36.0 | **51.2** | |MBPP | 51.1 | 50.6 | 53.9 | 51.6 | **65.9** | |EvalPlus | 36.4 | 39.6 | 40.3 | 40.0 | **54.2** | |MultiPL-E | 29.4 | 29.7 | 22.6 | 28.1 | **46.3** | | ***Mathematics*** | | | | | | |GSM8K | 52.2 | 46.4 | 56.0 | 62.5 | **79.9** | |MATH | 13.1 | 24.3 | 20.5 | 20.3 | **44.2** | | ***Chinese*** | | | | | | |C-Eval | 47.4 | 43.6 | 49.5 | 74.1 | **83.2** | |CMMLU | - | - | 50.8 | 73.1 | **83.9** | | ***Multilingual*** | | | | | | |Multi-Exam | 47.1 | 42.7 | 52.3 | 47.7 | **59.2** | |Multi-Understanding | 63.3 | 58.3 | 68.6 | 67.6 | **72.0** | |Multi-Mathematics | 26.3 | 39.1 | 36.3 | 37.3 | **57.5** | |Multi-Translation | 23.3 | 31.2 | **31.9** | 28.4 | 31.5 | ## Citation If you find our work helpful, feel free to give us a cite. ``` @article{qwen2, title={Qwen2 Technical Report}, year={2024} } ```
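As a minimal sketch of the requirement above (`transformers>=4.37.0`), the snippet below loads the base checkpoint and runs a short completion as a smoke test; the card recommends post-training (SFT, RLHF, continued pretraining) rather than using the base model for generation, and the bf16/`device_map` settings here are assumptions, not part of the card.

```python
# Minimal sketch: load Qwen/Qwen2-7B with transformers>=4.37.0 and run a short
# completion as a smoke test. dtype and device_map are assumptions; the card
# advises post-training rather than direct text generation with the base model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-7B")
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2-7B",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```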
duxprajapati/symptom-disease-model
duxprajapati
"2023-08-28T10:03:44Z"
57,269
0
transformers
[ "transformers", "pytorch", "safetensors", "distilbert", "text-classification", "en", "dataset:duxprajapati/symptom-disease-dataset", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2023-08-22T12:40:18Z"
--- datasets: - duxprajapati/symptom-disease-dataset language: - en pipeline_tag: text-classification ---
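The card itself gives no usage snippet; a minimal sketch with the Transformers `pipeline` API is shown below. The label names returned depend on the checkpoint's config, which is not documented in the card.

```python
# Minimal sketch: run the symptom-disease classifier through the pipeline API.
# Returned label ids/names come from the checkpoint's config and are not
# documented in the card, so interpret them against the linked dataset.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="duxprajapati/symptom-disease-model",
)
print(classifier("I have a persistent cough and a mild fever."))
```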
mradermacher/Smaug-Qwen2-72B-Instruct-GGUF
mradermacher
"2024-06-27T16:41:33Z"
57,233
0
transformers
[ "transformers", "gguf", "chat", "en", "base_model:abacusai/Smaug-Qwen2-72B-Instruct", "license:other", "endpoints_compatible", "region:us" ]
null
"2024-06-27T05:59:42Z"
--- base_model: abacusai/Smaug-Qwen2-72B-Instruct language: - en library_name: transformers license: other license_link: https://huggingface.co/Qwen/Qwen2-72B-Instruct/blob/main/LICENSE license_name: tongyi-qianwen quantized_by: mradermacher tags: - chat --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/abacusai/Smaug-Qwen2-72B-Instruct <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Smaug-Qwen2-72B-Instruct-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Smaug-Qwen2-72B-Instruct-GGUF/resolve/main/Smaug-Qwen2-72B-Instruct.Q2_K.gguf) | Q2_K | 29.9 | | | [GGUF](https://huggingface.co/mradermacher/Smaug-Qwen2-72B-Instruct-GGUF/resolve/main/Smaug-Qwen2-72B-Instruct.IQ3_XS.gguf) | IQ3_XS | 32.9 | | | [GGUF](https://huggingface.co/mradermacher/Smaug-Qwen2-72B-Instruct-GGUF/resolve/main/Smaug-Qwen2-72B-Instruct.IQ3_S.gguf) | IQ3_S | 34.6 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Smaug-Qwen2-72B-Instruct-GGUF/resolve/main/Smaug-Qwen2-72B-Instruct.Q3_K_S.gguf) | Q3_K_S | 34.6 | | | [GGUF](https://huggingface.co/mradermacher/Smaug-Qwen2-72B-Instruct-GGUF/resolve/main/Smaug-Qwen2-72B-Instruct.IQ3_M.gguf) | IQ3_M | 35.6 | | | [GGUF](https://huggingface.co/mradermacher/Smaug-Qwen2-72B-Instruct-GGUF/resolve/main/Smaug-Qwen2-72B-Instruct.Q3_K_M.gguf) | Q3_K_M | 37.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Smaug-Qwen2-72B-Instruct-GGUF/resolve/main/Smaug-Qwen2-72B-Instruct.Q3_K_L.gguf) | Q3_K_L | 39.6 | | | [GGUF](https://huggingface.co/mradermacher/Smaug-Qwen2-72B-Instruct-GGUF/resolve/main/Smaug-Qwen2-72B-Instruct.IQ4_XS.gguf) | IQ4_XS | 40.3 | | | [GGUF](https://huggingface.co/mradermacher/Smaug-Qwen2-72B-Instruct-GGUF/resolve/main/Smaug-Qwen2-72B-Instruct.Q4_K_S.gguf) | Q4_K_S | 44.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Smaug-Qwen2-72B-Instruct-GGUF/resolve/main/Smaug-Qwen2-72B-Instruct.Q4_K_M.gguf) | Q4_K_M | 47.5 | fast, recommended | | [PART 1](https://huggingface.co/mradermacher/Smaug-Qwen2-72B-Instruct-GGUF/resolve/main/Smaug-Qwen2-72B-Instruct.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Smaug-Qwen2-72B-Instruct-GGUF/resolve/main/Smaug-Qwen2-72B-Instruct.Q5_K_S.gguf.part2of2) | Q5_K_S | 51.5 | | | [PART 1](https://huggingface.co/mradermacher/Smaug-Qwen2-72B-Instruct-GGUF/resolve/main/Smaug-Qwen2-72B-Instruct.Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Smaug-Qwen2-72B-Instruct-GGUF/resolve/main/Smaug-Qwen2-72B-Instruct.Q5_K_M.gguf.part2of2) | Q5_K_M | 54.5 | | | [PART 1](https://huggingface.co/mradermacher/Smaug-Qwen2-72B-Instruct-GGUF/resolve/main/Smaug-Qwen2-72B-Instruct.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Smaug-Qwen2-72B-Instruct-GGUF/resolve/main/Smaug-Qwen2-72B-Instruct.Q6_K.gguf.part2of2) | Q6_K | 64.4 | very good quality | | [PART 
1](https://huggingface.co/mradermacher/Smaug-Qwen2-72B-Instruct-GGUF/resolve/main/Smaug-Qwen2-72B-Instruct.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Smaug-Qwen2-72B-Instruct-GGUF/resolve/main/Smaug-Qwen2-72B-Instruct.Q8_0.gguf.part2of2) | Q8_0 | 77.4 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
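The card defers GGUF usage to TheBloke's READMEs; as one possible route (an assumption, not part of the card), the sketch below downloads a single-file quant from the table above and loads it with llama-cpp-python. Multi-part quants (Q5_K_S and larger) must be concatenated into one file first, and even the Q4_K_S quant of a 72B model needs roughly 44 GB of memory.

```python
# Minimal sketch (assumption: llama-cpp-python as the GGUF runtime).
# Downloads the single-file Q4_K_S quant listed above and runs a raw completion;
# an instruct model would normally also get its chat template applied.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/Smaug-Qwen2-72B-Instruct-GGUF",
    filename="Smaug-Qwen2-72B-Instruct.Q4_K_S.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=4096)  # ~44 GB of RAM/VRAM required
out = llm("Write one sentence about the sea.", max_tokens=64)
print(out["choices"][0]["text"])
```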
Qwen/Qwen1.5-1.8B
Qwen
"2024-04-05T10:39:41Z"
57,187
42
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "pretrained", "conversational", "en", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-22T16:53:32Z"
--- license: other license_name: tongyi-qianwen-research license_link: >- https://huggingface.co/Qwen/Qwen1.5-1.8B/blob/main/LICENSE language: - en pipeline_tag: text-generation tags: - pretrained --- # Qwen1.5-1.8B ## Introduction Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. In comparison with the previously released Qwen, the improvements include: * 8 model sizes, including 0.5B, 1.8B, 4B, 7B, 14B, 32B and 72B dense models, and an MoE model of 14B with 2.7B activated parameters; * Significant performance improvement in chat models; * Multilingual support for both base and chat models; * Stable support for 32K context length in models of all sizes; * No need for `trust_remote_code`. For more details, please refer to our [blog post](https://qwenlm.github.io/blog/qwen1.5/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5). ## Model Details Qwen1.5 is a series of decoder-only language models released in multiple sizes; for each size we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, grouped-query attention, a mixture of sliding-window attention and full attention, and more. Additionally, it uses an improved tokenizer adapted to multiple natural languages and code. For this beta version, we have temporarily not included GQA (except for the 32B model) or the mixture of SWA and full attention. ## Requirements The Qwen1.5 code has been merged into the latest Hugging Face Transformers, and we advise you to install `transformers>=4.37.0`; otherwise you might encounter the following error: ``` KeyError: 'qwen2' ``` ## Usage We do not advise using base language models directly for text generation. Instead, apply post-training, e.g., SFT, RLHF, or continued pretraining, to this model. ## Citation If you find our work helpful, feel free to cite us. ``` @article{qwen, title={Qwen Technical Report}, author={Jinze Bai and Shuai Bai and Yunfei Chu and Zeyu Cui and Kai Dang and Xiaodong Deng and Yang Fan and Wenbin Ge and Yu Han and Fei Huang and Binyuan Hui and Luo Ji and Mei Li and Junyang Lin and Runji Lin and Dayiheng Liu and Gao Liu and Chengqiang Lu and Keming Lu and Jianxin Ma and Rui Men and Xingzhang Ren and Xuancheng Ren and Chuanqi Tan and Sinan Tan and Jianhong Tu and Peng Wang and Shijie Wang and Wei Wang and Shengguang Wu and Benfeng Xu and Jin Xu and An Yang and Hao Yang and Jian Yang and Shusheng Yang and Yang Yao and Bowen Yu and Hongyi Yuan and Zheng Yuan and Jianwei Zhang and Xingxuan Zhang and Yichang Zhang and Zhenru Zhang and Chang Zhou and Jingren Zhou and Xiaohuan Zhou and Tianhang Zhu}, journal={arXiv preprint arXiv:2309.16609}, year={2023} } ```
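Since the card points to post-training rather than direct generation, the sketch below shows a single SFT-style forward pass (loss computation only) on the base model, assuming `transformers>=4.37.0` as required above; it is not a full training loop.

```python
# Minimal sketch: one supervised fine-tuning style forward pass on the base
# model (loss only, no optimizer step). Requires transformers>=4.37.0.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-1.8B")
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen1.5-1.8B", torch_dtype=torch.bfloat16
)

batch = tokenizer("Question: What is 2 + 2? Answer: 4", return_tensors="pt")
# For causal LMs, passing labels equal to input_ids yields the next-token loss.
outputs = model(**batch, labels=batch["input_ids"])
print(float(outputs.loss))
```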
upstage/SOLAR-10.7B-Instruct-v1.0
upstage
"2024-04-16T09:46:14Z"
57,175
594
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "en", "dataset:c-s-ale/alpaca-gpt4-data", "dataset:Open-Orca/OpenOrca", "dataset:Intel/orca_dpo_pairs", "dataset:allenai/ultrafeedback_binarized_cleaned", "arxiv:2312.15166", "arxiv:2403.19270", "base_model:upstage/SOLAR-10.7B-v1.0", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-12T12:39:22Z"
--- datasets: - c-s-ale/alpaca-gpt4-data - Open-Orca/OpenOrca - Intel/orca_dpo_pairs - allenai/ultrafeedback_binarized_cleaned language: - en license: cc-by-nc-4.0 base_model: - upstage/SOLAR-10.7B-v1.0 --- <p align="left"> <a href="https://go.upstage.ai/solar-obt-hf-modelcardv1-instruct"> <img src="https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0/resolve/main/solar-api-banner.png" width="100%"/> </a> <p> # **Meet 10.7B Solar: Elevating Performance with Upstage Depth UP Scaling!** **(This model is [upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0) fine-tuned version for single-turn conversation.)** # **Introduction** We introduce SOLAR-10.7B, an advanced large language model (LLM) with 10.7 billion parameters, demonstrating superior performance in various natural language processing (NLP) tasks. It's compact, yet remarkably powerful, and demonstrates unparalleled state-of-the-art performance in models with parameters under 30B. We present a methodology for scaling LLMs called depth up-scaling (DUS) , which encompasses architectural modifications and continued pretraining. In other words, we integrated Mistral 7B weights into the upscaled layers, and finally, continued pre-training for the entire model. SOLAR-10.7B has remarkable performance. It outperforms models with up to 30B parameters, even surpassing the recent Mixtral 8X7B model. For detailed information, please refer to the experimental table. Solar 10.7B is an ideal choice for fine-tuning. SOLAR-10.7B offers robustness and adaptability for your fine-tuning needs. Our simple instruction fine-tuning using the SOLAR-10.7B pre-trained model yields significant performance improvements. For full details of this model please read our [paper](https://arxiv.org/abs/2312.15166). # **Instruction Fine-Tuning Strategy** We utilize state-of-the-art instruction fine-tuning methods including supervised fine-tuning (SFT) and direct preference optimization (DPO) [1]. We used a mixture of the following datasets - c-s-ale/alpaca-gpt4-data (SFT) - Open-Orca/OpenOrca (SFT) - in-house generated data utilizing Metamath [2] (SFT, DPO) - Intel/orca_dpo_pairs (DPO) - allenai/ultrafeedback_binarized_cleaned (DPO) where we were careful of data contamination by not using GSM8K samples when generating data and filtering tasks when applicable via the following list. ```python filtering_task_list = [ 'task228_arc_answer_generation_easy', 'ai2_arc/ARC-Challenge:1.0.0', 'ai2_arc/ARC-Easy:1.0.0', 'task229_arc_answer_generation_hard', 'hellaswag:1.1.0', 'task1389_hellaswag_completion', 'cot_gsm8k', 'cot_gsm8k_ii', 'drop:2.0.0', 'winogrande:1.1.0' ] ``` Using the datasets mentioned above, we applied SFT and iterative DPO training, a proprietary alignment strategy, to maximize the performance of our resulting model. [1] Rafailov, R., Sharma, A., Mitchell, E., Ermon, S., Manning, C.D. and Finn, C., 2023. Direct preference optimization: Your language model is secretly a reward model. NeurIPS. [2] Yu, L., Jiang, W., Shi, H., Yu, J., Liu, Z., Zhang, Y., Kwok, J.T., Li, Z., Weller, A. and Liu, W., 2023. Metamath: Bootstrap your own mathematical questions for large language models. arXiv preprint arXiv:2309.12284. # **Data Contamination Test Results** Recently, there have been contamination issues in some models on the LLM leaderboard. We note that we made every effort to exclude any benchmark-related datasets from training. 
We also ensured the integrity of our model by conducting a data contamination test [3] that is also used by the HuggingFace team [4, 5]. Our results, with `result < 0.1, %:` being well below 0.9, indicate that our model is free from contamination. *The data contamination test results of HellaSwag and Winograde will be added once [3] supports them.* | Model | ARC | MMLU | TruthfulQA | GSM8K | |------------------------------|-------|-------|-------|-------| | **SOLAR-10.7B-Instruct-v1.0**| result < 0.1, %: 0.06 |result < 0.1, %: 0.15 | result < 0.1, %: 0.28 | result < 0.1, %: 0.70 | [3] https://github.com/swj0419/detect-pretrain-code-contamination [4] https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/474#657f2245365456e362412a06 [5] https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/265#657b6debf81f6b44b8966230 # **Evaluation Results** | Model | H6 | Model Size | |----------------------------------------|-------|------------| | **SOLAR-10.7B-Instruct-v1.0** | **74.20** | **~ 11B** | | mistralai/Mixtral-8x7B-Instruct-v0.1 | 72.62 | ~ 46.7B | | 01-ai/Yi-34B-200K | 70.81 | ~ 34B | | 01-ai/Yi-34B | 69.42 | ~ 34B | | mistralai/Mixtral-8x7B-v0.1 | 68.42 | ~ 46.7B | | meta-llama/Llama-2-70b-hf | 67.87 | ~ 70B | | tiiuae/falcon-180B | 67.85 | ~ 180B | | **SOLAR-10.7B-v1.0** | **66.04** | **~11B** | | mistralai/Mistral-7B-Instruct-v0.2 | 65.71 | ~ 7B | | Qwen/Qwen-14B | 65.86 | ~ 14B | | 01-ai/Yi-34B-Chat | 65.32 | ~34B | | meta-llama/Llama-2-70b-chat-hf | 62.4 | ~ 70B | | mistralai/Mistral-7B-v0.1 | 60.97 | ~ 7B | | mistralai/Mistral-7B-Instruct-v0.1 | 54.96 | ~ 7B | # **Usage Instructions** This model has been fine-tuned primarily for single-turn conversation, making it less suitable for multi-turn conversations such as chat. ### **Version** Make sure you have the correct version of the transformers library installed: ```sh pip install transformers==4.35.2 ``` ### **Loading the Model** Use the following Python code to load the model: ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("Upstage/SOLAR-10.7B-Instruct-v1.0") model = AutoModelForCausalLM.from_pretrained( "Upstage/SOLAR-10.7B-Instruct-v1.0", device_map="auto", torch_dtype=torch.float16, ) ``` ### **Conducting Single-Turn Conversation** ```python conversation = [ {'role': 'user', 'content': 'Hello?'} ] prompt = tokenizer.apply_chat_template(conversation, tokenize=False, add_generation_prompt=True) inputs = tokenizer(prompt, return_tensors="pt").to(model.device) outputs = model.generate(**inputs, use_cache=True, max_length=4096) output_text = tokenizer.decode(outputs[0]) print(output_text) ``` Below is an example of the output. ``` <s> ### User: Hello? ### Assistant: Hello, how can I assist you today? Please feel free to ask any questions or request help with a specific task.</s> ``` ### **License** - [upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0): apache-2.0 - [upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0): cc-by-nc-4.0 - Since some non-commercial datasets such as Alpaca are used for fine-tuning, we release this model as cc-by-nc-4.0. ### **How to Cite** Please cite the following papers using the below format when using this model. 
```bibtex @misc{kim2023solar, title={SOLAR 10.7B: Scaling Large Language Models with Simple yet Effective Depth Up-Scaling}, author={Dahyun Kim and Chanjun Park and Sanghoon Kim and Wonsung Lee and Wonho Song and Yunsu Kim and Hyeonwoo Kim and Yungi Kim and Hyeonju Lee and Jihoo Kim and Changbae Ahn and Seonghoon Yang and Sukyung Lee and Hyunbyung Park and Gyoungjin Gim and Mikyoung Cha and Hwalsuk Lee and Sunghun Kim}, year={2023}, eprint={2312.15166}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ```bibtext @misc{kim2024sdpo, title={sDPO: Don't Use Your Data All at Once}, author={Dahyun Kim and Yungi Kim and Wonho Song and Hyeonwoo Kim and Yunsu Kim and Sanghoon Kim and Chanjun Park}, year={2024}, eprint={2403.19270}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### **The Upstage AI Team** ### Upstage is creating the best LLM and DocAI. Please find more information at https://upstage.ai ### **Contact Us** ### Any questions and suggestions, please use the discussion tab. If you want to contact us directly, drop an email to [contact@upstage.ai](mailto:contact@upstage.ai)
luminar9/bert-finetuned-368items
luminar9
"2024-05-11T07:06:05Z"
57,173
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-05-11T07:02:44Z"
Entry not found
lpiccinelli/unidepth-v2-vitl14
lpiccinelli
"2024-06-12T12:46:03Z"
57,067
0
UniDepth
[ "UniDepth", "pytorch", "safetensors", "monocular-metric-depth-estimation", "pytorch_model_hub_mixin", "model_hub_mixin", "region:us" ]
null
"2024-06-12T12:39:28Z"
--- library_name: UniDepth tags: - monocular-metric-depth-estimation - pytorch_model_hub_mixin - model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Library: https://github.com/lpiccinelli-eth/UniDepth - Docs: [More Information Needed]
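The card only notes the PyTorchModelHubMixin integration; a minimal loading sketch follows. The import path, class name (`UniDepthV2`), and `infer()` call are assumptions taken from the linked UniDepth repository rather than from this card, so check that repository for the exact API.

```python
# Minimal sketch: load the checkpoint via the PyTorchModelHubMixin integration.
# The import path, class name, and infer() call are assumptions based on the
# linked UniDepth repository, not on this card.
import numpy as np
import torch
from unidepth.models import UniDepthV2  # assumption: provided by the UniDepth package

model = UniDepthV2.from_pretrained("lpiccinelli/unidepth-v2-vitl14").eval()

# A dummy uint8 RGB image (C, H, W); replace with a real image tensor.
rgb = torch.from_numpy(np.random.randint(0, 255, (3, 480, 640), dtype=np.uint8))
predictions = model.infer(rgb)  # assumption: infer() returns a dict with "depth"
print(predictions["depth"].shape)
```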
Dremmar/nsfw-xl
Dremmar
"2024-01-07T11:19:41Z"
57,053
48
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "region:us" ]
text-to-image
"2024-01-07T11:18:33Z"
--- tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora widget: - text: "analog film photo woman, breasts, heatshot, facing viewer <lora:nsfw-xl-2.0:1> . faded film, desaturated, 35mm photo, grainy, vignette, vintage, Kodachrome, Lomography, stained, highly detailed, found footage" output: url: images/00097-3192725504.jpeg base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: null --- # nsfw-xl <Gallery /> ## Model description A copy of https://civitai.com/models/141300/nsfw-xl ## Download model Weights for this model are available in Safetensors format. [Download](/Dremmar/nsfw-xl/tree/main) them in the Files & versions tab.
internlm/internlm2-7b
internlm
"2024-07-02T12:26:11Z"
57,041
36
transformers
[ "transformers", "pytorch", "internlm2", "text-generation", "custom_code", "arxiv:2403.17297", "license:other", "autotrain_compatible", "region:us" ]
text-generation
"2024-01-12T06:18:18Z"
--- pipeline_tag: text-generation license: other --- # InternLM <div align="center"> <img src="https://github.com/InternLM/InternLM/assets/22529082/b9788105-8892-4398-8b47-b513a292378e" width="200"/> <div>&nbsp;</div> <div align="center"> <b><font size="5">InternLM</font></b> <sup> <a href="https://internlm.intern-ai.org.cn/"> <i><font size="4">HOT</font></i> </a> </sup> <div>&nbsp;</div> </div> [![evaluation](https://github.com/InternLM/InternLM/assets/22529082/f80a2a58-5ddf-471a-8da4-32ab65c8fd3b)](https://github.com/internLM/OpenCompass/) [💻Github Repo](https://github.com/InternLM/InternLM) • [🤔Reporting Issues](https://github.com/InternLM/InternLM/issues/new) • [📜Technical Report](https://arxiv.org/abs/2403.17297) </div> ## Introduction The second generation of the InternLM model, InternLM2, includes models at two scales: 7B and 20B. For the convenience of users and researchers, we have open-sourced four versions of each scale of the model, which are: - internlm2-base: A high-quality and highly adaptable model base, serving as an excellent starting point for deep domain adaptation. - internlm2 (**recommended**): Built upon the internlm2-base, this version has further pretrained on domain-specific corpus. It shows outstanding performance in evaluations while maintaining robust general language abilities, making it our recommended choice for most applications. - internlm2-chat-sft: Based on the Base model, it undergoes supervised human alignment training. - internlm2-chat (**recommended**): Optimized for conversational interaction on top of the internlm2-chat-sft through RLHF, it excels in instruction adherence, empathetic chatting, and tool invocation. The base model of InternLM2 has the following technical features: - Effective support for ultra-long contexts of up to 200,000 characters: The model nearly perfectly achieves "finding a needle in a haystack" in long inputs of 200,000 characters. It also leads among open-source models in performance on long-text tasks such as LongBench and L-Eval. - Comprehensive performance enhancement: Compared to the previous generation model, it shows significant improvements in various capabilities, including reasoning, mathematics, and coding. ## InternLM2-7B ### Performance Evaluation We have evaluated InternLM2 on several important benchmarks using the open-source evaluation tool [OpenCompass](https://github.com/open-compass/opencompass). Some of the evaluation results are shown in the table below. You are welcome to visit the [OpenCompass Leaderboard](https://rank.opencompass.org.cn/leaderboard-llm) for more evaluation results. | Dataset\Models | InternLM2-7B | InternLM2-Chat-7B | InternLM2-20B | InternLM2-Chat-20B | ChatGPT | GPT-4 | | --- | --- | --- | --- | --- | --- | --- | | MMLU | 65.8 | 63.7 | 67.7 | 66.5 | 69.1 | 83.0 | | AGIEval | 49.9 | 47.2 | 53.0 | 50.3 | 39.9 | 55.1 | | BBH | 65.0 | 61.2 | 72.1 | 68.3 | 70.1 | 86.7 | | GSM8K | 70.8 | 70.7 | 76.1 | 79.6 | 78.2 | 91.4 | | MATH | 20.2 | 23.0 | 25.5 | 31.9 | 28.0 | 45.8 | | HumanEval | 43.3 | 59.8 | 48.8 | 67.1 | 73.2 | 74.4 | | MBPP(Sanitized) | 51.8 | 51.4 | 63.0 | 65.8 | 78.9 | 79.0 | - The evaluation results were obtained from [OpenCompass](https://github.com/open-compass/opencompass) , and evaluation configuration can be found in the configuration files provided by [OpenCompass](https://github.com/open-compass/opencompass). 
- The evaluation data may have numerical differences due to the version iteration of [OpenCompass](https://github.com/open-compass/opencompass), so please refer to the latest evaluation results of [OpenCompass](https://github.com/open-compass/opencompass). **Limitations:** Although we have made efforts to ensure the safety of the model during the training process and to encourage the model to generate text that complies with ethical and legal requirements, the model may still produce unexpected outputs due to its size and probabilistic generation paradigm. For example, the generated responses may contain biases, discrimination, or other harmful content. Please do not propagate such content. We are not responsible for any consequences resulting from the dissemination of harmful information. ### Import from Transformers To load the InternLM2-7B model using Transformers, use the following code: ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("internlm/internlm2-7b", trust_remote_code=True) # Set `torch_dtype=torch.float16` to load model in float16, otherwise it will be loaded as float32 and might cause OOM Error. model = AutoModelForCausalLM.from_pretrained("internlm/internlm2-7b", torch_dtype=torch.float16, trust_remote_code=True).cuda() model = model.eval() inputs = tokenizer(["A beautiful flower"], return_tensors="pt") for k,v in inputs.items(): inputs[k] = v.cuda() gen_kwargs = {"max_length": 128, "top_p": 0.8, "temperature": 0.8, "do_sample": True, "repetition_penalty": 1.0} output = model.generate(**inputs, **gen_kwargs) output = tokenizer.decode(output[0].tolist(), skip_special_tokens=True) print(output) # A beautiful flowering shrub with clusters of pinkish white flowers in the summer. The foliage is glossy green with a hint of bronze. A great plant for small gardens or as a pot plant. Can be grown as a hedge or as a single specimen plant. ``` ## Open Source License The code is licensed under Apache-2.0, while model weights are fully open for academic research and also allow **free** commercial usage. To apply for a commercial license, please fill in the [application form (English)](https://wj.qq.com/s2/12727483/5dba/)/[申请表(中文)](https://wj.qq.com/s2/12725412/f7c1/). For other questions or collaborations, please contact <internlm@pjlab.org.cn>. 
## Citation ``` @misc{cai2024internlm2, title={InternLM2 Technical Report}, author={Zheng Cai and Maosong Cao and Haojiong Chen and Kai Chen and Keyu Chen and Xin Chen and Xun Chen and Zehui Chen and Zhi Chen and Pei Chu and Xiaoyi Dong and Haodong Duan and Qi Fan and Zhaoye Fei and Yang Gao and Jiaye Ge and Chenya Gu and Yuzhe Gu and Tao Gui and Aijia Guo and Qipeng Guo and Conghui He and Yingfan Hu and Ting Huang and Tao Jiang and Penglong Jiao and Zhenjiang Jin and Zhikai Lei and Jiaxing Li and Jingwen Li and Linyang Li and Shuaibin Li and Wei Li and Yining Li and Hongwei Liu and Jiangning Liu and Jiawei Hong and Kaiwen Liu and Kuikun Liu and Xiaoran Liu and Chengqi Lv and Haijun Lv and Kai Lv and Li Ma and Runyuan Ma and Zerun Ma and Wenchang Ning and Linke Ouyang and Jiantao Qiu and Yuan Qu and Fukai Shang and Yunfan Shao and Demin Song and Zifan Song and Zhihao Sui and Peng Sun and Yu Sun and Huanze Tang and Bin Wang and Guoteng Wang and Jiaqi Wang and Jiayu Wang and Rui Wang and Yudong Wang and Ziyi Wang and Xingjian Wei and Qizhen Weng and Fan Wu and Yingtong Xiong and Chao Xu and Ruiliang Xu and Hang Yan and Yirong Yan and Xiaogui Yang and Haochen Ye and Huaiyuan Ying and Jia Yu and Jing Yu and Yuhang Zang and Chuyu Zhang and Li Zhang and Pan Zhang and Peng Zhang and Ruijie Zhang and Shuo Zhang and Songyang Zhang and Wenjian Zhang and Wenwei Zhang and Xingcheng Zhang and Xinyue Zhang and Hui Zhao and Qian Zhao and Xiaomeng Zhao and Fengzhe Zhou and Zaida Zhou and Jingming Zhuo and Yicheng Zou and Xipeng Qiu and Yu Qiao and Dahua Lin}, year={2024}, eprint={2403.17297}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ## 简介 第二代浦语模型, InternLM2 包含 7B 和 20B 两个量级的模型。为了方便用户使用和研究,每个量级的模型我们总共开源了四个版本的模型,他们分别是 - internlm2-base: 高质量和具有很强可塑性的模型基座,是模型进行深度领域适配的高质量起点; - internlm2(**推荐**): 在internlm2-base基础上,进一步在特定领域的语料上进行预训练,在评测中成绩优异,同时保持了很好的通用语言能力,是我们推荐的在大部分应用中考虑选用的优秀基座; - internlm2-chat-sft:在Base基础上,进行有监督的人类对齐训练; - internlm2-chat(**推荐**):在internlm2-chat-sft基础上,经过RLHF,面向对话交互进行了优化,具有很好的指令遵循、共情聊天和调用工具等的能力。 InternLM2 的基础模型具备以下的技术特点 - 有效支持20万字超长上下文:模型在20万字长输入中几乎完美地实现长文“大海捞针”,而且在 LongBench 和 L-Eval 等长文任务中的表现也达到开源模型中的领先水平。 - 综合性能全面提升:各能力维度相比上一代模型全面进步,在推理、数学、代码等方面的能力提升显著。 ## InternLM2-7B ### 性能评测 我们使用开源评测工具 [OpenCompass](https://github.com/internLM/OpenCompass/) 对 InternLM2 在几个重要的评测集进行了评测 ,部分评测结果如下表所示,欢迎访问[ OpenCompass 榜单 ](https://rank.opencompass.org.cn/leaderboard-llm)获取更多的评测结果。 | 评测集 | InternLM2-7B | InternLM2-Chat-7B | InternLM2-20B | InternLM2-Chat-20B | ChatGPT | GPT-4 | | --- | --- | --- | --- | --- | --- | --- | | MMLU | 65.8 | 63.7 | 67.7 | 66.5 | 69.1 | 83.0 | | AGIEval | 49.9 | 47.2 | 53.0 | 50.3 | 39.9 | 55.1 | | BBH | 65.0 | 61.2 | 72.1 | 68.3 | 70.1 | 86.7 | | GSM8K | 70.8 | 70.7 | 76.1 | 79.6 | 78.2 | 91.4 | | MATH | 20.2 | 23.0 | 25.5 | 31.9 | 28.0 | 45.8 | | HumanEval | 43.3 | 59.8 | 48.8 | 67.1 | 73.2 | 74.4 | | MBPP(Sanitized) | 51.8 | 51.4 | 63.0 | 65.8 | 78.9 | 79.0 | - 以上评测结果基于 [OpenCompass](https://github.com/open-compass/opencompass) 获得(部分数据标注`*`代表数据来自原始论文),具体测试细节可参见 [OpenCompass](https://github.com/open-compass/opencompass) 中提供的配置文件。 - 评测数据会因 [OpenCompass](https://github.com/open-compass/opencompass) 的版本迭代而存在数值差异,请以 [OpenCompass](https://github.com/open-compass/opencompass) 最新版的评测结果为主。 **局限性:** 尽管在训练过程中我们非常注重模型的安全性,尽力促使模型输出符合伦理和法律要求的文本,但受限于模型大小以及概率生成范式,模型可能会产生各种不符合预期的输出,例如回复内容包含偏见、歧视等有害内容,请勿传播这些内容。由于传播不良信息导致的任何后果,本项目不承担责任。 ### 通过 Transformers 加载 通过以下的代码加载 InternLM2-7B 模型进行文本续写 ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = 
AutoTokenizer.from_pretrained("internlm/internlm2-7b", trust_remote_code=True) # `torch_dtype=torch.float16` 可以令模型以 float16 精度加载,否则 transformers 会将模型加载为 float32,有可能导致显存不足 model = AutoModelForCausalLM.from_pretrained("internlm/internlm2-7b", torch_dtype=torch.float16, trust_remote_code=True).cuda() model = model.eval() inputs = tokenizer(["来到美丽的大自然"], return_tensors="pt") for k,v in inputs.items(): inputs[k] = v.cuda() gen_kwargs = {"max_length": 128, "top_p": 0.8, "temperature": 0.8, "do_sample": True, "repetition_penalty": 1.0} output = model.generate(**inputs, **gen_kwargs) output = tokenizer.decode(output[0].tolist(), skip_special_tokens=True) print(output) # 来到美丽的大自然 # 走进那迷人的花园 # 鸟儿在枝头歌唱 # 花儿在微风中翩翩起舞 # 我们坐在草地上 # 仰望蔚蓝的天空 # 白云像棉花糖一样柔软 # 阳光温暖着我们的脸庞 # 大自然的美景 # 让我们感到无比的幸福 # 让我们心旷神怡 # 让我们感到无比的快乐 # 让我们陶醉其中 # 让我们流连忘返 # 让我们忘记所有的烦恼 # 让我们尽情享受这美好的时光 # 让我们珍惜这美好的瞬间 # 让我们感恩大自然 # 让我们与大自然和谐共处 # 让我们共同保护这美丽的家园 # 让我们永远保持一颗纯真的心灵 ``` ## 开源许可证 本仓库的代码依照 Apache-2.0 协议开源。模型权重对学术研究完全开放,也可申请免费的商业使用授权([申请表](https://wj.qq.com/s2/12725412/f7c1/))。其他问题与合作请联系 <internlm@pjlab.org.cn>。 ## 引用 ``` @misc{cai2024internlm2, title={InternLM2 Technical Report}, author={Zheng Cai and Maosong Cao and Haojiong Chen and Kai Chen and Keyu Chen and Xin Chen and Xun Chen and Zehui Chen and Zhi Chen and Pei Chu and Xiaoyi Dong and Haodong Duan and Qi Fan and Zhaoye Fei and Yang Gao and Jiaye Ge and Chenya Gu and Yuzhe Gu and Tao Gui and Aijia Guo and Qipeng Guo and Conghui He and Yingfan Hu and Ting Huang and Tao Jiang and Penglong Jiao and Zhenjiang Jin and Zhikai Lei and Jiaxing Li and Jingwen Li and Linyang Li and Shuaibin Li and Wei Li and Yining Li and Hongwei Liu and Jiangning Liu and Jiawei Hong and Kaiwen Liu and Kuikun Liu and Xiaoran Liu and Chengqi Lv and Haijun Lv and Kai Lv and Li Ma and Runyuan Ma and Zerun Ma and Wenchang Ning and Linke Ouyang and Jiantao Qiu and Yuan Qu and Fukai Shang and Yunfan Shao and Demin Song and Zifan Song and Zhihao Sui and Peng Sun and Yu Sun and Huanze Tang and Bin Wang and Guoteng Wang and Jiaqi Wang and Jiayu Wang and Rui Wang and Yudong Wang and Ziyi Wang and Xingjian Wei and Qizhen Weng and Fan Wu and Yingtong Xiong and Chao Xu and Ruiliang Xu and Hang Yan and Yirong Yan and Xiaogui Yang and Haochen Ye and Huaiyuan Ying and Jia Yu and Jing Yu and Yuhang Zang and Chuyu Zhang and Li Zhang and Pan Zhang and Peng Zhang and Ruijie Zhang and Shuo Zhang and Songyang Zhang and Wenjian Zhang and Wenwei Zhang and Xingcheng Zhang and Xinyue Zhang and Hui Zhao and Qian Zhao and Xiaomeng Zhao and Fengzhe Zhou and Zaida Zhou and Jingming Zhuo and Yicheng Zou and Xipeng Qiu and Yu Qiao and Dahua Lin}, year={2024}, eprint={2403.17297}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
timm/convnext_xxlarge.clip_laion2b_soup_ft_in1k
timm
"2024-02-10T23:38:09Z"
56,947
2
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "dataset:laion-2b", "arxiv:2210.08402", "arxiv:2201.03545", "arxiv:2103.00020", "license:apache-2.0", "region:us" ]
image-classification
"2023-03-31T22:51:41Z"
--- license: apache-2.0 library_name: timm tags: - image-classification - timm datasets: - imagenet-1k - laion-2b --- # Model card for convnext_xxlarge.clip_laion2b_soup_ft_in1k A ConvNeXt image classification model. CLIP image tower weights pretrained in [OpenCLIP](https://github.com/mlfoundations/open_clip) on LAION and fine-tuned on ImageNet-1k in `timm` by Ross Wightman. Please see related OpenCLIP model cards for more details on pretrain: * https://huggingface.co/laion/CLIP-convnext_xxlarge-laion2B-s34B-b82K-augreg-soup * https://huggingface.co/laion/CLIP-convnext_large_d.laion2B-s26B-b102K-augreg * https://huggingface.co/laion/CLIP-convnext_base_w-laion2B-s13B-b82K-augreg * https://huggingface.co/laion/CLIP-convnext_base_w_320-laion_aesthetic-s13B-b82K ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 846.5 - GMACs: 198.1 - Activations (M): 124.5 - Image size: 256 x 256 - **Papers:** - LAION-5B: An open large-scale dataset for training next generation image-text models: https://arxiv.org/abs/2210.08402 - A ConvNet for the 2020s: https://arxiv.org/abs/2201.03545 - Learning Transferable Visual Models From Natural Language Supervision: https://arxiv.org/abs/2103.00020 - **Original:** https://github.com/mlfoundations/open_clip - **Pretrain Dataset:** LAION-2B - **Dataset:** ImageNet-1k ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('convnext_xxlarge.clip_laion2b_soup_ft_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'convnext_xxlarge.clip_laion2b_soup_ft_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 384, 64, 64]) # torch.Size([1, 768, 32, 32]) # torch.Size([1, 1536, 16, 16]) # torch.Size([1, 3072, 8, 8]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'convnext_xxlarge.clip_laion2b_soup_ft_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = 
model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 3072, 8, 8) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). All timing numbers from eager model PyTorch 1.13 on RTX 3090 w/ AMP. | model |top1 |top5 |img_size|param_count|gmacs |macts |samples_per_sec|batch_size| |------------------------------------------------------------------------------------------------------------------------------|------|------|--------|-----------|------|------|---------------|----------| | [convnextv2_huge.fcmae_ft_in22k_in1k_512](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_512) |88.848|98.742|512 |660.29 |600.81|413.07|28.58 |48 | | [convnextv2_huge.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_384) |88.668|98.738|384 |660.29 |337.96|232.35|50.56 |64 | | [convnext_xxlarge.clip_laion2b_soup_ft_in1k](https://huggingface.co/timm/convnext_xxlarge.clip_laion2b_soup_ft_in1k) |88.612|98.704|256 |846.47 |198.09|124.45|122.45 |256 | | [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384) |88.312|98.578|384 |200.13 |101.11|126.74|196.84 |256 | | [convnextv2_large.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k_384) |88.196|98.532|384 |197.96 |101.1 |126.74|128.94 |128 | | [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320) |87.968|98.47 |320 |200.13 |70.21 |88.02 |283.42 |256 | | [convnext_xlarge.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k_384) |87.75 |98.556|384 |350.2 |179.2 |168.99|124.85 |192 | | [convnextv2_base.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k_384) |87.646|98.422|384 |88.72 |45.21 |84.49 |209.51 |256 | | [convnext_large.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k_384) |87.476|98.382|384 |197.77 |101.1 |126.74|194.66 |256 | | [convnext_large_mlp.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_augreg_ft_in1k) |87.344|98.218|256 |200.13 |44.94 |56.33 |438.08 |256 | | [convnextv2_large.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k) |87.26 |98.248|224 |197.96 |34.4 |43.13 |376.84 |256 | | [convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384) |87.138|98.212|384 |88.59 |45.21 |84.49 |365.47 |256 | | [convnext_xlarge.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k) |87.002|98.208|224 |350.2 |60.98 |57.5 |368.01 |256 | | [convnext_base.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k_384) |86.796|98.264|384 |88.59 |45.21 |84.49 |366.54 |256 | | [convnextv2_base.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k) |86.74 |98.022|224 |88.72 |15.38 |28.75 |624.23 |256 | | 
[convnext_large.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k) |86.636|98.028|224 |197.77 |34.4 |43.13 |581.43 |256 | | [convnext_base.clip_laiona_augreg_ft_in1k_384](https://huggingface.co/timm/convnext_base.clip_laiona_augreg_ft_in1k_384) |86.504|97.97 |384 |88.59 |45.21 |84.49 |368.14 |256 | | [convnext_base.clip_laion2b_augreg_ft_in12k_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k) |86.344|97.97 |256 |88.59 |20.09 |37.55 |816.14 |256 | | [convnextv2_huge.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in1k) |86.256|97.75 |224 |660.29 |115.0 |79.07 |154.72 |256 | | [convnext_small.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_small.in12k_ft_in1k_384) |86.182|97.92 |384 |50.22 |25.58 |63.37 |516.19 |256 | | [convnext_base.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in1k) |86.154|97.68 |256 |88.59 |20.09 |37.55 |819.86 |256 | | [convnext_base.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k) |85.822|97.866|224 |88.59 |15.38 |28.75 |1037.66 |256 | | [convnext_small.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k_384) |85.778|97.886|384 |50.22 |25.58 |63.37 |518.95 |256 | | [convnextv2_large.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in1k) |85.742|97.584|224 |197.96 |34.4 |43.13 |375.23 |256 | | [convnext_small.in12k_ft_in1k](https://huggingface.co/timm/convnext_small.in12k_ft_in1k) |85.174|97.506|224 |50.22 |8.71 |21.56 |1474.31 |256 | | [convnext_tiny.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k_384) |85.118|97.608|384 |28.59 |13.14 |39.48 |856.76 |256 | | [convnextv2_tiny.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k_384) |85.112|97.63 |384 |28.64 |13.14 |39.48 |491.32 |256 | | [convnextv2_base.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in1k) |84.874|97.09 |224 |88.72 |15.38 |28.75 |625.33 |256 | | [convnext_small.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k) |84.562|97.394|224 |50.22 |8.71 |21.56 |1478.29 |256 | | [convnext_large.fb_in1k](https://huggingface.co/timm/convnext_large.fb_in1k) |84.282|96.892|224 |197.77 |34.4 |43.13 |584.28 |256 | | [convnext_tiny.in12k_ft_in1k](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k) |84.186|97.124|224 |28.59 |4.47 |13.44 |2433.7 |256 | | [convnext_tiny.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k_384) |84.084|97.14 |384 |28.59 |13.14 |39.48 |862.95 |256 | | [convnextv2_tiny.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k) |83.894|96.964|224 |28.64 |4.47 |13.44 |1452.72 |256 | | [convnext_base.fb_in1k](https://huggingface.co/timm/convnext_base.fb_in1k) |83.82 |96.746|224 |88.59 |15.38 |28.75 |1054.0 |256 | | [convnextv2_nano.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k_384) |83.37 |96.742|384 |15.62 |7.22 |24.61 |801.72 |256 | | [convnext_small.fb_in1k](https://huggingface.co/timm/convnext_small.fb_in1k) |83.142|96.434|224 |50.22 |8.71 |21.56 |1464.0 |256 | | [convnextv2_tiny.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in1k) |82.92 |96.284|224 |28.64 |4.47 |13.44 |1425.62 |256 | | [convnext_tiny.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k) |82.898|96.616|224 |28.59 |4.47 |13.44 |2480.88 |256 | | 
[convnext_nano.in12k_ft_in1k](https://huggingface.co/timm/convnext_nano.in12k_ft_in1k) |82.282|96.344|224 |15.59 |2.46 |8.37 |3926.52 |256 | | [convnext_tiny_hnf.a2h_in1k](https://huggingface.co/timm/convnext_tiny_hnf.a2h_in1k) |82.216|95.852|224 |28.59 |4.47 |13.44 |2529.75 |256 | | [convnext_tiny.fb_in1k](https://huggingface.co/timm/convnext_tiny.fb_in1k) |82.066|95.854|224 |28.59 |4.47 |13.44 |2346.26 |256 | | [convnextv2_nano.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k) |82.03 |96.166|224 |15.62 |2.46 |8.37 |2300.18 |256 | | [convnextv2_nano.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in1k) |81.83 |95.738|224 |15.62 |2.46 |8.37 |2321.48 |256 | | [convnext_nano_ols.d1h_in1k](https://huggingface.co/timm/convnext_nano_ols.d1h_in1k) |80.866|95.246|224 |15.65 |2.65 |9.38 |3523.85 |256 | | [convnext_nano.d1h_in1k](https://huggingface.co/timm/convnext_nano.d1h_in1k) |80.768|95.334|224 |15.59 |2.46 |8.37 |3915.58 |256 | | [convnextv2_pico.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_pico.fcmae_ft_in1k) |80.304|95.072|224 |9.07 |1.37 |6.1 |3274.57 |256 | | [convnext_pico.d1_in1k](https://huggingface.co/timm/convnext_pico.d1_in1k) |79.526|94.558|224 |9.05 |1.37 |6.1 |5686.88 |256 | | [convnext_pico_ols.d1_in1k](https://huggingface.co/timm/convnext_pico_ols.d1_in1k) |79.522|94.692|224 |9.06 |1.43 |6.5 |5422.46 |256 | | [convnextv2_femto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_femto.fcmae_ft_in1k) |78.488|93.98 |224 |5.23 |0.79 |4.57 |4264.2 |256 | | [convnext_femto_ols.d1_in1k](https://huggingface.co/timm/convnext_femto_ols.d1_in1k) |77.86 |93.83 |224 |5.23 |0.82 |4.87 |6910.6 |256 | | [convnext_femto.d1_in1k](https://huggingface.co/timm/convnext_femto.d1_in1k) |77.454|93.68 |224 |5.22 |0.79 |4.57 |7189.92 |256 | | [convnextv2_atto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_atto.fcmae_ft_in1k) |76.664|93.044|224 |3.71 |0.55 |3.81 |4728.91 |256 | | [convnext_atto_ols.a2_in1k](https://huggingface.co/timm/convnext_atto_ols.a2_in1k) |75.88 |92.846|224 |3.7 |0.58 |4.11 |7963.16 |256 | | [convnext_atto.d2_in1k](https://huggingface.co/timm/convnext_atto.d2_in1k) |75.664|92.9 |224 |3.7 |0.55 |3.81 |8439.22 |256 | ## Citation ```bibtex @software{ilharco_gabriel_2021_5143773, author = {Ilharco, Gabriel and Wortsman, Mitchell and Wightman, Ross and Gordon, Cade and Carlini, Nicholas and Taori, Rohan and Dave, Achal and Shankar, Vaishaal and Namkoong, Hongseok and Miller, John and Hajishirzi, Hannaneh and Farhadi, Ali and Schmidt, Ludwig}, title = {OpenCLIP}, month = jul, year = 2021, note = {If you use this software, please cite it as below.}, publisher = {Zenodo}, version = {0.1}, doi = {10.5281/zenodo.5143773}, url = {https://doi.org/10.5281/zenodo.5143773} } ``` ```bibtex @inproceedings{schuhmann2022laionb, title={{LAION}-5B: An open large-scale dataset for training next generation image-text models}, author={Christoph Schuhmann and Romain Beaumont and Richard Vencu and Cade W Gordon and Ross Wightman and Mehdi Cherti and Theo Coombes and Aarush Katta and Clayton Mullis and Mitchell Wortsman and Patrick Schramowski and Srivatsa R Kundurthy and Katherine Crowson and Ludwig Schmidt and Robert Kaczmarczyk and Jenia Jitsev}, booktitle={Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2022}, url={https://openreview.net/forum?id=M3Y74vmsMcY} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = 
{GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ``` ```bibtex @inproceedings{Radford2021LearningTV, title={Learning Transferable Visual Models From Natural Language Supervision}, author={Alec Radford and Jong Wook Kim and Chris Hallacy and A. Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever}, booktitle={ICML}, year={2021} } ``` ```bibtex @article{liu2022convnet, author = {Zhuang Liu and Hanzi Mao and Chao-Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie}, title = {A ConvNet for the 2020s}, journal = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, year = {2022}, } ```
ntc-ai/SDXL-LoRA-slider.cinematic-lighting
ntc-ai
"2024-01-27T01:28:52Z"
56,812
5
diffusers
[ "diffusers", "text-to-image", "stable-diffusion-xl", "lora", "template:sd-lora", "template:sdxl-lora", "sdxl-sliders", "ntcai.xyz-sliders", "concept", "en", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:mit", "region:us" ]
text-to-image
"2024-01-27T01:28:49Z"
--- language: - en thumbnail: "images/evaluate/cinematic lighting.../cinematic lighting_17_3.0.png" widget: - text: cinematic lighting output: url: images/cinematic lighting_17_3.0.png - text: cinematic lighting output: url: images/cinematic lighting_19_3.0.png - text: cinematic lighting output: url: images/cinematic lighting_20_3.0.png - text: cinematic lighting output: url: images/cinematic lighting_21_3.0.png - text: cinematic lighting output: url: images/cinematic lighting_22_3.0.png tags: - text-to-image - stable-diffusion-xl - lora - template:sd-lora - template:sdxl-lora - sdxl-sliders - ntcai.xyz-sliders - concept - diffusers license: "mit" inference: false instance_prompt: "cinematic lighting" base_model: "stabilityai/stable-diffusion-xl-base-1.0" --- # ntcai.xyz slider - cinematic lighting (SDXL LoRA) | Strength: -3 | Strength: 0 | Strength: 3 | | --- | --- | --- | | <img src="images/cinematic lighting_17_-3.0.png" width=256 height=256 /> | <img src="images/cinematic lighting_17_0.0.png" width=256 height=256 /> | <img src="images/cinematic lighting_17_3.0.png" width=256 height=256 /> | | <img src="images/cinematic lighting_19_-3.0.png" width=256 height=256 /> | <img src="images/cinematic lighting_19_0.0.png" width=256 height=256 /> | <img src="images/cinematic lighting_19_3.0.png" width=256 height=256 /> | | <img src="images/cinematic lighting_20_-3.0.png" width=256 height=256 /> | <img src="images/cinematic lighting_20_0.0.png" width=256 height=256 /> | <img src="images/cinematic lighting_20_3.0.png" width=256 height=256 /> | ## Download Weights for this model are available in Safetensors format. ## Trigger words You can apply this LoRA with trigger words for additional effect: ``` cinematic lighting ``` ## Use in diffusers ```python from diffusers import StableDiffusionXLPipeline from diffusers import EulerAncestralDiscreteScheduler import torch pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors") pipe.to("cuda") pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config) # Load the LoRA pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.cinematic-lighting', weight_name='cinematic lighting.safetensors', adapter_name="cinematic lighting") # Activate the LoRA pipe.set_adapters(["cinematic lighting"], adapter_weights=[2.0]) prompt = "medieval rich kingpin sitting in a tavern, cinematic lighting" negative_prompt = "nsfw" width = 512 height = 512 num_inference_steps = 10 guidance_scale = 2 image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0] image.save('result.png') ``` ## Support the Patreon If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI). By joining our Patreon, you'll gain access to an ever-growing library of over 1140+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities. Your support on Patreon will allow us to continue developing and refining new models. ## Other resources - [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs - [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
ntc-ai/SDXL-LoRA-slider.anime
ntc-ai
"2024-02-06T00:29:53Z"
56,778
1
diffusers
[ "diffusers", "text-to-image", "stable-diffusion-xl", "lora", "template:sd-lora", "template:sdxl-lora", "sdxl-sliders", "ntcai.xyz-sliders", "concept", "en", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:mit", "region:us" ]
text-to-image
"2023-12-11T09:45:31Z"
--- language: - en thumbnail: "images/anime_17_3.0.png" widget: - text: anime output: url: images/anime_17_3.0.png - text: anime output: url: images/anime_19_3.0.png - text: anime output: url: images/anime_20_3.0.png - text: anime output: url: images/anime_21_3.0.png - text: anime output: url: images/anime_22_3.0.png tags: - text-to-image - stable-diffusion-xl - lora - template:sd-lora - template:sdxl-lora - sdxl-sliders - ntcai.xyz-sliders - concept - diffusers license: "mit" inference: false instance_prompt: "anime" base_model: "stabilityai/stable-diffusion-xl-base-1.0" --- # ntcai.xyz slider - anime (SDXL LoRA) | Strength: -3 | Strength: 0 | Strength: 3 | | --- | --- | --- | | <img src="images/anime_17_-3.0.png" width=256 height=256 /> | <img src="images/anime_17_0.0.png" width=256 height=256 /> | <img src="images/anime_17_3.0.png" width=256 height=256 /> | | <img src="images/anime_19_-3.0.png" width=256 height=256 /> | <img src="images/anime_19_0.0.png" width=256 height=256 /> | <img src="images/anime_19_3.0.png" width=256 height=256 /> | | <img src="images/anime_20_-3.0.png" width=256 height=256 /> | <img src="images/anime_20_0.0.png" width=256 height=256 /> | <img src="images/anime_20_3.0.png" width=256 height=256 /> | See more at [https://sliders.ntcai.xyz/sliders/app/loras/36f7a252-7e7a-4e7d-9ae0-fc31cdc48fef](https://sliders.ntcai.xyz/sliders/app/loras/36f7a252-7e7a-4e7d-9ae0-fc31cdc48fef) ## Download Weights for this model are available in Safetensors format. ## Trigger words You can apply this LoRA with trigger words for additional effect: ``` anime ``` ## Use in diffusers ```python from diffusers import StableDiffusionXLPipeline from diffusers import EulerAncestralDiscreteScheduler import torch pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors") pipe.to("cuda") pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config) # Load the LoRA pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.anime', weight_name='anime.safetensors', adapter_name="anime") # Activate the LoRA pipe.set_adapters(["anime"], adapter_weights=[2.0]) prompt = "medieval rich kingpin sitting in a tavern, anime" negative_prompt = "nsfw" width = 512 height = 512 num_inference_steps = 10 guidance_scale = 2 image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0] image.save('result.png') ``` ## Support the Patreon If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI). By joining our Patreon, you'll gain access to an ever-growing library of over 1496+ unique and diverse LoRAs along with 14600+ slider merges, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful <strong>NTC Slider Factory</strong> LoRA creator, allowing you to craft your own custom LoRAs and merges opening up endless possibilities. Your support on Patreon will allow us to continue developing new models and tools. ## Other resources - [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs - [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
prajjwal1/bert-mini
prajjwal1
"2021-10-27T18:27:38Z"
56,656
17
transformers
[ "transformers", "pytorch", "BERT", "MNLI", "NLI", "transformer", "pre-training", "en", "arxiv:1908.08962", "arxiv:2110.01518", "license:mit", "endpoints_compatible", "region:us" ]
null
"2022-03-02T23:29:05Z"
--- language: - en license: - mit tags: - BERT - MNLI - NLI - transformer - pre-training --- The following model is a Pytorch pre-trained model obtained from converting Tensorflow checkpoint found in the [official Google BERT repository](https://github.com/google-research/bert). This is one of the smaller pre-trained BERT variants, together with [bert-small](https://huggingface.co/prajjwal1/bert-small) and [bert-medium](https://huggingface.co/prajjwal1/bert-medium). They were introduced in the study `Well-Read Students Learn Better: On the Importance of Pre-training Compact Models` ([arxiv](https://arxiv.org/abs/1908.08962)), and ported to HF for the study `Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics` ([arXiv](https://arxiv.org/abs/2110.01518)). These models are supposed to be trained on a downstream task. If you use the model, please consider citing both the papers: ``` @misc{bhargava2021generalization, title={Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics}, author={Prajjwal Bhargava and Aleksandr Drozd and Anna Rogers}, year={2021}, eprint={2110.01518}, archivePrefix={arXiv}, primaryClass={cs.CL} } @article{DBLP:journals/corr/abs-1908-08962, author = {Iulia Turc and Ming{-}Wei Chang and Kenton Lee and Kristina Toutanova}, title = {Well-Read Students Learn Better: The Impact of Student Initialization on Knowledge Distillation}, journal = {CoRR}, volume = {abs/1908.08962}, year = {2019}, url = {http://arxiv.org/abs/1908.08962}, eprinttype = {arXiv}, eprint = {1908.08962}, timestamp = {Thu, 29 Aug 2019 16:32:34 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-1908-08962.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` Config of this model: `prajjwal1/bert-mini` (L=4, H=256) [Model Link](https://huggingface.co/prajjwal1/bert-mini) Other models to check out: - `prajjwal1/bert-tiny` (L=2, H=128) [Model Link](https://huggingface.co/prajjwal1/bert-tiny) - `prajjwal1/bert-small` (L=4, H=512) [Model Link](https://huggingface.co/prajjwal1/bert-small) - `prajjwal1/bert-medium` (L=8, H=512) [Model Link](https://huggingface.co/prajjwal1/bert-medium) Original Implementation and more info can be found in [this Github repository](https://github.com/prajjwal1/generalize_lm_nli). Twitter: [@prajjwal_1](https://twitter.com/prajjwal_1)
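Since the card notes these compact checkpoints are meant to be trained on a downstream task, here is a minimal, hedged sketch (not part of the original card) of loading `prajjwal1/bert-mini` with a fresh classification head; the label count and example sentences are illustrative only.

```python
# Minimal sketch: load bert-mini with an untrained classification head for fine-tuning.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("prajjwal1/bert-mini")
model = AutoModelForSequenceClassification.from_pretrained("prajjwal1/bert-mini", num_labels=3)  # e.g. MNLI's 3 labels

inputs = tokenizer("A soccer game with multiple males playing.",
                   "Some men are playing a sport.",
                   return_tensors="pt")
logits = model(**inputs).logits  # head is randomly initialized until you fine-tune (e.g. with Trainer)
```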
ntc-ai/SDXL-LoRA-slider.pixar-style
ntc-ai
"2024-02-06T00:30:20Z"
56,644
5
diffusers
[ "diffusers", "text-to-image", "stable-diffusion-xl", "lora", "template:sd-lora", "template:sdxl-lora", "sdxl-sliders", "ntcai.xyz-sliders", "concept", "en", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:mit", "region:us" ]
text-to-image
"2023-12-11T16:46:56Z"
--- language: - en thumbnail: "images/pixar-style_17_3.0.png" widget: - text: pixar-style output: url: images/pixar-style_17_3.0.png - text: pixar-style output: url: images/pixar-style_19_3.0.png - text: pixar-style output: url: images/pixar-style_20_3.0.png - text: pixar-style output: url: images/pixar-style_21_3.0.png - text: pixar-style output: url: images/pixar-style_22_3.0.png tags: - text-to-image - stable-diffusion-xl - lora - template:sd-lora - template:sdxl-lora - sdxl-sliders - ntcai.xyz-sliders - concept - diffusers license: "mit" inference: false instance_prompt: "pixar-style" base_model: "stabilityai/stable-diffusion-xl-base-1.0" --- # ntcai.xyz slider - pixar-style (SDXL LoRA) | Strength: -3 | Strength: 0 | Strength: 3 | | --- | --- | --- | | <img src="images/pixar-style_17_-3.0.png" width=256 height=256 /> | <img src="images/pixar-style_17_0.0.png" width=256 height=256 /> | <img src="images/pixar-style_17_3.0.png" width=256 height=256 /> | | <img src="images/pixar-style_19_-3.0.png" width=256 height=256 /> | <img src="images/pixar-style_19_0.0.png" width=256 height=256 /> | <img src="images/pixar-style_19_3.0.png" width=256 height=256 /> | | <img src="images/pixar-style_20_-3.0.png" width=256 height=256 /> | <img src="images/pixar-style_20_0.0.png" width=256 height=256 /> | <img src="images/pixar-style_20_3.0.png" width=256 height=256 /> | See more at [https://sliders.ntcai.xyz/sliders/app/loras/c48d077a-a9e0-4bc0-a9a8-e607835a7f1d](https://sliders.ntcai.xyz/sliders/app/loras/c48d077a-a9e0-4bc0-a9a8-e607835a7f1d) ## Download Weights for this model are available in Safetensors format. ## Trigger words You can apply this LoRA with trigger words for additional effect: ``` pixar-style ``` ## Use in diffusers ```python from diffusers import StableDiffusionXLPipeline from diffusers import EulerAncestralDiscreteScheduler import torch pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors") pipe.to("cuda") pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config) # Load the LoRA pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.pixar-style', weight_name='pixar-style.safetensors', adapter_name="pixar-style") # Activate the LoRA pipe.set_adapters(["pixar-style"], adapter_weights=[2.0]) prompt = "medieval rich kingpin sitting in a tavern, pixar-style" negative_prompt = "nsfw" width = 512 height = 512 num_inference_steps = 10 guidance_scale = 2 image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0] image.save('result.png') ``` ## Support the Patreon If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI). By joining our Patreon, you'll gain access to an ever-growing library of over 1496+ unique and diverse LoRAs along with 14602+ slider merges, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful <strong>NTC Slider Factory</strong> LoRA creator, allowing you to craft your own custom LoRAs and merges opening up endless possibilities. Your support on Patreon will allow us to continue developing new models and tools. ## Other resources - [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs - [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
ntc-ai/SDXL-LoRA-slider.raw
ntc-ai
"2024-02-06T00:34:31Z"
56,548
2
diffusers
[ "diffusers", "text-to-image", "stable-diffusion-xl", "lora", "template:sd-lora", "template:sdxl-lora", "sdxl-sliders", "ntcai.xyz-sliders", "concept", "en", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:mit", "region:us" ]
text-to-image
"2023-12-17T10:32:34Z"
--- language: - en thumbnail: "images/raw_17_3.0.png" widget: - text: raw output: url: images/raw_17_3.0.png - text: raw output: url: images/raw_19_3.0.png - text: raw output: url: images/raw_20_3.0.png - text: raw output: url: images/raw_21_3.0.png - text: raw output: url: images/raw_22_3.0.png tags: - text-to-image - stable-diffusion-xl - lora - template:sd-lora - template:sdxl-lora - sdxl-sliders - ntcai.xyz-sliders - concept - diffusers license: "mit" inference: false instance_prompt: "raw" base_model: "stabilityai/stable-diffusion-xl-base-1.0" --- # ntcai.xyz slider - raw (SDXL LoRA) | Strength: -3 | Strength: 0 | Strength: 3 | | --- | --- | --- | | <img src="images/raw_17_-3.0.png" width=256 height=256 /> | <img src="images/raw_17_0.0.png" width=256 height=256 /> | <img src="images/raw_17_3.0.png" width=256 height=256 /> | | <img src="images/raw_19_-3.0.png" width=256 height=256 /> | <img src="images/raw_19_0.0.png" width=256 height=256 /> | <img src="images/raw_19_3.0.png" width=256 height=256 /> | | <img src="images/raw_20_-3.0.png" width=256 height=256 /> | <img src="images/raw_20_0.0.png" width=256 height=256 /> | <img src="images/raw_20_3.0.png" width=256 height=256 /> | See more at [https://sliders.ntcai.xyz/sliders/app/loras/c2a33da7-6834-473d-8e52-0b4a5637fbde](https://sliders.ntcai.xyz/sliders/app/loras/c2a33da7-6834-473d-8e52-0b4a5637fbde) ## Download Weights for this model are available in Safetensors format. ## Trigger words You can apply this LoRA with trigger words for additional effect: ``` raw ``` ## Use in diffusers ```python from diffusers import StableDiffusionXLPipeline from diffusers import EulerAncestralDiscreteScheduler import torch pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors") pipe.to("cuda") pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config) # Load the LoRA pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.raw', weight_name='raw.safetensors', adapter_name="raw") # Activate the LoRA pipe.set_adapters(["raw"], adapter_weights=[2.0]) prompt = "medieval rich kingpin sitting in a tavern, raw" negative_prompt = "nsfw" width = 512 height = 512 num_inference_steps = 10 guidance_scale = 2 image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0] image.save('result.png') ``` ## Support the Patreon If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI). By joining our Patreon, you'll gain access to an ever-growing library of over 1496+ unique and diverse LoRAs along with 14602+ slider merges, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful <strong>NTC Slider Factory</strong> LoRA creator, allowing you to craft your own custom LoRAs and merges opening up endless possibilities. Your support on Patreon will allow us to continue developing new models and tools. ## Other resources - [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs - [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
naver-clova-ocr/bros-base-uncased
naver-clova-ocr
"2022-04-05T13:56:46Z"
56,503
13
transformers
[ "transformers", "pytorch", "bros", "feature-extraction", "arxiv:2108.04539", "endpoints_compatible", "region:us" ]
feature-extraction
"2022-03-02T23:29:05Z"
# BROS GitHub: https://github.com/clovaai/bros ## Introduction BROS (BERT Relying On Spatiality) is a pre-trained language model focusing on text and layout for better key information extraction from documents.<br> Given the OCR results of the document image, which are text and bounding box pairs, it can perform various key information extraction tasks, such as extracting an ordered item list from receipts.<br> For more details, please refer to our paper: BROS: A Pre-trained Language Model Focusing on Text and Layout for Better Key Information Extraction from Documents<br> Teakgyu Hong, Donghyun Kim, Mingi Ji, Wonseok Hwang, Daehyun Nam, Sungrae Park<br> AAAI 2022 - Main Technical Track [[arXiv]](https://arxiv.org/abs/2108.04539) ## Pre-trained models | name | # params | Hugging Face - Models | |---------------------|---------:|-------------------------------------------------------------------------------------------------| | bros-base-uncased (**this**) | < 110M | [naver-clova-ocr/bros-base-uncased](https://huggingface.co/naver-clova-ocr/bros-base-uncased) | | bros-large-uncased | < 340M | [naver-clova-ocr/bros-large-uncased](https://huggingface.co/naver-clova-ocr/bros-large-uncased) |
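For orientation only, here is a rough sketch of loading this checkpoint with a recent `transformers` release that ships native BROS support (v4.34+). The dummy words and normalized bounding boxes are placeholders, and the exact bbox format expected can differ by version, so check the BROS documentation before relying on it.

```python
# Rough sketch (assumes transformers >= 4.34 with BrosProcessor/BrosModel available).
import torch
from transformers import BrosProcessor, BrosModel

processor = BrosProcessor.from_pretrained("naver-clova-ocr/bros-base-uncased")
model = BrosModel.from_pretrained("naver-clova-ocr/bros-base-uncased")

# OCR output = (text, bounding box) pairs; boxes are normalized to [0, 1].
encoding = processor("TOTAL 12.00 USD", return_tensors="pt")
# Placeholder boxes, one per token; depending on the version, boxes are given as
# (x0, y0, x1, y1) or as the 8 corner coordinates used in the BROS paper.
bbox = torch.rand(1, encoding["input_ids"].shape[-1], 4)

outputs = model(**encoding, bbox=bbox)
features = outputs.last_hidden_state  # token-level features for downstream KIE heads
```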
microsoft/Phi-3-mini-4k-instruct-gguf
microsoft
"2024-07-02T19:36:04Z"
56,415
396
null
[ "gguf", "nlp", "code", "text-generation", "en", "license:mit", "region:us" ]
text-generation
"2024-04-22T17:02:08Z"
--- license: mit license_link: >- https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-gguf/resolve/main/LICENSE language: - en pipeline_tag: text-generation tags: - nlp - code --- ## Model Summary This repo provides the GGUF format for the Phi-3-Mini-4K-Instruct. The Phi-3-Mini-4K-Instruct is a 3.8B-parameter, lightweight, state-of-the-art open model trained with the Phi-3 datasets, which include both synthetic data and filtered publicly available website data with a focus on high-quality and reasoning-dense properties. The model belongs to the Phi-3 family, with the Mini version available in two variants, [4K](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct), which denote the context length (in tokens) each can support. The model has undergone a post-training process that incorporates both supervised fine-tuning and direct preference optimization to ensure precise instruction adherence and robust safety measures. When assessed against benchmarks testing common sense, language understanding, math, code, long context and logical reasoning, Phi-3 Mini-4K-Instruct showcased robust, state-of-the-art performance among models with less than 13 billion parameters. Resources and Technical Documentation: + [Phi-3 Microsoft Blog](https://aka.ms/phi3blog-april) + [Phi-3 Technical Report](https://aka.ms/phi3-tech-report) + [Phi-3 on Azure AI Studio](https://aka.ms/phi3-azure-ai) + [Phi-3 on Hugging Face](https://aka.ms/phi3-hf) + Phi-3 ONNX: [4K](https://aka.ms/phi3-mini-4k-instruct-onnx) and [128K](https://aka.ms/phi3-mini-128k-instruct-onnx) This repo provides GGUF files for the Phi-3 Mini-4K-Instruct model. | Name | Quant method | Bits | Size | Use case | | ---- | ---- | ---- | ---- | ----- | | [Phi-3-mini-4k-instruct-q4.gguf](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-gguf/blob/main/Phi-3-mini-4k-instruct-q4.gguf) | Q4_K_M | 4 | 2.2 GB | medium, balanced quality - recommended | | [Phi-3-mini-4k-instruct-fp16.gguf](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-gguf/blob/main/Phi-3-mini-4k-instruct-fp16.gguf) | None | 16 | 7.2 GB | minimal quality loss | ## Intended Uses **Primary use cases** The model is intended for commercial and research use in English. The model is suited to applications that require 1) memory/compute constrained environments 2) latency bound scenarios 3) strong reasoning (especially math and logic) 4) long context Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features. **Use case considerations** Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case. Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under. ## Chat Format Given the nature of the training data, the Phi-3-Mini-4K-Instruct model is best suited for prompts using the chat format as follows.

You can provide the prompt as a question with a generic template as follow: ```markdown <|user|>\nQuestion <|end|>\n<|assistant|> ``` For example: ```markdown <|user|> How to explain Internet for a medieval knight?<|end|> <|assistant|> ``` where the model generates the text after "<|assistant|>" . In case of few-shots prompt, the prompt can be formatted as the following: ```markdown <|user|> I am going to Paris, what should I see?<|end|> <|assistant|> Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris:\n\n1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city.\n2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa.\n3. Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows.\n\nThese are just a few of the many attractions that Paris has to offer. With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world."<|end|> <|user|> What is so great about #1?<|end|> <|assistant|> ``` ## How to download GGUF files 1. **Install Hugging Face CLI:** ``` pip install huggingface-hub>=0.17.1 ``` 2. **Login to Hugging Face:** ``` huggingface-cli login ``` 3. **Download the GGUF model:** ``` huggingface-cli download microsoft/Phi-3-mini-4k-instruct-gguf Phi-3-mini-4k-instruct-q4.gguf --local-dir . --local-dir-use-symlinks False ``` ## How to use with Ollama 1. **Install Ollama:** ``` curl -fsSL https://ollama.com/install.sh | sh ``` 2. **Run the *phi3* model:** ``` ollama run phi3 ``` ### Building from `Modelfile` Assuming that you have already downloaded GGUF files, here is how you can use them with [Ollama](https://ollama.com/): 1. **Get the Modelfile:** ``` huggingface-cli download microsoft/Phi-3-mini-4k-instruct-gguf Modelfile_q4 --local-dir /path/to/your/local/dir ``` 2. Build the Ollama Model: Use the Ollama CLI to create your model with the following command: ``` ollama create phi3 -f Modelfile_q4 ``` 3. **Run the *phi3* model:** Now you can run the Phi-3-Mini-4k-Instruct model with Ollama using the following command: ``` ollama run phi3 "Your prompt here" ``` Replace "Your prompt here" with the actual prompt you want to use for generating responses from the model. ## How to use with Llamafile: Assuming that you already have GGUF files downloaded. Here is how you can use the GGUF model with [Llamafile](https://github.com/Mozilla-Ocho/llamafile): 1. **Download Llamafile-0.7.3** ``` wget https://github.com/Mozilla-Ocho/llamafile/releases/download/0.7.3/llamafile-0.7.3 ``` 2. **Run the model with chat format prompt:** ```markdown <|user|>\nHow to explain Internet for a medieval knight?\n<|end|>\n<|assistant|> ``` ``` ./llamafile-0.7.3 -ngl 9999 -m Phi-3-mini-4k-instruct-q4.gguf --temp 0.6 -p "<|user|>\nHow to explain Internet for a medieval knight?\n<|end|>\n<|assistant|>" ``` 3. **Run with a chat interface:** ``` ./llamafile-0.7.3 -ngl 9999 -m Phi-3-mini-4k-instruct-q4.gguf ``` Your browser should open automatically and display a chat interface. (If it doesn't, just open your browser and point it at http://localhost:8080) ## How to run on Python: 1. **Install llama-cpp-python:** ``` ! 
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python ``` 2. **Run the model:** ```python from llama_cpp import Llama llm = Llama( model_path="./Phi-3-mini-4k-instruct-q4.gguf", # path to GGUF file n_ctx=4096, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35, # The number of layers to offload to GPU, if you have GPU acceleration available. Set to 0 if no GPU acceleration is available on your system. ) prompt = "How to explain Internet to a medieval knight?" # Simple inference example output = llm( f"<|user|>\n{prompt}<|end|>\n<|assistant|>", max_tokens=256, # Generate up to 256 tokens stop=["<|end|>"], echo=True, # Whether to echo the prompt ) print(output['choices'][0]['text']) ``` ## Responsible AI Considerations Like other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include: + Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English. + Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases. + Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case. + Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated. + Limited Scope for Code: Majority of Phi-3 training data is based in Python and use common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses. Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include: + Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques. + High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context. + Misinformation: Models may produce inaccurate information. 
Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG). + Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case. + Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations. ## Training ### Model * Architecture: Phi-3 Mini has 3.8B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with Supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidelines. * Inputs: Text. It is best suited for prompts using chat format. * Context length: 4K tokens * GPUs: 512 H100-80G * Training time: 7 days * Training data: 3.3T tokens * Outputs: Generated text in response to the input * Dates: Our models were trained between February and April 2024 * Status: This is a static model trained on an offline dataset with a cutoff date of October 2023. Future versions of the tuned models may be released as we improve models. ### Datasets Our training data includes a wide variety of sources, totaling 3.3 trillion tokens, and is a combination of 1) publicly available documents filtered rigorously for quality, selected high-quality educational data, and code; 2) newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.); 3) high-quality chat-format supervised data covering various topics to reflect human preferences on different aspects such as instruction-following, truthfulness, honesty and helpfulness. ### Software * [PyTorch](https://github.com/pytorch/pytorch) * [DeepSpeed](https://github.com/microsoft/DeepSpeed) * [Transformers](https://github.com/huggingface/transformers) * [Flash-Attention](https://github.com/HazyResearch/flash-attention) ### License The model is licensed under the [MIT license](https://huggingface.co/microsoft/phi-3-mini-128k/resolve/main/LICENSE). ## Trademarks This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties’ policies.
mradermacher/Swallow-70b-hf-GGUF
mradermacher
"2024-07-01T06:27:40Z"
56,329
0
transformers
[ "transformers", "gguf", "en", "ja", "base_model:tokyotech-llm/Swallow-70b-hf", "license:llama2", "endpoints_compatible", "region:us" ]
null
"2024-06-30T06:18:55Z"
--- base_model: tokyotech-llm/Swallow-70b-hf language: - en - ja library_name: transformers license: llama2 model_type: llama quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/tokyotech-llm/Swallow-70b-hf <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Swallow-70b-hf-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-hf-GGUF/resolve/main/Swallow-70b-hf.Q2_K.gguf) | Q2_K | 25.7 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-hf-GGUF/resolve/main/Swallow-70b-hf.IQ3_XS.gguf) | IQ3_XS | 28.5 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-hf-GGUF/resolve/main/Swallow-70b-hf.IQ3_S.gguf) | IQ3_S | 30.1 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-hf-GGUF/resolve/main/Swallow-70b-hf.Q3_K_S.gguf) | Q3_K_S | 30.1 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-hf-GGUF/resolve/main/Swallow-70b-hf.IQ3_M.gguf) | IQ3_M | 31.2 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-hf-GGUF/resolve/main/Swallow-70b-hf.Q3_K_M.gguf) | Q3_K_M | 33.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-hf-GGUF/resolve/main/Swallow-70b-hf.Q3_K_L.gguf) | Q3_K_L | 36.4 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-hf-GGUF/resolve/main/Swallow-70b-hf.IQ4_XS.gguf) | IQ4_XS | 37.4 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-hf-GGUF/resolve/main/Swallow-70b-hf.Q4_K_S.gguf) | Q4_K_S | 39.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-hf-GGUF/resolve/main/Swallow-70b-hf.Q4_K_M.gguf) | Q4_K_M | 41.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-hf-GGUF/resolve/main/Swallow-70b-hf.Q5_K_S.gguf) | Q5_K_S | 47.7 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-hf-GGUF/resolve/main/Swallow-70b-hf.Q5_K_M.gguf) | Q5_K_M | 49.0 | | | [PART 1](https://huggingface.co/mradermacher/Swallow-70b-hf-GGUF/resolve/main/Swallow-70b-hf.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Swallow-70b-hf-GGUF/resolve/main/Swallow-70b-hf.Q6_K.gguf.part2of2) | Q6_K | 56.8 | very good quality | | [PART 1](https://huggingface.co/mradermacher/Swallow-70b-hf-GGUF/resolve/main/Swallow-70b-hf.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Swallow-70b-hf-GGUF/resolve/main/Swallow-70b-hf.Q8_0.gguf.part2of2) | Q8_0 | 73.6 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
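As a concrete illustration of the usage notes above (a hedged sketch, not part of the original card): the two-part Q6_K and Q8_0 downloads must first be concatenated into a single `.gguf` file (for example with `cat part1of2 part2of2 > file.gguf`), after which any llama.cpp-based runtime can load the quant. The snippet below uses the `llama-cpp-python` bindings; the file name, context size and GPU-offload setting are illustrative.

```python
# Hedged sketch: run a single-file quant with llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="./Swallow-70b-hf.Q4_K_M.gguf",  # any of the single-file quants from the table above
    n_ctx=4096,        # context window to allocate
    n_gpu_layers=-1,   # offload all layers if a GPU build is installed; set 0 for CPU-only
)

out = llm("質問: 日本で一番高い山は何ですか?\n回答:", max_tokens=64)
print(out["choices"][0]["text"])
```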
Lykon/dreamshaper-xl-lightning
Lykon
"2024-03-01T11:46:18Z"
56,313
43
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "stable-diffusion-xl", "stable-diffusion-xl-lightning", "text-to-image", "art", "artistic", "anime", "dreamshaper", "turbo", "lcm", "en", "license:openrail++", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
"2024-02-21T20:51:01Z"
--- language: - en license: openrail++ tags: - stable-diffusion - stable-diffusion-diffusers - stable-diffusion-xl - stable-diffusion-xl-lightning - text-to-image - art - artistic - diffusers - anime - dreamshaper - turbo - lcm duplicated_from: lykon/dreamshaper-xl-lightning --- # Dreamshaper XL Lightning `lykon/dreamshaper-xl-lightning` is a Stable Diffusion model that has been fine-tuned on [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0). Please consider supporting me: - on [Patreon](https://www.patreon.com/Lykon275) - or [buy me a coffee](https://snipfeed.co/lykon) ## Diffusers For more general information on how to run text-to-image models with 🧨 Diffusers, see [the docs](https://huggingface.co/docs/diffusers/using-diffusers/conditional_image_generation). 1. Installation ``` pip install diffusers transformers accelerate ``` 2. Run ```py from diffusers import AutoPipelineForText2Image, DPMSolverMultistepScheduler import torch pipe = AutoPipelineForText2Image.from_pretrained('lykon/dreamshaper-xl-lightning', torch_dtype=torch.float16, variant="fp16") pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) pipe = pipe.to("cuda") prompt = "portrait photo of muscular bearded guy in a worn mech suit, light bokeh, intricate, steel metal, elegant, sharp focus, soft lighting, vibrant colors" generator = torch.manual_seed(0) image = pipe(prompt, num_inference_steps=4, guidance_scale=2).images[0] image.save("./image.png") ```
playgroundai/playground-v2.5-1024px-aesthetic
playgroundai
"2024-03-15T00:00:20Z"
56,122
604
diffusers
[ "diffusers", "safetensors", "text-to-image", "playground", "arxiv:2206.00364", "arxiv:2402.17245", "license:other", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
"2024-02-16T18:46:17Z"
--- license: other license_name: playground-v2dot5-community license_link: https://huggingface.co/playgroundai/playground-v2.5-1024px-aesthetic/blob/main/LICENSE.md tags: - text-to-image - playground inference: parameters: guidance_scale: 3.0 --- # Playground v2.5 – 1024px Aesthetic Model This repository contains a model that generates highly aesthetic images of resolution 1024x1024, as well as portrait and landscape aspect ratios. You can use the model with Hugging Face 🧨 Diffusers. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/636c0c4eaae2da3c76b8a9a3/HYUUGfU6SOCHsvyeISQ5Y.png) **Playground v2.5** is a diffusion-based text-to-image generative model, and a successor to [Playground v2](https://huggingface.co/playgroundai/playground-v2-1024px-aesthetic). Playground v2.5 is the state-of-the-art open-source model in aesthetic quality. Our user studies demonstrate that our model outperforms SDXL, Playground v2, PixArt-α, DALL-E 3, and Midjourney 5.2. For details on the development and training of our model, please refer to our [blog post](https://blog.playgroundai.com/playground-v2-5/) and [technical report](https://marketing-cdn.playground.com/research/pgv2.5_compressed.pdf). ### Model Description - **Developed by:** [Playground](https://playground.com) - **Model type:** Diffusion-based text-to-image generative model - **License:** [Playground v2.5 Community License](https://huggingface.co/playgroundai/playground-v2.5-1024px-aesthetic/blob/main/LICENSE.md) - **Summary:** This model generates images based on text prompts. It is a Latent Diffusion Model that uses two fixed, pre-trained text encoders (OpenCLIP-ViT/G and CLIP-ViT/L). It follows the same architecture as [Stable Diffusion XL](https://huggingface.co/docs/diffusers/en/using-diffusers/sdxl). ### Using the model with 🧨 Diffusers Install diffusers >= 0.27.0 and the relevant dependencies. ``` pip install diffusers>=0.27.0 pip install transformers accelerate safetensors ``` **Notes:** - The pipeline uses the `EDMDPMSolverMultistepScheduler` scheduler by default, for crisper fine details. It's an [EDM formulation](https://arxiv.org/abs/2206.00364) of the DPM++ 2M Karras scheduler. `guidance_scale=3.0` is a good default for this scheduler. - The pipeline also supports the `EDMEulerScheduler` scheduler. It's an [EDM formulation](https://arxiv.org/abs/2206.00364) of the Euler scheduler. `guidance_scale=5.0` is a good default for this scheduler. Then, run the following snippet: ```python from diffusers import DiffusionPipeline import torch pipe = DiffusionPipeline.from_pretrained( "playgroundai/playground-v2.5-1024px-aesthetic", torch_dtype=torch.float16, variant="fp16", ).to("cuda") # # Optional: Use DPM++ 2M Karras scheduler for crisper fine details # from diffusers import EDMDPMSolverMultistepScheduler # pipe.scheduler = EDMDPMSolverMultistepScheduler() prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" image = pipe(prompt=prompt, num_inference_steps=50, guidance_scale=3).images[0] ``` ### Using the model with Automatic1111/ComfyUI Support coming soon. We will update this model card with instructions when ready. ### User Studies This model card only provides a brief summary of our user study results. For extensive details on how we perform user studies, please check out our [technical report](https://marketing-cdn.playground.com/research/pgv2.5_compressed.pdf). 
We conducted studies to measure overall aesthetic quality, as well as for the specific areas we aimed to improve with Playground v2.5, namely multi aspect ratios and human preference alignment. #### Comparison to State-of-the-Art ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63855d851769b7c4b10e1f76/V7LFNzgoQJnL__ndU0CnE.png) The aesthetic quality of Playground v2.5 dramatically outperforms the current state-of-the-art open source models SDXL and PIXART-α, as well as Playground v2. Because the performance differential between Playground V2.5 and SDXL was so large, we also tested our aesthetic quality against world-class closed-source models like DALL-E 3 and Midjourney 5.2, and found that Playground v2.5 outperforms them as well. #### Multi Aspect Ratios ![image/png](https://cdn-uploads.huggingface.co/production/uploads/636c0c4eaae2da3c76b8a9a3/xMB0r-CmR3N6dABFlcV71.png) Similarly, for multi aspect ratios, we outperform SDXL by a large margin. #### Human Preference Alignment on People-related images ![image/png](https://cdn-uploads.huggingface.co/production/uploads/636c0c4eaae2da3c76b8a9a3/7c-8Stw52OsNtUjse8Slv.png) Next, we benchmark Playground v2.5 specifically on people-related images, to test Human Preference Alignment. We compared Playground v2.5 against two commonly-used baseline models: SDXL and RealStock v2, a community fine-tune of SDXL that was trained on a realistic people dataset. Playground v2.5 outperforms both baselines by a large margin. ### MJHQ-30K Benchmark ![image/png](https://cdn-uploads.huggingface.co/production/uploads/636c0c4eaae2da3c76b8a9a3/7tyYDPGUtokh-k18XDSte.png) | Model | Overall FID | | ------------------------------------- | ----- | | SDXL-1-0-refiner | 9.55 | | [playground-v2-1024px-aesthetic](https://huggingface.co/playgroundai/playground-v2-1024px-aesthetic) | 7.07 | | [playground-v2.5-1024px-aesthetic](https://huggingface.co/playgroundai/playground-v2.5-1024px-aesthetic) | **4.48** | Lastly, we report metrics using our MJHQ-30K benchmark which we [open-sourced](https://huggingface.co/datasets/playgroundai/MJHQ-30K) with the v2 release. We report both the overall FID and per category FID. All FID metrics are computed at resolution 1024x1024. Our results show that Playground v2.5 outperforms both Playground v2 and SDXL in overall FID and all category FIDs, especially in the people and fashion categories. This is in line with the results of the user study, which indicates a correlation between human preferences and the FID score of the MJHQ-30K benchmark. ### How to cite us ``` @misc{li2024playground, title={Playground v2.5: Three Insights towards Enhancing Aesthetic Quality in Text-to-Image Generation}, author={Daiqing Li and Aleks Kamko and Ehsan Akhgari and Ali Sabet and Linmiao Xu and Suhail Doshi}, year={2024}, eprint={2402.17245}, archivePrefix={arXiv}, primaryClass={cs.CV} } ```
THUDM/glm-4-9b-chat
THUDM
"2024-06-26T03:13:49Z"
56,107
431
transformers
[ "transformers", "safetensors", "chatglm", "glm", "thudm", "custom_code", "zh", "en", "arxiv:2406.12793", "license:other", "region:us" ]
null
"2024-06-04T09:01:42Z"
--- license: other license_name: glm-4 license_link: https://huggingface.co/THUDM/glm-4-9b-chat/blob/main/LICENSE language: - zh - en tags: - glm - chatglm - thudm inference: false --- # GLM-4-9B-Chat Read this in [English](README_en.md). GLM-4-9B 是智谱 AI 推出的最新一代预训练模型 GLM-4 系列中的开源版本。 在语义、数学、推理、代码和知识等多方面的数据集测评中,GLM-4-9B 及其人类偏好对齐的版本 GLM-4-9B-Chat 均表现出较高的性能。 除了能进行多轮对话,GLM-4-9B-Chat 还具备网页浏览、代码执行、自定义工具调用(Function Call)和长文本推理(支持最大 128K 上下文)等高级功能。 本代模型增加了多语言支持,支持包括日语,韩语,德语在内的 26 种语言。我们还推出了支持 1M 上下文长度(约 200 万中文字符)的模型。 ## 评测结果 我们在一些经典任务上对 GLM-4-9B-Chat 模型进行了评测,并得到了如下的结果: | Model | AlignBench-v2 | MT-Bench | IFEval | MMLU | C-Eval | GSM8K | MATH | HumanEval | NCB | |:--------------------|:-------------:|:--------:|:------:|:----:|:------:|:-----:|:----:|:---------:|:----:| | Llama-3-8B-Instruct | 5.12 | 8.00 | 68.58 | 68.4 | 51.3 | 79.6 | 30.0 | 62.2 | 24.7 | | ChatGLM3-6B | 3.97 | 5.50 | 28.1 | 66.4 | 69.0 | 72.3 | 25.7 | 58.5 | 11.3 | | GLM-4-9B-Chat | 6.61 | 8.35 | 69.0 | 72.4 | 75.6 | 79.6 | 50.6 | 71.8 | 32.2 | ### 长文本 在 1M 的上下文长度下进行[大海捞针实验](https://github.com/LargeWorldModel/LWM/blob/main/scripts/eval_needle.py),结果如下: ![needle](https://raw.githubusercontent.com/THUDM/GLM-4/main/resources/eval_needle.jpeg) 在 LongBench-Chat 上对长文本能力进行了进一步评测,结果如下: ![leaderboard](https://raw.githubusercontent.com/THUDM/GLM-4/main/resources/longbench.png) ### 多语言能力 在六个多语言数据集上对 GLM-4-9B-Chat 和 Llama-3-8B-Instruct 进行了测试,测试结果及数据集对应选取语言如下表 | Dataset | Llama-3-8B-Instruct | GLM-4-9B-Chat | Languages |:------------|:-------------------:|:-------------:|:----------------------------------------------------------------------------------------------:| | M-MMLU | 49.6 | 56.6 | all | FLORES | 25.0 | 28.8 | ru, es, de, fr, it, pt, pl, ja, nl, ar, tr, cs, vi, fa, hu, el, ro, sv, uk, fi, ko, da, bg, no | MGSM | 54.0 | 65.3 | zh, en, bn, de, es, fr, ja, ru, sw, te, th | XWinograd | 61.7 | 73.1 | zh, en, fr, jp, ru, pt | XStoryCloze | 84.7 | 90.7 | zh, en, ar, es, eu, hi, id, my, ru, sw, te | XCOPA | 73.3 | 80.1 | zh, et, ht, id, it, qu, sw, ta, th, tr, vi ### 工具调用能力 我们在 [Berkeley Function Calling Leaderboard](https://github.com/ShishirPatil/gorilla/tree/main/berkeley-function-call-leaderboard)上进行了测试并得到了以下结果: | Model | Overall Acc. 
| AST Summary | Exec Summary | Relevance | |:-----------------------|:------------:|:-----------:|:------------:|:---------:| | Llama-3-8B-Instruct | 58.88 | 59.25 | 70.01 | 45.83 | | gpt-4-turbo-2024-04-09 | 81.24 | 82.14 | 78.61 | 88.75 | | ChatGLM3-6B | 57.88 | 62.18 | 69.78 | 5.42 | | GLM-4-9B-Chat | 81.00 | 80.26 | 84.40 | 87.92 | **本仓库是 GLM-4-9B-Chat 的模型仓库,支持`128K`上下文长度。** ## 运行模型 更多推理代码和依赖信息,请访问我们的 [github](https://github.com/THUDM/GLM-4) 。 ### 使用 transformers 后端进行推理: ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer device = "cuda" tokenizer = AutoTokenizer.from_pretrained("THUDM/glm-4-9b-chat",trust_remote_code=True) query = "你好" inputs = tokenizer.apply_chat_template([{"role": "user", "content": query}], add_generation_prompt=True, tokenize=True, return_tensors="pt", return_dict=True ) inputs = inputs.to(device) model = AutoModelForCausalLM.from_pretrained( "THUDM/glm-4-9b-chat", torch_dtype=torch.bfloat16, low_cpu_mem_usage=True, trust_remote_code=True ).to(device).eval() gen_kwargs = {"max_length": 2500, "do_sample": True, "top_k": 1} with torch.no_grad(): outputs = model.generate(**inputs, **gen_kwargs) outputs = outputs[:, inputs['input_ids'].shape[1]:] print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` 使用 vLLM后端进行推理: ```python from transformers import AutoTokenizer from vllm import LLM, SamplingParams # GLM-4-9B-Chat-1M # max_model_len, tp_size = 1048576, 4 # GLM-4-9B-Chat # 如果遇见 OOM 现象,建议减少max_model_len,或者增加tp_size max_model_len, tp_size = 131072, 1 model_name = "THUDM/glm-4-9b-chat" prompt = [{"role": "user", "content": "你好"}] tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True) llm = LLM( model=model_name, tensor_parallel_size=tp_size, max_model_len=max_model_len, trust_remote_code=True, enforce_eager=True, # GLM-4-9B-Chat-1M 如果遇见 OOM 现象,建议开启下述参数 # enable_chunked_prefill=True, # max_num_batched_tokens=8192 ) stop_token_ids = [151329, 151336, 151338] sampling_params = SamplingParams(temperature=0.95, max_tokens=1024, stop_token_ids=stop_token_ids) inputs = tokenizer.apply_chat_template(prompt, tokenize=False, add_generation_prompt=True) outputs = llm.generate(prompts=inputs, sampling_params=sampling_params) print(outputs[0].outputs[0].text) ``` ## 协议 GLM-4 模型的权重的使用则需要遵循 [LICENSE](LICENSE)。 ## 引用 如果你觉得我们的工作有帮助的话,请考虑引用下列论文。 ``` @misc{glm2024chatglm, title={ChatGLM: A Family of Large Language Models from GLM-130B to GLM-4 All Tools}, author={Team GLM and Aohan Zeng and Bin Xu and Bowen Wang and Chenhui Zhang and Da Yin and Diego Rojas and Guanyu Feng and Hanlin Zhao and Hanyu Lai and Hao Yu and Hongning Wang and Jiadai Sun and Jiajie Zhang and Jiale Cheng and Jiayi Gui and Jie Tang and Jing Zhang and Juanzi Li and Lei Zhao and Lindong Wu and Lucen Zhong and Mingdao Liu and Minlie Huang and Peng Zhang and Qinkai Zheng and Rui Lu and Shuaiqi Duan and Shudan Zhang and Shulin Cao and Shuxun Yang and Weng Lam Tam and Wenyi Zhao and Xiao Liu and Xiao Xia and Xiaohan Zhang and Xiaotao Gu and Xin Lv and Xinghan Liu and Xinyi Liu and Xinyue Yang and Xixuan Song and Xunkai Zhang and Yifan An and Yifan Xu and Yilin Niu and Yuantao Yang and Yueyan Li and Yushi Bai and Yuxiao Dong and Zehan Qi and Zhaoyu Wang and Zhen Yang and Zhengxiao Du and Zhenyu Hou and Zihan Wang}, year={2024}, eprint={2406.12793}, archivePrefix={arXiv}, primaryClass={id='cs.CL' full_name='Computation and Language' is_active=True alt_name='cmp-lg' in_archive='cs' is_general=False description='Covers natural language processing. 
Roughly includes material in ACM Subject Class I.2.7. Note that work on artificial languages (programming languages, logics, formal systems) that does not explicitly address natural-language issues broadly construed (natural-language processing, computational linguistics, speech, text retrieval, etc.) is not appropriate for this area.'} } ```
timm/efficientnetv2_rw_m.agc_in1k
timm
"2023-04-27T21:12:50Z"
56,032
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2110.00476", "arxiv:2104.00298", "license:apache-2.0", "region:us" ]
image-classification
"2022-12-12T23:58:32Z"
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-1k --- # Model card for efficientnetv2_rw_m.agc_in1k An EfficientNet-v2 image classification model. This is a `timm` specific variation of the architecture. Trained on ImageNet-1k in `timm` using the recipe template described below. Recipe details: * Based on [ResNet Strikes Back](https://arxiv.org/abs/2110.00476) `C` recipes * SGD (w/ Nesterov) optimizer and AGC (adaptive gradient clipping). * Cosine LR schedule with warmup ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 53.2 - GMACs: 12.7 - Activations (M): 47.1 - Image size: train = 320 x 320, test = 416 x 416 - **Papers:** - EfficientNetV2: Smaller Models and Faster Training: https://arxiv.org/abs/2104.00298 - ResNet strikes back: An improved training procedure in timm: https://arxiv.org/abs/2110.00476 - **Dataset:** ImageNet-1k - **Original:** https://github.com/huggingface/pytorch-image-models ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm import torch img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('efficientnetv2_rw_m.agc_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'efficientnetv2_rw_m.agc_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 32, 160, 160]) # torch.Size([1, 56, 80, 80]) # torch.Size([1, 80, 40, 40]) # torch.Size([1, 192, 20, 20]) # torch.Size([1, 328, 10, 10]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'efficientnetv2_rw_m.agc_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 2152, 10, 10) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). ## Citation ```bibtex @inproceedings{tan2021efficientnetv2, title={Efficientnetv2: Smaller models and faster training}, author={Tan, Mingxing and Le, Quoc}, booktitle={International conference on machine learning}, pages={10096--10106}, year={2021}, organization={PMLR} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ``` ```bibtex @inproceedings{wightman2021resnet, title={ResNet strikes back: An improved training procedure in timm}, author={Wightman, Ross and Touvron, Hugo and Jegou, Herve}, booktitle={NeurIPS 2021 Workshop on ImageNet: Past, Present, and Future} } ```
hpcai-tech/OpenSora-VAE-v1.2
hpcai-tech
"2024-06-17T07:02:33Z"
55,844
37
transformers
[ "transformers", "safetensors", "VideoAutoencoderPipeline", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-17T03:43:50Z"
--- license: apache-2.0 --- <p align="center"> <img src="https://cdn-uploads.huggingface.co/production/uploads/63993d721fad4d6eb265d999/UXleJWJExX2WlBizxzYxn.png" width="250"/> </p> # Open-Sora VAE-v1.2 Weights This repository stores the weights of the VAE released by the Open-Sora team. You can visit our project at: - [GitHub](https://github.com/hpcaitech/Open-Sora) - [Gallery](https://hpcaitech.github.io/Open-Sora/) - [Gradio Demo](https://huggingface.co/spaces/hpcai-tech/open-sora) The weights are released together with Open-Sora v1.2. We recommend using these weights with the [Open-Sora codebase](https://github.com/hpcaitech/Open-Sora). If you want to use the VAE in your own project, you may use the following sample code. 1. Install `opensora` ```bash pip install git+https://github.com/hpcaitech/Open-Sora.git ``` 2. Use the `VideoAutoencoderPipeline` VAE in your own code ```python from opensora.models.vae.vae import VideoAutoencoderPipeline vae = VideoAutoencoderPipeline.from_pretrained("hpcai-tech/OpenSora-VAE-v1.2") ```
deepset/tinyroberta-squad2
deepset
"2023-09-27T11:51:22Z"
55,841
87
transformers
[ "transformers", "pytorch", "safetensors", "roberta", "question-answering", "en", "dataset:squad_v2", "arxiv:1909.10351", "license:cc-by-4.0", "model-index", "endpoints_compatible", "region:us" ]
question-answering
"2022-03-02T23:29:05Z"
--- language: en license: cc-by-4.0 datasets: - squad_v2 model-index: - name: deepset/tinyroberta-squad2 results: - task: type: question-answering name: Question Answering dataset: name: squad_v2 type: squad_v2 config: squad_v2 split: validation metrics: - type: exact_match value: 78.8627 name: Exact Match verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDNlZDU4ODAxMzY5NGFiMTMyZmQ1M2ZhZjMyODA1NmFlOGMxNzYxNTA4OGE5YTBkZWViZjBkNGQ2ZmMxZjVlMCIsInZlcnNpb24iOjF9.Wgu599r6TvgMLTrHlLMVAbUtKD_3b70iJ5QSeDQ-bRfUsVk6Sz9OsJCp47riHJVlmSYzcDj_z_3jTcUjCFFXBg - type: f1 value: 82.0355 name: F1 verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOTFkMzEzMWNiZDRhMGZlODhkYzcwZTZiMDFjZDg2YjllZmUzYWM5NTgwNGQ2NGYyMDk2ZGQwN2JmMTE5NTc3YiIsInZlcnNpb24iOjF9.ChgaYpuRHd5WeDFjtiAHUyczxtoOD_M5WR8834jtbf7wXhdGOnZKdZ1KclmhoI5NuAGc1NptX-G0zQ5FTHEcBA - task: type: question-answering name: Question Answering dataset: name: squad type: squad config: plain_text split: validation metrics: - type: exact_match value: 83.860 name: Exact Match - type: f1 value: 90.752 name: F1 - task: type: question-answering name: Question Answering dataset: name: adversarial_qa type: adversarial_qa config: adversarialQA split: validation metrics: - type: exact_match value: 25.967 name: Exact Match - type: f1 value: 37.006 name: F1 - task: type: question-answering name: Question Answering dataset: name: squad_adversarial type: squad_adversarial config: AddOneSent split: validation metrics: - type: exact_match value: 76.329 name: Exact Match - type: f1 value: 83.292 name: F1 - task: type: question-answering name: Question Answering dataset: name: squadshifts amazon type: squadshifts config: amazon split: test metrics: - type: exact_match value: 63.915 name: Exact Match - type: f1 value: 78.395 name: F1 - task: type: question-answering name: Question Answering dataset: name: squadshifts new_wiki type: squadshifts config: new_wiki split: test metrics: - type: exact_match value: 80.297 name: Exact Match - type: f1 value: 89.808 name: F1 - task: type: question-answering name: Question Answering dataset: name: squadshifts nyt type: squadshifts config: nyt split: test metrics: - type: exact_match value: 80.149 name: Exact Match - type: f1 value: 88.321 name: F1 - task: type: question-answering name: Question Answering dataset: name: squadshifts reddit type: squadshifts config: reddit split: test metrics: - type: exact_match value: 66.959 name: Exact Match - type: f1 value: 79.300 name: F1 --- # tinyroberta-squad2 This is the *distilled* version of the [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) model. This model has a comparable prediction quality and runs at twice the speed of the base model. 
## Overview **Language model:** tinyroberta-squad2 **Language:** English **Downstream-task:** Extractive QA **Training data:** SQuAD 2.0 **Eval data:** SQuAD 2.0 **Code:** See [an example QA pipeline on Haystack](https://haystack.deepset.ai/tutorials/first-qa-system) **Infrastructure**: 4x Tesla v100 ## Hyperparameters ``` batch_size = 96 n_epochs = 4 base_LM_model = "deepset/tinyroberta-squad2-step1" max_seq_len = 384 learning_rate = 3e-5 lr_schedule = LinearWarmup warmup_proportion = 0.2 doc_stride = 128 max_query_length = 64 distillation_loss_weight = 0.75 temperature = 1.5 teacher = "deepset/robert-large-squad2" ``` ## Distillation This model was distilled using the TinyBERT approach described in [this paper](https://arxiv.org/pdf/1909.10351.pdf) and implemented in [haystack](https://github.com/deepset-ai/haystack). Firstly, we have performed intermediate layer distillation with roberta-base as the teacher which resulted in [deepset/tinyroberta-6l-768d](https://huggingface.co/deepset/tinyroberta-6l-768d). Secondly, we have performed task-specific distillation with [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) as the teacher for further intermediate layer distillation on an augmented version of SQuADv2 and then with [deepset/roberta-large-squad2](https://huggingface.co/deepset/roberta-large-squad2) as the teacher for prediction layer distillation. ## Usage ### In Haystack Haystack is an NLP framework by deepset. You can use this model in a Haystack pipeline to do question answering at scale (over many documents). To load the model in [Haystack](https://github.com/deepset-ai/haystack/): ```python reader = FARMReader(model_name_or_path="deepset/tinyroberta-squad2") # or reader = TransformersReader(model_name_or_path="deepset/tinyroberta-squad2") ``` ### In Transformers ```python from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline model_name = "deepset/tinyroberta-squad2" # a) Get predictions nlp = pipeline('question-answering', model=model_name, tokenizer=model_name) QA_input = { 'question': 'Why is model conversion important?', 'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.' } res = nlp(QA_input) # b) Load model & tokenizer model = AutoModelForQuestionAnswering.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) ``` ## Performance Evaluated on the SQuAD 2.0 dev set with the [official eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/). 
``` "exact": 78.69114798281817, "f1": 81.9198998536977, "total": 11873, "HasAns_exact": 76.19770580296895, "HasAns_f1": 82.66446878592329, "HasAns_total": 5928, "NoAns_exact": 81.17746005046257, "NoAns_f1": 81.17746005046257, "NoAns_total": 5945 ``` ## Authors **Branden Chan:** branden.chan@deepset.ai **Timo Möller:** timo.moeller@deepset.ai **Malte Pietsch:** malte.pietsch@deepset.ai **Tanay Soni:** tanay.soni@deepset.ai **Michel Bartels:** michel.bartels@deepset.ai ## About us <div class="grid lg:grid-cols-2 gap-x-4 gap-y-3"> <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center"> <img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/deepset-logo-colored.png" class="w-40"/> </div> <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center"> <img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/haystack-logo-colored.png" class="w-40"/> </div> </div> [deepset](http://deepset.ai/) is the company behind the open-source NLP framework [Haystack](https://haystack.deepset.ai/) which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc. Some of our other work: - [roberta-base-squad2]([https://huggingface.co/deepset/roberta-base-squad2) - [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert) - [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad) ## Get in touch and join the Haystack community <p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://docs.haystack.deepset.ai">Documentation</a></strong>. We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community/join">Discord community open to everyone!</a></strong></p> [Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai) By the way: [we're hiring!](http://www.deepset.ai/jobs)
kwoncho/losscut_news_pre2017_2
kwoncho
"2024-06-05T04:25:16Z"
55,832
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-06-05T04:24:17Z"
Entry not found
Helsinki-NLP/opus-mt-sv-en
Helsinki-NLP
"2024-02-14T17:14:57Z"
55,467
10
transformers
[ "transformers", "pytorch", "tf", "rust", "marian", "text2text-generation", "translation", "sv", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
"2022-03-02T23:29:04Z"
--- pipeline_tag: translation license: apache-2.0 --- ### opus-mt-sv-en * source languages: sv * target languages: en * OPUS readme: [sv-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-02-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-en/opus-2020-02-26.zip) * test set translations: [opus-2020-02-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-en/opus-2020-02-26.test.txt) * test set scores: [opus-2020-02-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-en/opus-2020-02-26.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.sv.en | 64.5 | 0.763 |
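Not part of the original card: a minimal usage sketch, assuming the standard Hugging Face `transformers` translation pipeline (the example sentence and printed output are illustrative).

```python
# Minimal sketch: Swedish -> English translation with the transformers pipeline
# (assumption: sentencepiece is installed, which Marian models require)
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-sv-en")
result = translator("Jag tycker om att läsa böcker.")
print(result[0]["translation_text"])  # expected to be along the lines of "I like to read books."
```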
Ericwang/tiny-random-ast
Ericwang
"2022-12-17T04:34:07Z"
55,347
1
transformers
[ "transformers", "pytorch", "audio-spectrogram-transformer", "feature-extraction", "endpoints_compatible", "region:us" ]
feature-extraction
"2022-12-17T01:36:12Z"
One custom AST (Audio Spectrogram Transformer) model with random weights, intended for testing HF repos.
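Not part of the original description: a minimal loading sketch, assuming the standard `transformers` Auto classes and that the repo ships a preprocessor config; the dummy audio input is illustrative.

```python
# Minimal sketch: feature extraction with this tiny random AST checkpoint (for testing only)
import numpy as np
from transformers import AutoFeatureExtractor, AutoModel

repo = "Ericwang/tiny-random-ast"
feature_extractor = AutoFeatureExtractor.from_pretrained(repo)
model = AutoModel.from_pretrained(repo)

dummy_audio = np.zeros(16000, dtype=np.float32)  # one second of silence at 16 kHz
inputs = feature_extractor(dummy_audio, sampling_rate=16000, return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # values are meaningless since the weights are random
```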
aubmindlab/bert-base-arabert
aubmindlab
"2023-08-03T12:32:51Z"
55,336
24
transformers
[ "transformers", "pytorch", "tf", "jax", "safetensors", "bert", "fill-mask", "ar", "arxiv:2003.00104", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2022-03-02T23:29:05Z"
--- language: ar datasets: - wikipedia - Osian - 1.5B-Arabic-Corpus - oscar-arabic-unshuffled - Assafir(private) widget: - text: " عاصم +ة لبنان هي [MASK] ." --- # !!! A newer version of this model is available !!! [AraBERTv2](https://huggingface.co/aubmindlab/bert-base-arabertv2) # AraBERT v1 & v2 : Pre-training BERT for Arabic Language Understanding <img src="https://raw.githubusercontent.com/aub-mind/arabert/master/arabert_logo.png" width="100" align="left"/> **AraBERT** is an Arabic pretrained lanaguage model based on [Google's BERT architechture](https://github.com/google-research/bert). AraBERT uses the same BERT-Base config. More details are available in the [AraBERT Paper](https://arxiv.org/abs/2003.00104) and in the [AraBERT Meetup](https://github.com/WissamAntoun/pydata_khobar_meetup) There are two versions of the model, AraBERTv0.1 and AraBERTv1, with the difference being that AraBERTv1 uses pre-segmented text where prefixes and suffixes were splitted using the [Farasa Segmenter](http://alt.qcri.org/farasa/segmenter.html). We evalaute AraBERT models on different downstream tasks and compare them to [mBERT]((https://github.com/google-research/bert/blob/master/multilingual.md)), and other state of the art models (*To the extent of our knowledge*). The Tasks were Sentiment Analysis on 6 different datasets ([HARD](https://github.com/elnagara/HARD-Arabic-Dataset), [ASTD-Balanced](https://www.aclweb.org/anthology/D15-1299), [ArsenTD-Lev](https://staff.aub.edu.lb/~we07/Publications/ArSentD-LEV_Sentiment_Corpus.pdf), [LABR](https://github.com/mohamedadaly/LABR)), Named Entity Recognition with the [ANERcorp](http://curtis.ml.cmu.edu/w/courses/index.php/ANERcorp), and Arabic Question Answering on [Arabic-SQuAD and ARCD](https://github.com/husseinmozannar/SOQAL) # AraBERTv2 ## What's New! AraBERT now comes in 4 new variants to replace the old v1 versions: More Detail in the AraBERT folder and in the [README](https://github.com/aub-mind/arabert/blob/master/AraBERT/README.md) and in the [AraBERT Paper](https://arxiv.org/abs/2003.00104v2) Model | HuggingFace Model Name | Size (MB/Params)| Pre-Segmentation | DataSet (Sentences/Size/nWords) | ---|:---:|:---:|:---:|:---: AraBERTv0.2-base | [bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) | 543MB / 136M | No | 200M / 77GB / 8.6B | AraBERTv0.2-large| [bert-large-arabertv02](https://huggingface.co/aubmindlab/bert-large-arabertv02) | 1.38G 371M | No | 200M / 77GB / 8.6B | AraBERTv2-base| [bert-base-arabertv2](https://huggingface.co/aubmindlab/bert-base-arabertv2) | 543MB 136M | Yes | 200M / 77GB / 8.6B | AraBERTv2-large| [bert-large-arabertv2](https://huggingface.co/aubmindlab/bert-large-arabertv2) | 1.38G 371M | Yes | 200M / 77GB / 8.6B | AraBERTv0.1-base| [bert-base-arabertv01](https://huggingface.co/aubmindlab/bert-base-arabertv01) | 543MB 136M | No | 77M / 23GB / 2.7B | AraBERTv1-base| [bert-base-arabert](https://huggingface.co/aubmindlab/bert-base-arabert) | 543MB 136M | Yes | 77M / 23GB / 2.7B | All models are available in the `HuggingFace` model page under the [aubmindlab](https://huggingface.co/aubmindlab/) name. Checkpoints are available in PyTorch, TF2 and TF1 formats. ## Better Pre-Processing and New Vocab We identified an issue with AraBERTv1's wordpiece vocabulary. The issue came from punctuations and numbers that were still attached to words when learned the wordpiece vocab. We now insert a space between numbers and characters and around punctuation characters. 
The new vocabulary was learnt using the `BertWordpieceTokenizer` from the `tokenizers` library, and should now support the Fast tokenizer implementation from the `transformers` library. **P.S.**: All the old BERT codes should work with the new BERT, just change the model name and check the new preprocessing dunction **Please read the section on how to use the [preprocessing function](#Preprocessing)** ## Bigger Dataset and More Compute We used ~3.5 times more data, and trained for longer. For Dataset Sources see the [Dataset Section](#Dataset) Model | Hardware | num of examples with seq len (128 / 512) |128 (Batch Size/ Num of Steps) | 512 (Batch Size/ Num of Steps) | Total Steps | Total Time (in Days) | ---|:---:|:---:|:---:|:---:|:---:|:---: AraBERTv0.2-base | TPUv3-8 | 420M / 207M |2560 / 1M | 384/ 2M | 3M | - AraBERTv0.2-large | TPUv3-128 | 420M / 207M | 13440 / 250K | 2056 / 300K | 550K | - AraBERTv2-base | TPUv3-8 | 520M / 245M |13440 / 250K | 2056 / 300K | 550K | - AraBERTv2-large | TPUv3-128 | 520M / 245M | 13440 / 250K | 2056 / 300K | 550K | - AraBERT-base (v1/v0.1) | TPUv2-8 | - |512 / 900K | 128 / 300K| 1.2M | 4 days # Dataset The pretraining data used for the new AraBERT model is also used for Arabic **GPT2 and ELECTRA**. The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation) For the new dataset we added the unshuffled OSCAR corpus, after we thoroughly filter it, to the previous dataset used in AraBERTv1 but with out the websites that we previously crawled: - OSCAR unshuffled and filtered. - [Arabic Wikipedia dump](https://archive.org/details/arwiki-20190201) from 2020/09/01 - [The 1.5B words Arabic Corpus](https://www.semanticscholar.org/paper/1.5-billion-words-Arabic-Corpus-El-Khair/f3eeef4afb81223df96575adadf808fe7fe440b4) - [The OSIAN Corpus](https://www.aclweb.org/anthology/W19-4619) - Assafir news articles. Huge thank you for Assafir for giving us the data # Preprocessing It is recommended to apply our preprocessing function before training/testing on any dataset. **Install farasapy to segment text for AraBERT v1 & v2 `pip install farasapy`** ```python from arabert.preprocess import ArabertPreprocessor model_name="bert-base-arabert" arabert_prep = ArabertPreprocessor(model_name=model_name) text = "ولن نبالغ إذا قلنا إن هاتف أو كمبيوتر المكتب في زمننا هذا ضروري" arabert_prep.preprocess(text) >>>"و+ لن نبالغ إذا قل +نا إن هاتف أو كمبيوتر ال+ مكتب في زمن +نا هذا ضروري" ``` ## Accepted_models ``` bert-base-arabertv01 bert-base-arabert bert-base-arabertv02 bert-base-arabertv2 bert-large-arabertv02 bert-large-arabertv2 araelectra-base aragpt2-base aragpt2-medium aragpt2-large aragpt2-mega ``` # TensorFlow 1.x models The TF1.x model are available in the HuggingFace models repo. You can download them as follows: - via git-lfs: clone all the models in a repo ```bash curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash sudo apt-get install git-lfs git lfs install git clone https://huggingface.co/aubmindlab/MODEL_NAME tar -C ./MODEL_NAME -zxvf /content/MODEL_NAME/tf1_model.tar.gz ``` where `MODEL_NAME` is any model under the `aubmindlab` name - via `wget`: - Go to the tf1_model.tar.gz file on huggingface.co/models/aubmindlab/MODEL_NAME. 
  - copy the `oid sha256`
  - then run `wget https://cdn-lfs.huggingface.co/aubmindlab/aragpt2-base/INSERT_THE_SHA_HERE` (ex: for `aragpt2-base`: `wget https://cdn-lfs.huggingface.co/aubmindlab/aragpt2-base/3766fc03d7c2593ff2fb991d275e96b81b0ecb2098b71ff315611d052ce65248`)

# If you used this model please cite us as:
Google Scholar has our Bibtex wrong (missing name), use this instead
```
@inproceedings{antoun2020arabert,
  title={AraBERT: Transformer-based Model for Arabic Language Understanding},
  author={Antoun, Wissam and Baly, Fady and Hajj, Hazem},
  booktitle={LREC 2020 Workshop Language Resources and Evaluation Conference 11--16 May 2020},
  pages={9}
}
```

# Acknowledgments
Thanks to TensorFlow Research Cloud (TFRC) for free access to Cloud TPUs; we couldn't have done it without this program. Thanks also to the [AUB MIND Lab](https://sites.aub.edu.lb/mindlab/) members for the continuous support, and to [Yakshof](https://www.yakshof.com/#/) and Assafir for data and storage access. Another thanks to [Habib Rahal](https://www.behance.net/rahalhabib) for putting a face to AraBERT.

## Contacts
**Wissam Antoun**: [Linkedin](https://www.linkedin.com/in/wissam-antoun-622142b4/) | [Twitter](https://twitter.com/wissam_antoun) | [Github](https://github.com/WissamAntoun) | <wfa07@mail.aub.edu> | <wissam.antoun@gmail.com>

**Fady Baly**: [Linkedin](https://www.linkedin.com/in/fadybaly/) | [Twitter](https://twitter.com/fadybaly) | [Github](https://github.com/fadybaly) | <fgb06@mail.aub.edu> | <baly.fady@gmail.com>
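Not part of the original card: a minimal fill-mask sketch, assuming the standard `transformers` pipeline; the input below is the card's own widget text, which is already Farasa-segmented as this v1 model expects.

```python
# Minimal sketch: fill-mask with bert-base-arabert on pre-segmented text
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="aubmindlab/bert-base-arabert")
print(fill_mask("عاصم +ة لبنان هي [MASK] ."))  # top candidates for the Lebanese capital
```

For raw (unsegmented) text, run `ArabertPreprocessor` from the preprocessing snippet above first.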
microsoft/BiomedCLIP-PubMedBERT_256-vit_base_patch16_224
microsoft
"2024-01-14T18:43:38Z"
55,209
169
open_clip
[ "open_clip", "clip", "biology", "medical", "zero-shot-image-classification", "en", "arxiv:2303.00915", "license:mit", "region:us" ]
zero-shot-image-classification
"2023-04-05T19:57:59Z"
--- language: en tags: - clip - biology - medical license: mit library_name: open_clip widget: - src: https://huggingface.co/microsoft/BiomedCLIP-PubMedBERT_256-vit_base_patch16_224/resolve/main/example_data/biomed_image_classification_example_data/squamous_cell_carcinoma_histopathology.jpeg candidate_labels: adenocarcinoma histopathology, squamous cell carcinoma histopathology example_title: squamous cell carcinoma histopathology - src: >- https://huggingface.co/microsoft/BiomedCLIP-PubMedBERT_256-vit_base_patch16_224/resolve/main/example_data/biomed_image_classification_example_data/adenocarcinoma_histopathology.jpg candidate_labels: adenocarcinoma histopathology, squamous cell carcinoma histopathology example_title: adenocarcinoma histopathology - src: >- https://upload.wikimedia.org/wikipedia/commons/5/57/Left-sided_Pleural_Effusion.jpg candidate_labels: left-sided pleural effusion chest x-ray, right-sided pleural effusion chest x-ray, normal chest x-ray example_title: left-sided pleural effusion chest x-ray pipeline_tag: zero-shot-image-classification --- # BiomedCLIP-PubMedBERT_256-vit_base_patch16_224 [BiomedCLIP](https://aka.ms/biomedclip-paper) is a biomedical vision-language foundation model that is pretrained on [PMC-15M](https://aka.ms/biomedclip-paper), a dataset of 15 million figure-caption pairs extracted from biomedical research articles in PubMed Central, using contrastive learning. It uses PubMedBERT as the text encoder and Vision Transformer as the image encoder, with domain-specific adaptations. It can perform various vision-language processing (VLP) tasks such as cross-modal retrieval, image classification, and visual question answering. BiomedCLIP establishes new state of the art in a wide range of standard datasets, and substantially outperforms prior VLP approaches: ![](biomed-vlp-eval.svg) ## Citation ```bibtex @misc{https://doi.org/10.48550/arXiv.2303.00915, doi = {10.48550/ARXIV.2303.00915}, url = {https://arxiv.org/abs/2303.00915}, author = {Zhang, Sheng and Xu, Yanbo and Usuyama, Naoto and Bagga, Jaspreet and Tinn, Robert and Preston, Sam and Rao, Rajesh and Wei, Mu and Valluri, Naveen and Wong, Cliff and Lungren, Matthew and Naumann, Tristan and Poon, Hoifung}, title = {Large-Scale Domain-Specific Pretraining for Biomedical Vision-Language Processing}, publisher = {arXiv}, year = {2023}, } ``` ## Model Use ### How to use Please refer to this [example notebook](https://aka.ms/biomedclip-example-notebook). ### Intended Use This model is intended to be used solely for (I) future research on visual-language processing and (II) reproducibility of the experimental results reported in the reference paper. #### Primary Intended Use The primary intended use is to support AI researchers building on top of this work. BiomedCLIP and its associated models should be helpful for exploring various biomedical VLP research questions, especially in the radiology domain. #### Out-of-Scope Use **Any** deployed use case of the model --- commercial or otherwise --- is currently out of scope. Although we evaluated the models using a broad set of publicly-available research benchmarks, the models and evaluations are not intended for deployed use cases. Please refer to [the associated paper](https://aka.ms/biomedclip-paper) for more details. ## Data This model builds upon [PMC-15M dataset](https://aka.ms/biomedclip-paper), which is a large-scale parallel image-text dataset for biomedical vision-language processing. 
It contains 15 million figure-caption pairs extracted from biomedical research articles in PubMed Central. It covers a diverse range of biomedical image types, such as microscopy, radiography, histology, and more. ## Limitations This model was developed using English corpora, and thus can be considered English-only. ## Further information Please refer to the corresponding paper, ["Large-Scale Domain-Specific Pretraining for Biomedical Vision-Language Processing"](https://aka.ms/biomedclip-paper) for additional details on the model training and evaluation.
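In addition to the linked notebook, here is a minimal zero-shot sketch, assuming OpenCLIP's `hf-hub:` loading works for this repository and that `example.jpg` is a local image; the labels are taken from the widget above and the prompt template is illustrative.

```python
# Minimal sketch: zero-shot classification with BiomedCLIP via open_clip
import torch
from PIL import Image
from open_clip import create_model_from_pretrained, get_tokenizer

repo = "hf-hub:microsoft/BiomedCLIP-PubMedBERT_256-vit_base_patch16_224"
model, preprocess = create_model_from_pretrained(repo)
tokenizer = get_tokenizer(repo)
model.eval()

labels = ["adenocarcinoma histopathology", "squamous cell carcinoma histopathology"]
image = preprocess(Image.open("example.jpg")).unsqueeze(0)
texts = tokenizer([f"this is a photo of {label}" for label in labels], context_length=256)

with torch.no_grad():
    image_features, text_features, logit_scale = model(image, texts)
    probs = (logit_scale * image_features @ text_features.T).softmax(dim=-1)
print(dict(zip(labels, probs[0].tolist())))
```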
klue/bert-base
klue
"2023-06-12T12:30:04Z"
55,180
37
transformers
[ "transformers", "pytorch", "safetensors", "bert", "fill-mask", "korean", "klue", "ko", "arxiv:2105.09680", "arxiv:1910.09700", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2022-03-02T23:29:05Z"
--- language: ko license: cc-by-sa-4.0 tags: - korean - klue mask_token: "[MASK]" widget: - text: 대한민국의 수도는 [MASK] 입니다. --- # KLUE BERT base ## Table of Contents - [Model Details](#model-details) - [How to Get Started With the Model](#how-to-get-started-with-the-model) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-biases) - [Training](#training) - [Evaluation](#evaluation) - [Environmental Impact](#environmental-impact) - [Technical Specifications](#technical-specifications) - [Citation Information](#citation-information) - [Model Card Authors](#model-card-authors) ## Model Details **Model Description:** KLUE BERT base is a pre-trained BERT Model on Korean Language. The developers of KLUE BERT base developed the model in the context of the development of the [Korean Language Understanding Evaluation (KLUE) Benchmark](https://arxiv.org/pdf/2105.09680.pdf). - **Developed by:** See [GitHub Repo](https://github.com/facebookresearch/fairseq/tree/main/examples/roberta) for model developers - **Model Type:** Transformer-based language model - **Language(s):** Korean - **License:** cc-by-sa-4.0 - **Parent Model:** See the [BERT base uncased model](https://huggingface.co/bert-base-uncased) for more information about the BERT base model. - **Resources for more information:** - [Research Paper](https://arxiv.org/abs/2105.09680) - [GitHub Repo](https://github.com/KLUE-benchmark/KLUE) ## How to Get Started With the Model ```python from transformers import AutoModel, AutoTokenizer model = AutoModel.from_pretrained("klue/bert-base") tokenizer = AutoTokenizer.from_pretrained("klue/bert-base") ``` ## Uses #### Direct Use The model can be used for tasks including topic classification, semantic textual similarity, natural language inference, named entity recognition, and other tasks outlined in the [KLUE Benchmark](https://github.com/KLUE-benchmark/KLUE). #### Misuse and Out-of-scope Use The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model. ## Risks, Limitations and Biases Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). The model developers discuss several ethical considerations related to the model in the [paper](https://arxiv.org/pdf/2105.09680.pdf), including: - Bias issues with the publicly available data used in the pretraining corpora (and considerations related to filtering) - PII in the data used in the pretraining corpora (and efforts to pseudonymize the data) For ethical considerations related to the KLUE Benchmark, also see the [paper](https://arxiv.org/pdf/2105.09680.pdf). ## Training #### Training Data The authors use the following pretraining corpora for the model, described in the [associated paper](https://arxiv.org/pdf/2105.09680.pdf): > We gather the following five publicly available Korean corpora from diverse sources to cover a broad set of topics and many different styles. We combine these corpora to build the final pretraining corpus of size approximately 62GB. 
> > - **MODU:** [Modu Corpus](https://corpus.korean.go.kr) is a collection of Korean corpora distributed by [National Institute of Korean Languages](https://corpus.korean.go.kr/). It includes both formal articles (news and books) and colloquial text (dialogues). > - **CC-100-Kor:** [CC-100](https://data.statmt.org/cc-100/) is the large-scale multilingual web crawled corpora by using CC-Net ([Wenzek et al., 2020](https://www.aclweb.org/anthology/2020.lrec-1.494)). This is used for training XLM-R ([Conneau et al., 2020](https://aclanthology.org/2020.acl-main.747/)). We use the Korean portion from this corpora. > - **NAMUWIKI:** NAMUWIKI is a Korean web-based encyclopedia, similar to Wikipedia, but known to be less formal. Specifically, we download [the dump](http://dump.thewiki.kr) created on March 2nd, 2020. > - **NEWSCRAWL:** NEWSCRAWL consists of 12,800,000 news articles published from 2011 to 2020, collected from a news aggregation platform. > - **PETITION:** Petition is a collection of public petitions posted to the Blue House asking for administrative actions on social issues. We use the articles in the [Blue House National Petition](https://www1.president.go.kr/petitions) published from [August 2017 to March 2019](https://ko-nlp.github.io/Korpora/en-docs/corpuslist/korean_petitions.html). The authors also describe ethical considerations related to the pretraining corpora in the [associated paper](https://arxiv.org/pdf/2105.09680.pdf). #### Training Procedure ##### Preprocessing The authors describe their preprocessing procedure in the [associated paper](https://arxiv.org/pdf/2105.09680.pdf): > We filter noisy text and non-Korean text using the same methods from Section 2.3 (of the paper). Each document in the corpus is split into sentences using C++ implementation (v1.3.1.) of rule-based [Korean Sentence Splitter (KSS)](https://github.com/likejazz/korean-sentence-splitter). For CC-100-Kor and NEWSCRAWL, we keep sentences of length greater than equal to 200 characters, as a heuristics to keep well-formed sentences. We then remove sentences included in our benchmark task datasets, using BM25 as a sentence similarity metric ([reference](https://www.microsoft.com/en-us/research/publication/okapi-at-trec-3/)). ###### Tokenization The authors describe their tokenization procedure in the [associated paper](https://arxiv.org/pdf/2105.09680.pdf): > We design and use a new tokenization method, morpheme-based subword tokenization. When building a vocabulary, we pre-tokenize a raw text into morphemes using a morphological analyzer, and then we apply byte pair encoding (BPE) ([Senrich et al., 2016](https://aclanthology.org/P16-1162/)) to get the final vocabulary. For morpheme segmentation, we use [Mecab-ko](https://bitbucket.org/eunjeon/mecab-ko), MeCab ([Kudo, 2006](https://taku910.github.io/mecab/)) adapted for Korean, and for BPE segmentation, we use the wordpiece tokenizer from [Huggingface Tokenizers library](https://github.com/huggingface/tokenizers). We specify the vocabulary size to 32k. After building the vocabulary, we only use the BPE model during inference, which allows us to tokenize a word sequence by reflecting morphemes without a morphological analyzer. This improves both usability and speed. The training configurations are further described in the [paper](https://arxiv.org/pdf/2105.09680.pdf). ## Evaluation #### Testing Data, Factors and Metrics The model was evaluated on the [KLUE Benchmark](https://github.com/KLUE-benchmark/KLUE). 
The tasks and metrics from the KLUE Benchmark that were used to evaluate this model are described briefly below. For more information about the KLUE Benchmark, see the [data card](https://huggingface.co/datasets/klue), [Github Repository](https://github.com/KLUE-benchmark/KLUE), and [associated paper](https://arxiv.org/pdf/2105.09680.pdf). - **Task:** Topic Classification (TC) - Yonhap News Agency Topic Classification (YNAT), **Metrics:** Macro F1 score, defined as the mean of topic-wise F1 scores, giving the same importance to each topic. - **Task:** Semantic Textual Similarity (STS), **Metrics:** Pearsons' correlation coefficient (Pearson’ r) and F1 score - **Task:** Natural Language Inference (NLI), **Metrics:** Accuracy - **Task:** Named Entity Recognition (NER), **Metrics:** Entity-level macro F1 (Entity F1) and character-level macro F1 (Char F1) scores - **Task:** Relation Extraction (RE), **Metrics:** Micro F1 score on relation existing cases and area under the precision- recall curve (AUPRC) on all classes - **Task:** Dependency Parsing (DP), **Metrics:** Unlabeled attachment score (UAS) and labeled attachment score (LAS) - **Task:** Machine Reading Comprehension (MRC), **Metrics:** Exact match (EM) and character-level ROUGE-W (ROUGE), which can be viewed as longest common consecutive subsequence (LCCS)-based F1 score. - **Task:** Dialogue State Tracking (DST), **Metrics:** Joint goal accuracy (JGA) and slot micro F1 score (Slot F1) #### Results | Task | TC | STS | | NLI | NER | | RE | | DP | | MRC | | DST | | | :---: |:---: | :---: | :---: |:---:| :---: | :---: |:---:| :---:| :---: |:---: | :---: | :---:| :---: | :---: | | Metric | F1 | Pearsons' r| F1 | ACC | Entity F1 | Char F1 | F1 | AUPRC| UAS | LAS | EM | ROUGE| JGA |Slot F1 | | | 85.73| 90.85 | 82.84 |81.63| 83.97 | 91.39 |66.44| 66.17| 89.96 |88.05 | 62.32 | 68.51| 46.64 | 91.61 | ## Environmental Impact Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). We present the hardware type based on the [associated paper](https://arxiv.org/pdf/2105.09680.pdf). - **Hardware Type:** TPU v3-8 - **Hours used:** Unknown - **Cloud Provider:** Unknown - **Compute Region:** Unknown - **Carbon Emitted:** Unknown ## Technical Specifications See the [associated paper](https://arxiv.org/pdf/2105.09680.pdf) for details on the modeling architecture (BERT), objective, compute infrastructure, and training details. ## Citation Information ```bibtex @misc{park2021klue, title={KLUE: Korean Language Understanding Evaluation}, author={Sungjoon Park and Jihyung Moon and Sungdong Kim and Won Ik Cho and Jiyoon Han and Jangwon Park and Chisung Song and Junseong Kim and Yongsook Song and Taehwan Oh and Joohong Lee and Juhyun Oh and Sungwon Lyu and Younghoon Jeong and Inkwon Lee and Sangwoo Seo and Dongjun Lee and Hyunwoo Kim and Myeonghwa Lee and Seongbo Jang and Seungwon Do and Sunkyoung Kim and Kyungtae Lim and Jongwon Lee and Kyumin Park and Jamin Shin and Seonghyun Kim and Lucy Park and Alice Oh and Jungwoo Ha and Kyunghyun Cho}, year={2021}, eprint={2105.09680}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
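Not part of the original card: a minimal fill-mask sketch, assuming the standard `transformers` pipeline; the input is the card's widget text.

```python
# Minimal sketch: masked language modeling with KLUE BERT base
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="klue/bert-base")
print(fill_mask("대한민국의 수도는 [MASK] 입니다."))  # "서울" is expected among the top predictions
```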
backyardai/L3-Blackfall-Summanus-v0.1-15B-GGUF
backyardai
"2024-07-01T06:54:20Z"
55,179
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "not-for-all-audiences", "base_model:kromeurus/L3-Blackfall-Summanus-v0.1-15B", "license:cc-by-sa-4.0", "endpoints_compatible", "region:us" ]
null
"2024-07-01T05:57:40Z"
--- license: cc-by-sa-4.0 library_name: transformers tags: - mergekit - merge - not-for-all-audiences base_model: kromeurus/L3-Blackfall-Summanus-v0.1-15B model_name: L3-Blackfall-Summanus-v0.1-15B-GGUF quantized_by: brooketh parameter_count: 15009845248 --- <img src="BackyardAI_Banner.png" alt="Backyard.ai" style="height: 90px; min-width: 32px; display: block; margin: auto;"> **<p style="text-align: center;">The official library of GGUF format models for use in the local AI chat app, Backyard AI.</p>** <p style="text-align: center;"><a href="https://backyard.ai/">Download Backyard AI here to get started.</a></p> <p style="text-align: center;"><a href="https://www.reddit.com/r/LLM_Quants/">Request Additional models at r/LLM_Quants.</a></p> *** # L3 Blackfall Summanus V0.1 15B - **Creator:** [kromeurus](https://huggingface.co/kromeurus/) - **Original:** [L3 Blackfall Summanus V0.1 15B](https://huggingface.co/kromeurus/L3-Blackfall-Summanus-v0.1-15B) - **Date Created:** 2024-06-30 - **Trained Context:** 8192 tokens - **Description:** Aims at being a more human and verbose version of Aethora v2 by ZeusLabs that's also better at eRP. Merges Aethora with Sunfall Stheno and Jamet Blackroot, providing a good balance between the chaos and literacy. Can get very explicit once in eRP. *** ## What is a GGUF? GGUF is a large language model (LLM) format that can be split between CPU and GPU. GGUFs are compatible with applications based on llama.cpp, such as Backyard AI. Where other model formats require higher end GPUs with ample VRAM, GGUFs can be efficiently run on a wider variety of hardware. GGUF models are quantized to reduce resource usage, with a tradeoff of reduced coherence at lower quantizations. Quantization reduces the precision of the model weights by changing the number of bits used for each weight. *** <img src="BackyardAI_Logo.png" alt="Backyard.ai" style="height: 75px; min-width: 32px; display: block; horizontal align: left;"> ## Backyard AI - Free, local AI chat application. - One-click installation on Mac and PC. - Automatically use GPU for maximum speed. - Built-in model manager. - High-quality character hub. - Zero-config desktop-to-mobile tethering. Backyard AI makes it easy to start chatting with AI using your own characters or one of the many found in the built-in character hub. The model manager helps you find the latest and greatest models without worrying about whether it's the correct format. Backyard AI supports advanced features such as lorebooks, author's note, text formatting, custom context size, sampler settings, grammars, local TTS, cloud inference, and tethering, all implemented in a way that is straightforward and reliable. **Join us on [Discord](https://discord.gg/SyNN2vC9tQ)** ***
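Not part of the original card: a minimal sketch for loading one of these GGUF quants outside Backyard AI, assuming `llama-cpp-python`; the quant filename below is hypothetical, so substitute whichever file from this repo fits your hardware.

```python
# Minimal sketch: running a GGUF quant with llama-cpp-python
from llama_cpp import Llama

llm = Llama(
    model_path="L3-Blackfall-Summanus-v0.1-15B.Q4_K_M.gguf",  # hypothetical local filename
    n_ctx=8192,        # the card lists a trained context of 8192 tokens
    n_gpu_layers=-1,   # offload all layers to GPU if they fit, otherwise lower this
)
out = llm("Describe a rainy harbor town in two sentences.", max_tokens=128)
print(out["choices"][0]["text"])
```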
mradermacher/UNA-34Beagles-32K-bf16-v1-i1-GGUF
mradermacher
"2024-06-28T03:19:38Z"
55,157
1
transformers
[ "transformers", "gguf", "en", "dataset:allenai/ai2_arc", "dataset:unalignment/spicy-3.1", "dataset:codeparrot/apps", "dataset:facebook/belebele", "dataset:boolq", "dataset:jondurbin/cinematika-v0.1", "dataset:drop", "dataset:lmsys/lmsys-chat-1m", "dataset:TIGER-Lab/MathInstruct", "dataset:cais/mmlu", "dataset:Muennighoff/natural-instructions", "dataset:openbookqa", "dataset:piqa", "dataset:Vezora/Tested-22k-Python-Alpaca", "dataset:cakiki/rosetta-code", "dataset:Open-Orca/SlimOrca", "dataset:spider", "dataset:squad_v2", "dataset:migtissera/Synthia-v1.3", "dataset:datasets/winogrande", "dataset:nvidia/HelpSteer", "dataset:Intel/orca_dpo_pairs", "dataset:unalignment/toxic-dpo-v0.1", "dataset:jondurbin/truthy-dpo-v0.1", "dataset:allenai/ultrafeedback_binarized_cleaned", "dataset:Squish42/bluemoon-fandom-1-1-rp-cleaned", "dataset:LDJnr/Capybara", "dataset:JULIELab/EmoBank", "dataset:kingbri/PIPPA-shareGPT", "base_model:one-man-army/UNA-34Beagles-32K-bf16-v1", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-26T15:54:05Z"
--- base_model: one-man-army/UNA-34Beagles-32K-bf16-v1 datasets: - allenai/ai2_arc - unalignment/spicy-3.1 - codeparrot/apps - facebook/belebele - boolq - jondurbin/cinematika-v0.1 - drop - lmsys/lmsys-chat-1m - TIGER-Lab/MathInstruct - cais/mmlu - Muennighoff/natural-instructions - openbookqa - piqa - Vezora/Tested-22k-Python-Alpaca - cakiki/rosetta-code - Open-Orca/SlimOrca - spider - squad_v2 - migtissera/Synthia-v1.3 - datasets/winogrande - nvidia/HelpSteer - Intel/orca_dpo_pairs - unalignment/toxic-dpo-v0.1 - jondurbin/truthy-dpo-v0.1 - allenai/ultrafeedback_binarized_cleaned - Squish42/bluemoon-fandom-1-1-rp-cleaned - LDJnr/Capybara - JULIELab/EmoBank - kingbri/PIPPA-shareGPT language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/one-man-army/UNA-34Beagles-32K-bf16-v1 <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/UNA-34Beagles-32K-bf16-v1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/UNA-34Beagles-32K-bf16-v1-i1-GGUF/resolve/main/UNA-34Beagles-32K-bf16-v1.i1-IQ1_S.gguf) | i1-IQ1_S | 7.6 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/UNA-34Beagles-32K-bf16-v1-i1-GGUF/resolve/main/UNA-34Beagles-32K-bf16-v1.i1-IQ1_M.gguf) | i1-IQ1_M | 8.3 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/UNA-34Beagles-32K-bf16-v1-i1-GGUF/resolve/main/UNA-34Beagles-32K-bf16-v1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.4 | | | [GGUF](https://huggingface.co/mradermacher/UNA-34Beagles-32K-bf16-v1-i1-GGUF/resolve/main/UNA-34Beagles-32K-bf16-v1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.4 | | | [GGUF](https://huggingface.co/mradermacher/UNA-34Beagles-32K-bf16-v1-i1-GGUF/resolve/main/UNA-34Beagles-32K-bf16-v1.i1-IQ2_S.gguf) | i1-IQ2_S | 11.0 | | | [GGUF](https://huggingface.co/mradermacher/UNA-34Beagles-32K-bf16-v1-i1-GGUF/resolve/main/UNA-34Beagles-32K-bf16-v1.i1-IQ2_M.gguf) | i1-IQ2_M | 11.9 | | | [GGUF](https://huggingface.co/mradermacher/UNA-34Beagles-32K-bf16-v1-i1-GGUF/resolve/main/UNA-34Beagles-32K-bf16-v1.i1-Q2_K.gguf) | i1-Q2_K | 12.9 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/UNA-34Beagles-32K-bf16-v1-i1-GGUF/resolve/main/UNA-34Beagles-32K-bf16-v1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 13.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/UNA-34Beagles-32K-bf16-v1-i1-GGUF/resolve/main/UNA-34Beagles-32K-bf16-v1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 14.3 | | | [GGUF](https://huggingface.co/mradermacher/UNA-34Beagles-32K-bf16-v1-i1-GGUF/resolve/main/UNA-34Beagles-32K-bf16-v1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 15.1 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/UNA-34Beagles-32K-bf16-v1-i1-GGUF/resolve/main/UNA-34Beagles-32K-bf16-v1.i1-IQ3_S.gguf) | i1-IQ3_S | 15.1 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/UNA-34Beagles-32K-bf16-v1-i1-GGUF/resolve/main/UNA-34Beagles-32K-bf16-v1.i1-IQ3_M.gguf) | i1-IQ3_M | 15.7 | | | 
[GGUF](https://huggingface.co/mradermacher/UNA-34Beagles-32K-bf16-v1-i1-GGUF/resolve/main/UNA-34Beagles-32K-bf16-v1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 16.8 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/UNA-34Beagles-32K-bf16-v1-i1-GGUF/resolve/main/UNA-34Beagles-32K-bf16-v1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 18.2 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/UNA-34Beagles-32K-bf16-v1-i1-GGUF/resolve/main/UNA-34Beagles-32K-bf16-v1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 18.6 | | | [GGUF](https://huggingface.co/mradermacher/UNA-34Beagles-32K-bf16-v1-i1-GGUF/resolve/main/UNA-34Beagles-32K-bf16-v1.i1-Q4_0.gguf) | i1-Q4_0 | 19.6 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/UNA-34Beagles-32K-bf16-v1-i1-GGUF/resolve/main/UNA-34Beagles-32K-bf16-v1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 19.7 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/UNA-34Beagles-32K-bf16-v1-i1-GGUF/resolve/main/UNA-34Beagles-32K-bf16-v1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 20.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/UNA-34Beagles-32K-bf16-v1-i1-GGUF/resolve/main/UNA-34Beagles-32K-bf16-v1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 23.8 | | | [GGUF](https://huggingface.co/mradermacher/UNA-34Beagles-32K-bf16-v1-i1-GGUF/resolve/main/UNA-34Beagles-32K-bf16-v1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 24.4 | | | [GGUF](https://huggingface.co/mradermacher/UNA-34Beagles-32K-bf16-v1-i1-GGUF/resolve/main/UNA-34Beagles-32K-bf16-v1.i1-Q6_K.gguf) | i1-Q6_K | 28.3 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants. <!-- end -->
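Not part of the original card: a minimal sketch for fetching a single quant from this repo, assuming the `huggingface_hub` API; the filename matches the i1-Q4_K_M entry in the table above.

```python
# Minimal sketch: download one GGUF file and point a llama.cpp-based app at it
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/UNA-34Beagles-32K-bf16-v1-i1-GGUF",
    filename="UNA-34Beagles-32K-bf16-v1.i1-Q4_K_M.gguf",
)
print(path)  # local cache path to load in llama.cpp or any llama.cpp-based app
```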
timm/eva_large_patch14_336.in22k_ft_in1k
timm
"2024-02-10T23:27:56Z"
55,133
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "dataset:imagenet-22k", "arxiv:2211.07636", "license:mit", "region:us" ]
image-classification
"2022-12-22T07:09:58Z"
---
license: mit
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
- imagenet-22k
---
# Model card for eva_large_patch14_336.in22k_ft_in1k

An EVA image classification model. Pretrained on ImageNet-22k with masked image modeling (using EVA-CLIP as a MIM teacher) and fine-tuned on ImageNet-1k by paper authors.

NOTE: `timm` checkpoints are float32 for consistency with other models. Original checkpoints are float16 or bfloat16 in some cases, see originals if that's preferred.

## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 304.5
  - GMACs: 191.1
  - Activations (M): 270.2
  - Image size: 336 x 336
- **Papers:**
  - EVA: Exploring the Limits of Masked Visual Representation Learning at Scale: https://arxiv.org/abs/2211.07636
- **Pretrain Dataset:** ImageNet-22k
- **Dataset:** ImageNet-1k
- **Original:**
  - https://github.com/baaivision/EVA
  - https://huggingface.co/BAAI/EVA

## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import torch  # needed for torch.topk below
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('eva_large_patch14_336.in22k_ft_in1k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```

### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'eva_large_patch14_336.in22k_ft_in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 577, 1024) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```

## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
|model |top1 |top5 |param_count|img_size| |-----------------------------------------------|------|------|-----------|--------| |eva02_large_patch14_448.mim_m38m_ft_in22k_in1k |90.054|99.042|305.08 |448 | |eva02_large_patch14_448.mim_in22k_ft_in22k_in1k|89.946|99.01 |305.08 |448 | |eva_giant_patch14_560.m30m_ft_in22k_in1k |89.792|98.992|1014.45 |560 | |eva02_large_patch14_448.mim_in22k_ft_in1k |89.626|98.954|305.08 |448 | |eva02_large_patch14_448.mim_m38m_ft_in1k |89.57 |98.918|305.08 |448 | |eva_giant_patch14_336.m30m_ft_in22k_in1k |89.56 |98.956|1013.01 |336 | |eva_giant_patch14_336.clip_ft_in1k |89.466|98.82 |1013.01 |336 | |eva_large_patch14_336.in22k_ft_in22k_in1k |89.214|98.854|304.53 |336 | |eva_giant_patch14_224.clip_ft_in1k |88.882|98.678|1012.56 |224 | |eva02_base_patch14_448.mim_in22k_ft_in22k_in1k |88.692|98.722|87.12 |448 | |eva_large_patch14_336.in22k_ft_in1k |88.652|98.722|304.53 |336 | |eva_large_patch14_196.in22k_ft_in22k_in1k |88.592|98.656|304.14 |196 | |eva02_base_patch14_448.mim_in22k_ft_in1k |88.23 |98.564|87.12 |448 | |eva_large_patch14_196.in22k_ft_in1k |87.934|98.504|304.14 |196 | |eva02_small_patch14_336.mim_in22k_ft_in1k |85.74 |97.614|22.13 |336 | |eva02_tiny_patch14_336.mim_in22k_ft_in1k |80.658|95.524|5.76 |336 | ## Citation ```bibtex @article{EVA, title={EVA: Exploring the Limits of Masked Visual Representation Learning at Scale}, author={Fang, Yuxin and Wang, Wen and Xie, Binhui and Sun, Quan and Wu, Ledell and Wang, Xinggang and Huang, Tiejun and Wang, Xinlong and Cao, Yue}, journal={arXiv preprint arXiv:2211.07636}, year={2022} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
airesearch/wangchanberta-base-att-spm-uncased
airesearch
"2023-03-19T02:31:42Z"
55,086
31
transformers
[ "transformers", "pytorch", "safetensors", "camembert", "fill-mask", "th", "arxiv:1907.11692", "arxiv:1801.06146", "arxiv:1808.06226", "arxiv:2101.09635", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2022-03-02T23:29:05Z"
---
language: th
widget:
- text: "ผู้ใช้งานท่าอากาศยานนานาชาติ<mask>มีกว่าสามล้านคน<pad>"
---

# WangchanBERTa base model: `wangchanberta-base-att-spm-uncased`

<br>

Pretrained RoBERTa BASE model on assorted Thai texts (78.5 GB). The script and documentation can be found at [this repository](https://github.com/vistec-AI/thai2transformers).

<br>

## Model description

<br>

The architecture of the pretrained model is based on RoBERTa [[Liu et al., 2019]](https://arxiv.org/abs/1907.11692).

<br>

## Intended uses & limitations

<br>

You can use the pretrained model for masked language modeling (i.e. predicting a mask token in the input text). In addition, we also provide finetuned models for multiclass/multilabel text classification and token classification tasks.

<br>

**Multiclass text classification**

- `wisesight_sentiment` 4-class text classification task (`positive`, `neutral`, `negative`, and `question`) based on social media posts and tweets.
- `wongnai_reviews` Users' review rating classification task (scale ranging from 1 to 5)
- `generated_reviews_enth` : (`review_star` as label) Generated users' review rating classification task (scale ranging from 1 to 5).

**Multilabel text classification**

- `prachathai67k` Thai topic classification with 12 labels based on the news article corpus from prachathai.com. The details are described on this [page](https://huggingface.co/datasets/prachathai67k).

**Token classification**

- `thainer` Named-entity recognition tagging with 13 named-entities as described on this [page](https://huggingface.co/datasets/thainer).
- `lst20` : NER and POS tagging. Named-entity recognition tagging with 10 named-entities and Part-of-Speech tagging with 16 tags as described on this [page](https://huggingface.co/datasets/lst20).

<br>

## How to use

<br>

The getting-started notebook for the WangchanBERTa models can be found in this [Colab notebook](https://colab.research.google.com/drive/1Kbk6sBspZLwcnOE61adAQo30xxqOQ9ko)

<br>

## Training data

The `wangchanberta-base-att-spm-uncased` model was pretrained on an assorted Thai text dataset. The total size of the uncompressed text is 78.5GB.

### Preprocessing

Texts are preprocessed with the following rules:

- Replace HTML forms of characters with the actual characters, such as `&nbsp;` with a space and `<br />` with a line break [[Howard and Ruder, 2018]](https://arxiv.org/abs/1801.06146).
- Remove empty brackets ((), {}, and []) that sometimes come up as a result of text extraction, such as from Wikipedia.
- Replace line breaks with spaces.
- Replace multiple consecutive spaces with a single space.
- Remove more than 3 repetitive characters, such as ดีมากกก to ดีมาก [[Howard and Ruder, 2018]](https://arxiv.org/abs/1801.06146).
- Word-level tokenization using [[Phatthiyaphaibun et al., 2020]](https://zenodo.org/record/4319685#.YA4xEGQzaDU)'s `newmm` dictionary-based maximal matching tokenizer.
- Replace repetitive words; this is done post-tokenization, unlike [[Howard and Ruder, 2018]](https://arxiv.org/abs/1801.06146), since there is no delimitation by space in Thai as in English.
- Replace spaces with `<_>`. The SentencePiece tokenizer combines the spaces with other tokens. Since spaces serve as punctuation in Thai, such as sentence boundaries similar to periods in English, combining them with other tokens will omit an important feature for tasks such as word tokenization and sentence breaking. Therefore, we opt to explicitly mark spaces with `<_>`.
<br>

Regarding the vocabulary, we use SentencePiece [[Kudo, 2018]](https://arxiv.org/abs/1808.06226) to train a SentencePiece unigram model. The tokenizer has a vocabulary size of 25,000 subwords, trained on 15M sentences sampled from the training set. The length of each sequence is limited to 416 subword tokens.

Regarding the masking procedure, for each sequence we sample 15% of the tokens and replace them with a `<mask>` token. Out of that 15%, 80% are replaced with a `<mask>` token, 10% are left unchanged, and 10% are replaced with a random token.

<br>

**Train/Val/Test splits**

After preprocessing and deduplication, we have a training set of 381,034,638 unique, mostly Thai sentences with sequence lengths of 5 to 300 words (78.5GB). The training set has a total of 16,957,775,412 words as tokenized by dictionary-based maximal matching [[Phatthiyaphaibun et al., 2020]](https://zenodo.org/record/4319685#.YA4xEGQzaDU), 8,680,485,067 subwords as tokenized by the SentencePiece tokenizer, and 53,035,823,287 characters.

<br>

**Pretraining**

The model was trained on 8 V100 GPUs for 500,000 steps with a batch size of 4,096 (32 sequences per device with 16 accumulation steps) and a sequence length of 416 tokens. The optimizer we used is Adam with a learning rate of $3e-4$, $\beta_1 = 0.9$, $\beta_2 = 0.999$ and $\epsilon = 1e-6$. The learning rate is warmed up for the first 24,000 steps and linearly decayed to zero. The model checkpoint with the minimum validation loss is selected as the best model checkpoint.

As of Sun 24 Jan 2021, we release the model from the checkpoint at 360,000 steps, since the model pretraining has not yet been completed.

<br>

**BibTeX entry and citation info**

```
@misc{lowphansirikul2021wangchanberta,
    title={WangchanBERTa: Pretraining transformer-based Thai Language Models},
    author={Lalita Lowphansirikul and Charin Polpanumas and Nawat Jantrakulchai and Sarana Nutanong},
    year={2021},
    eprint={2101.09635},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```
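Not part of the original card: a minimal fill-mask sketch, assuming the standard `transformers` pipeline; the input is the card's widget text (note the `<mask>` and `<pad>` tokens).

```python
# Minimal sketch: masked language modeling with WangchanBERTa
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="airesearch/wangchanberta-base-att-spm-uncased")
print(fill_mask("ผู้ใช้งานท่าอากาศยานนานาชาติ<mask>มีกว่าสามล้านคน<pad>"))
```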
laion/CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft-soup
laion
"2024-01-16T22:19:59Z"
55,017
15
open_clip
[ "open_clip", "tensorboard", "safetensors", "zero-shot-image-classification", "clip", "arxiv:2201.03545", "arxiv:2210.08402", "arxiv:1910.04867", "license:mit", "region:us" ]
zero-shot-image-classification
"2023-02-11T01:35:52Z"
--- tags: - zero-shot-image-classification - clip license: mit library_name: open_clip pipeline_tag: zero-shot-image-classification --- # Model card for CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft-soup # Table of Contents 1. [Model Details](#model-details) 2. [Uses](#uses) 3. [Training Details](#training-details) 4. [Evaluation](#evaluation) 5. [Acknowledgements](#acknowledgements) 6. [Citation](#citation) # Model Details ## Model Description A series of CLIP [ConvNeXt-Large](https://arxiv.org/abs/2201.03545) (w/ extra text depth, vision MLP head) models trained on the LAION-2B (english) subset of [LAION-5B](https://arxiv.org/abs/2210.08402) using [OpenCLIP](https://github.com/mlfoundations/open_clip). The models utilize: * the [timm](https://github.com/rwightman/pytorch-image-models) ConvNeXt-Large model (`convnext_large`) as the image tower * a MLP (`fc - gelu - drop - fc`) head in vision tower instead of the single projection of other CLIP models * a text tower with same width but 4 layers more depth than ViT-L / RN50x16 models (depth 16, embed dim 768). This 320x320 resolution model is a soup (weight average) of 3 fine-tunes of [CLIP-convnext_large_d.laion2B-s26B-b102K-augreg](https://huggingface.co/laion/CLIP-convnext_large_d.laion2B-s26B-b102K-augreg) at a higher resolution. It is an average of 3 fine-tunes from the final checkpoint of the original 256x256 training run w/ an additional ~2-3B samples for each fine-tune and a lower learning rate. Each fine-tune was a different learning rate (1e-4, 6e-5, 5e-5), and diff # of samples (3.2B, 2B, 2.5B). At 320x320, the ConvNext-Large-D is significantly more efficient than the L/14 model at 336x336 that OpenAI fine-tuned. L/14-336 model is 2.5x more GMAC, 2.8x more activations, and 1.22x more parameters. | Model | Dataset | Resolution | AugReg | Top-1 ImageNet Zero-Shot (%) | | ----- | ------- | ---------- | ------------ | --------- | | [convnext_large_d.laion2b_s26b_b102k-augreg](https://huggingface.co/laion/CLIP-convnext_large_d.laion2B-s26B-b102K-augreg) | LAION-2B | 256x256 | RRC (0.33, 1.0), RE (0.35), SD (0.1), D(0.1) | 75.9 | | [convnext_large_d_320.laion2b_s29b_b131k-ft](https://huggingface.co/laion/CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft) | LAION-2B | 320x320 | RRC (0.5, 1.0), RE (0.4), SD (0.1), D(0.0) | 76.6 | | [convnext_large_d_320.laion2b_s29b_b131k-ft-soup](https://huggingface.co/laion/CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft-soup) | LAION-2B | 320x320 | RRC (0.5, 1.0), RE (0.4), SD (0.1), D(0.0) | 76.9 | RRC = Random Resize Crop (crop pcts), RE = Random Erasing (prob), SD = Stochastic Depth (prob) -- image tower only, D = Dropout (prob) -- image tower head only LAION-A = LAION Aesthetic, an ~900M sample subset of LAION-2B with pHash dedupe and asthetic score filtering. Model training done by Ross Wightman on the [stability.ai](https://stability.ai/) cluster. # Uses As per the original [OpenAI CLIP model card](https://github.com/openai/CLIP/blob/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1/model-card.md), this model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such model. The OpenAI CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis. 
Additionally, the LAION-5B blog (https://laion.ai/blog/laion-5b/) and upcoming paper include additional discussion as it relates specifically to the training dataset. ## Direct Use Zero-shot image classification, image and text retrieval, among others. ## Downstream Use Image classification and other image task fine-tuning, linear probe image classification, image generation guiding and conditioning, among others. ## Out-of-Scope Use As per the OpenAI models, **Any** deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task specific testing especially given the variability of CLIP’s performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case currently potentially harmful. Certain use cases which would fall under the domain of surveillance and facial recognition are always out-of-scope regardless of performance of the model. This is because the use of artificial intelligence for tasks such as these can be premature currently given the lack of testing norms and checks to ensure its fair use. Since the model has not been purposefully trained in or evaluated on any languages other than English, its use should be limited to English language use cases. Further the above notice, the LAION-5B dataset used in training of these models has additional considerations, see below. # Training Details ## Training Data This model was trained with LAION-2B -- A 2 billion sample English subset of LAION-5B (https://laion.ai/blog/laion-5b/). **IMPORTANT NOTE:** The motivation behind dataset creation is to democratize research and experimentation around large-scale multi-modal model training and handling of uncurated, large-scale datasets crawled from publically available internet. Our recommendation is therefore to use the dataset for research purposes. Be aware that this large-scale dataset is uncurated. Keep in mind that the uncurated nature of the dataset means that collected links may lead to strongly discomforting and disturbing content for a human viewer. Therefore, please use the demo links with caution and at your own risk. It is possible to extract a “safe” subset by filtering out samples based on the safety tags (using a customized trained NSFW classifier that we built). While this strongly reduces the chance for encountering potentially harmful content when viewing, we cannot entirely exclude the possibility for harmful content being still present in safe mode, so that the warning holds also there. We think that providing the dataset openly to broad research and other interested communities will allow for transparent investigation of benefits that come along with training large-scale models as well as pitfalls and dangers that may stay unreported or unnoticed when working with closed large datasets that remain restricted to a small community. Providing our dataset openly, we however do not recommend using it for creating ready-to-go industrial products, as the basic research about general properties and safety of such large-scale models, which we would like to encourage with this release, is still in progress. 
## Training Procedure All 320x320 model fine-tunes were trained with a global batch size of 131072 for 10-16 checkpoint intervals of 203.7M samples for a total of ~2-3B samples seen over fine-tune. For 320x320 models, a slurm script w/ srun below was used on 64 8-GPU (A100 40GB) nodes (Stability). ``` /opt/slurm/sbin/srun --cpu_bind=v --accel-bind=gn python -m training.main \ --save-frequency 1 \ --name "convnext_large_320" \ --pretrained ""/runs/convnext_large_256/epoch_128.pt" \ --resume 'latest' \ --train-data="pipe:aws s3 cp s3://mybucket/path/{laion{00000..xxxxx}.tar -" \ --train-num-samples 203666042 \ --dataset-type webdataset \ --precision amp_bfloat16 \ --beta2 0.98 \ --warmup 2000 \ --batch-size=256 \ --epochs=12 \ --dataset-resampled \ --aug-cfg use_timm=True scale='(0.5, 1.0)' re_prob=0.4 \ --clip-grad-norm 5.0 \ --lr 5e-5 \ --workers=6 \ --model "convnext_large_d_320" \ --seed 0 \ --ddp-static-graph \ --local-loss \ --gather-with-grad \ --grad-checkpointing ``` # Evaluation Evaluation done with code in the [LAION CLIP Benchmark suite](https://github.com/LAION-AI/CLIP_benchmark). ## Testing Data, Factors & Metrics ### Testing Data The testing is performed with VTAB+ (A combination of VTAB (https://arxiv.org/abs/1910.04867) w/ additional robustness datasets) for classification and COCO and Flickr for retrieval. ## Results The models achieve between 75.9 and 76.9 top-1 zero-shot accuracy on ImageNet-1k. Zero-shot curve of origina from-scratch 256x256 training: ![](convnext_large_zero_shot.png) An initial round of benchmarks have been performed on a wider range of datasets, to be viewable at https://github.com/LAION-AI/CLIP_benchmark/blob/main/benchmark/results.ipynb # Acknowledgements Acknowledging [stability.ai](https://stability.ai/) for compute used to train this model. 
# Citation **BibTeX:** LAION-5B ```bibtex @inproceedings{schuhmann2022laionb, title={{LAION}-5B: An open large-scale dataset for training next generation image-text models}, author={Christoph Schuhmann and Romain Beaumont and Richard Vencu and Cade W Gordon and Ross Wightman and Mehdi Cherti and Theo Coombes and Aarush Katta and Clayton Mullis and Mitchell Wortsman and Patrick Schramowski and Srivatsa R Kundurthy and Katherine Crowson and Ludwig Schmidt and Robert Kaczmarczyk and Jenia Jitsev}, booktitle={Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2022}, url={https://openreview.net/forum?id=M3Y74vmsMcY} } ``` OpenCLIP software ```bibtex @software{ilharco_gabriel_2021_5143773, author = {Ilharco, Gabriel and Wortsman, Mitchell and Wightman, Ross and Gordon, Cade and Carlini, Nicholas and Taori, Rohan and Dave, Achal and Shankar, Vaishaal and Namkoong, Hongseok and Miller, John and Hajishirzi, Hannaneh and Farhadi, Ali and Schmidt, Ludwig}, title = {OpenCLIP}, month = jul, year = 2021, note = {If you use this software, please cite it as below.}, publisher = {Zenodo}, version = {0.1}, doi = {10.5281/zenodo.5143773}, url = {https://doi.org/10.5281/zenodo.5143773} } ``` ``` @InProceedings{pmlr-v162-wortsman22a, title = {Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time}, author = {Wortsman, Mitchell and Ilharco, Gabriel and Gadre, Samir Ya and Roelofs, Rebecca and Gontijo-Lopes, Raphael and Morcos, Ari S and Namkoong, Hongseok and Farhadi, Ali and Carmon, Yair and Kornblith, Simon and Schmidt, Ludwig}, booktitle = {Proceedings of the 39th International Conference on Machine Learning}, pages = {23965--23998}, year = {2022}, editor = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan}, volume = {162}, series = {Proceedings of Machine Learning Research}, month = {17--23 Jul}, publisher = {PMLR}, pdf = {https://proceedings.mlr.press/v162/wortsman22a/wortsman22a.pdf}, url = {https://proceedings.mlr.press/v162/wortsman22a.html} } ``` OpenAI CLIP paper ```bibtex @inproceedings{Radford2021LearningTV, title={Learning Transferable Visual Models From Natural Language Supervision}, author={Alec Radford and Jong Wook Kim and Chris Hallacy and A. Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever}, booktitle={ICML}, year={2021} } ``` ```bibtex @Article{liu2022convnet, author = {Zhuang Liu and Hanzi Mao and Chao-Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie}, title = {A ConvNet for the 2020s}, journal = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, year = {2022}, } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/rwightman/pytorch-image-models}} } ```
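Not part of the original card: a minimal zero-shot sketch, assuming OpenCLIP's `hf-hub:` loading and a local `cat.jpg`; the class prompts are illustrative.

```python
# Minimal sketch: zero-shot image classification with this ConvNeXt-Large CLIP
import torch
from PIL import Image
import open_clip

repo = "hf-hub:laion/CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft-soup"
model, _, preprocess = open_clip.create_model_and_transforms(repo)
tokenizer = open_clip.get_tokenizer(repo)
model.eval()

image = preprocess(Image.open("cat.jpg")).unsqueeze(0)
text = tokenizer(["a photo of a cat", "a photo of a dog"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)
print(probs)
```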
skt/kobert-base-v1
skt
"2021-07-01T07:16:05Z"
54,873
23
transformers
[ "transformers", "pytorch", "bert", "feature-extraction", "endpoints_compatible", "region:us" ]
feature-extraction
"2022-03-02T23:29:05Z"
Please refer to the official KoBERT repository: https://github.com/SKTBrain/KoBERT
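A minimal feature-extraction sketch is shown below. It assumes the checkpoint loads with the standard `transformers` Auto classes; the linked repository documents the officially recommended tokenizer setup, so treat this as a starting point only.

```python
# Minimal feature-extraction sketch (assumption: the standard Auto classes work
# for this checkpoint; see the linked KoBERT repository for the official setup).
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("skt/kobert-base-v1")
model = AutoModel.from_pretrained("skt/kobert-base-v1")

inputs = tokenizer("한국어 모델을 공유합니다.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Use the [CLS] token embedding as a sentence-level feature vector.
sentence_embedding = outputs.last_hidden_state[:, 0]
print(sentence_embedding.shape)
```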
Nondzu/zephyr-speakleash-010-pl-3072-32-16-0.01
Nondzu
"2024-03-03T11:50:28Z"
54,871
0
transformers
[ "transformers", "pytorch", "safetensors", "mistral", "text-generation", "conversational", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-03-03T10:10:17Z"
--- license: mit --- [speakleash.org](https://speakleash.org) ## Prompt template: ChatML ``` <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ```
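Below is a minimal generation sketch that fills in this template; it is not from the original card, and the sampling settings are illustrative.

```python
# Minimal sketch: build a ChatML prompt (per the template above) and generate.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "Nondzu/zephyr-speakleash-010-pl-3072-32-16-0.01"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

system_message = "Jesteś pomocnym asystentem."  # "You are a helpful assistant."
prompt = "Napisz krótki wiersz o wiośnie."      # "Write a short poem about spring."

chatml_prompt = (
    f"<|im_start|>system\n{system_message}<|im_end|>\n"
    f"<|im_start|>user\n{prompt}<|im_end|>\n"
    f"<|im_start|>assistant\n"
)

inputs = tokenizer(chatml_prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```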
MoritzLaurer/deberta-v3-base-zeroshot-v1.1-all-33
MoritzLaurer
"2024-06-03T10:27:59Z"
54,856
19
transformers
[ "transformers", "pytorch", "onnx", "safetensors", "deberta-v2", "text-classification", "zero-shot-classification", "en", "arxiv:2312.17543", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
zero-shot-classification
"2023-11-23T22:22:02Z"
--- language: - en tags: - text-classification - zero-shot-classification pipeline_tag: zero-shot-classification library_name: transformers license: mit --- # Model description: deberta-v3-base-zeroshot-v1.1-all-33 The model is designed for zero-shot classification with the Hugging Face pipeline. The model can do one universal classification task: determine whether a hypothesis is "true" or "not true" given a text (`entailment` vs. `not_entailment`). This task format is based on the Natural Language Inference task (NLI). The task is so universal that any classification task can be reformulated into this task. A detailed description of how the model was trained and how it can be used is available in this [paper](https://arxiv.org/pdf/2312.17543.pdf). ## Training data The model was trained on a mixture of __33 datasets and 387 classes__ that have been reformatted into this universal format. 1. Five NLI datasets with ~885k texts: "mnli", "anli", "fever", "wanli", "ling" 2. 28 classification tasks reformatted into the universal NLI format. ~51k cleaned texts were used to avoid overfitting: 'amazonpolarity', 'imdb', 'appreviews', 'yelpreviews', 'rottentomatoes', 'emotiondair', 'emocontext', 'empathetic', 'financialphrasebank', 'banking77', 'massive', 'wikitoxic_toxicaggregated', 'wikitoxic_obscene', 'wikitoxic_threat', 'wikitoxic_insult', 'wikitoxic_identityhate', 'hateoffensive', 'hatexplain', 'biasframes_offensive', 'biasframes_sex', 'biasframes_intent', 'agnews', 'yahootopics', 'trueteacher', 'spam', 'wellformedquery', 'manifesto', 'capsotu'. See details on each dataset here: https://github.com/MoritzLaurer/zeroshot-classifier/blob/main/datasets_overview.csv Note that compared to other NLI models, this model predicts two classes (`entailment` vs. `not_entailment`) as opposed to three classes (entailment/neutral/contradiction) The model was only trained on English data. For __multilingual use-cases__, I recommend machine translating texts to English with libraries like [EasyNMT](https://github.com/UKPLab/EasyNMT). English-only models tend to perform better than multilingual models and validation with English data can be easier if you don't speak all languages in your corpus. ### How to use the model #### Simple zero-shot classification pipeline ```python #!pip install transformers[sentencepiece] from transformers import pipeline text = "Angela Merkel is a politician in Germany and leader of the CDU" hypothesis_template = "This example is about {}" classes_verbalized = ["politics", "economy", "entertainment", "environment"] zeroshot_classifier = pipeline("zero-shot-classification", model="MoritzLaurer/deberta-v3-base-zeroshot-v1.1-all-33") output = zeroshot_classifier(text, classes_verbalized, hypothesis_template=hypothesis_template, multi_label=False) print(output) ``` ### Details on data and training The code for preparing the data and training & evaluating the model is fully open-source here: https://github.com/MoritzLaurer/zeroshot-classifier/tree/main Hyperparameters and other details are available in this Weights & Biases repo: https://wandb.ai/moritzlaurer/deberta-v3-base-zeroshot-v1-1-all-33/table?workspace=user- ## Metrics Balanced accuracy is reported for all datasets. `deberta-v3-base-zeroshot-v1.1-all-33` was trained on all datasets, with only maximum 500 texts per class to avoid overfitting. The metrics on these datasets are therefore not strictly zeroshot, as the model has seen some data for each task during training. 
`deberta-v3-base-zeroshot-v1.1-heldout` indicates zeroshot performance on the respective dataset. To calculate these zeroshot metrics, the pipeline was run 28 times, each time with one dataset held out from training to simulate a zeroshot setup. ![figure_base_v1.1](https://raw.githubusercontent.com/MoritzLaurer/zeroshot-classifier/main/results/fig_base_v1.1.png) | | deberta-v3-base-mnli-fever-anli-ling-wanli-binary | deberta-v3-base-zeroshot-v1.1-heldout | deberta-v3-base-zeroshot-v1.1-all-33 | |:---------------------------|---------------------------:|----------------------------------------:|---------------------------------------:| | datasets mean (w/o nli) | 62 | 70.7 | 84 | | amazonpolarity (2) | 91.7 | 95.7 | 96 | | imdb (2) | 87.3 | 93.6 | 94.5 | | appreviews (2) | 91.3 | 92.2 | 94.4 | | yelpreviews (2) | 95.1 | 97.4 | 98.3 | | rottentomatoes (2) | 83 | 88.7 | 90.8 | | emotiondair (6) | 46.5 | 42.6 | 74.5 | | emocontext (4) | 58.5 | 57.4 | 81.2 | | empathetic (32) | 31.3 | 37.3 | 52.7 | | financialphrasebank (3) | 78.3 | 68.9 | 91.2 | | banking77 (72) | 18.9 | 46 | 73.7 | | massive (59) | 44 | 56.6 | 78.9 | | wikitoxic_toxicaggreg (2) | 73.7 | 82.5 | 90.5 | | wikitoxic_obscene (2) | 77.3 | 91.6 | 92.6 | | wikitoxic_threat (2) | 83.5 | 95.2 | 96.7 | | wikitoxic_insult (2) | 79.6 | 91 | 91.6 | | wikitoxic_identityhate (2) | 83.9 | 88 | 94.4 | | hateoffensive (3) | 55.2 | 66.1 | 86 | | hatexplain (3) | 44.1 | 57.6 | 76.9 | | biasframes_offensive (2) | 56.8 | 85.4 | 87 | | biasframes_sex (2) | 85.4 | 87 | 91.8 | | biasframes_intent (2) | 56.3 | 85.2 | 87.8 | | agnews (4) | 77.3 | 80 | 90.5 | | yahootopics (10) | 53.6 | 57.7 | 72.8 | | trueteacher (2) | 51.4 | 49.5 | 82.4 | | spam (2) | 51.8 | 50 | 97.2 | | wellformedquery (2) | 49.9 | 52.5 | 77.2 | | manifesto (56) | 5.8 | 18.9 | 39.1 | | capsotu (21) | 25.2 | 64 | 72.5 | | mnli_m (2) | 92.4 | nan | 92.7 | | mnli_mm (2) | 92.4 | nan | 92.5 | | fevernli (2) | 89 | nan | 89.1 | | anli_r1 (2) | 79.4 | nan | 80 | | anli_r2 (2) | 68.4 | nan | 68.4 | | anli_r3 (2) | 66.2 | nan | 68 | | wanli (2) | 81.6 | nan | 81.8 | | lingnli (2) | 88.4 | nan | 88.4 | ## Limitations and bias The model can only do text classification tasks. Please consult the original DeBERTa paper and the papers for the different datasets for potential biases. ## License The base model (DeBERTa-v3) is published under the MIT license. The datasets the model was fine-tuned on are published under a diverse set of licenses. The following table provides an overview of the non-NLI datasets used for fine-tuning, information on licenses, the underlying papers etc.: https://github.com/MoritzLaurer/zeroshot-classifier/blob/main/datasets_overview.csv ## Citation If you use this model academically, please cite: ``` @misc{laurer_building_2023, title = {Building {Efficient} {Universal} {Classifiers} with {Natural} {Language} {Inference}}, url = {http://arxiv.org/abs/2312.17543}, doi = {10.48550/arXiv.2312.17543}, abstract = {Generative Large Language Models (LLMs) have become the mainstream choice for fewshot and zeroshot learning thanks to the universality of text generation. Many users, however, do not need the broad capabilities of generative LLMs when they only want to automate a classification task. Smaller BERT-like models can also learn universal tasks, which allow them to do any text classification task without requiring fine-tuning (zeroshot classification) or to learn new tasks with only a few examples (fewshot), while being significantly more efficient than generative LLMs. 
This paper (1) explains how Natural Language Inference (NLI) can be used as a universal classification task that follows similar principles as instruction fine-tuning of generative LLMs, (2) provides a step-by-step guide with reusable Jupyter notebooks for building a universal classifier, and (3) shares the resulting universal classifier that is trained on 33 datasets with 389 diverse classes. Parts of the code we share has been used to train our older zeroshot classifiers that have been downloaded more than 55 million times via the Hugging Face Hub as of December 2023. Our new classifier improves zeroshot performance by 9.4\%.}, urldate = {2024-01-05}, publisher = {arXiv}, author = {Laurer, Moritz and van Atteveldt, Wouter and Casas, Andreu and Welbers, Kasper}, month = dec, year = {2023}, note = {arXiv:2312.17543 [cs]}, keywords = {Computer Science - Artificial Intelligence, Computer Science - Computation and Language}, } ``` ### Ideas for cooperation or questions? If you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or [LinkedIn](https://www.linkedin.com/in/moritz-laurer/) ### Debugging and issues Note that DeBERTa-v3 was released on 06.12.21 and older versions of HF Transformers can have issues running the model (e.g. resulting in an issue with the tokenizer). Using Transformers>=4.13 might solve some issues. Also make sure to install sentencepiece to avoid tokenizer errors. Run: `pip install transformers[sentencepiece]` or `pip install sentencepiece` ### Hypotheses used for classification The hypotheses in the tables below were used to fine-tune the model. Inspecting them can help users get a feeling for which type of hypotheses and tasks the model was trained on. You can formulate your own hypotheses by changing the `hypothesis_template` of the zeroshot pipeline. For example: ```python from transformers import pipeline text = "Angela Merkel is a politician in Germany and leader of the CDU" hypothesis_template = "Merkel is the leader of the party: {}" classes_verbalized = ["CDU", "SPD", "Greens"] zeroshot_classifier = pipeline("zero-shot-classification", model="MoritzLaurer/deberta-v3-base-zeroshot-v1.1-all-33") output = zeroshot_classifier(text, classes_verbalized, hypothesis_template=hypothesis_template, multi_label=False) print(output) ``` Note that a few rows in the `massive` and `banking77` datasets contain `nan` because some classes were so ambiguous/unclear that I excluded them from the data. #### wellformedquery | label | hypothesis | |:----------------|:-----------------------------------------------| | not_well_formed | This example is not a well formed Google query | | well_formed | This example is a well formed Google query. | #### biasframes_sex | label | hypothesis | |:--------|:-----------------------------------------------------------| | not_sex | This example does not contain allusions to sexual content. | | sex | This example contains allusions to sexual content. | #### biasframes_intent | label | hypothesis | |:-----------|:-----------------------------------------------------------------| | intent | The intent of this example is to be offensive/disrespectful. | | not_intent | The intent of this example is not to be offensive/disrespectful. | #### biasframes_offensive | label | hypothesis | |:--------------|:-------------------------------------------------------------------------| | not_offensive | This example could not be considered offensive, disrespectful, or toxic. 
| | offensive | This example could be considered offensive, disrespectful, or toxic. | #### financialphrasebank | label | hypothesis | |:---------|:--------------------------------------------------------------------------| | negative | The sentiment in this example is negative from an investor's perspective. | | neutral | The sentiment in this example is neutral from an investor's perspective. | | positive | The sentiment in this example is positive from an investor's perspective. | #### rottentomatoes | label | hypothesis | |:---------|:-----------------------------------------------------------------------| | negative | The sentiment in this example rotten tomatoes movie review is negative | | positive | The sentiment in this example rotten tomatoes movie review is positive | #### amazonpolarity | label | hypothesis | |:---------|:----------------------------------------------------------------| | negative | The sentiment in this example amazon product review is negative | | positive | The sentiment in this example amazon product review is positive | #### imdb | label | hypothesis | |:---------|:------------------------------------------------------------| | negative | The sentiment in this example imdb movie review is negative | | positive | The sentiment in this example imdb movie review is positive | #### appreviews | label | hypothesis | |:---------|:------------------------------------------------------| | negative | The sentiment in this example app review is negative. | | positive | The sentiment in this example app review is positive. | #### yelpreviews | label | hypothesis | |:---------|:-------------------------------------------------------| | negative | The sentiment in this example yelp review is negative. | | positive | The sentiment in this example yelp review is positive. | #### wikitoxic_toxicaggregated | label | hypothesis | |:--------------------|:----------------------------------------------------------------| | not_toxicaggregated | This example wikipedia comment does not contain toxic language. | | toxicaggregated | This example wikipedia comment contains toxic language. | #### wikitoxic_obscene | label | hypothesis | |:------------|:------------------------------------------------------------------| | not_obscene | This example wikipedia comment does not contain obscene language. | | obscene | This example wikipedia comment contains obscene language. | #### wikitoxic_threat | label | hypothesis | |:-----------|:----------------------------------------------------------| | not_threat | This example wikipedia comment does not contain a threat. | | threat | This example wikipedia comment contains a threat. | #### wikitoxic_insult | label | hypothesis | |:-----------|:-----------------------------------------------------------| | insult | This example wikipedia comment contains an insult. | | not_insult | This example wikipedia comment does not contain an insult. | #### wikitoxic_identityhate | label | hypothesis | |:-----------------|:---------------------------------------------------------------| | identityhate | This example wikipedia comment contains identity hate. | | not_identityhate | This example wikipedia comment does not contain identity hate. | #### hateoffensive | label | hypothesis | |:------------|:------------------------------------------------------------------------| | hate_speech | This example tweet contains hate speech. | | neither | This example tweet contains neither offensive language nor hate speech. 
| | offensive | This example tweet contains offensive language without hate speech. | #### hatexplain | label | hypothesis | |:------------|:-------------------------------------------------------------------------------------------| | hate_speech | This example text from twitter or gab contains hate speech. | | neither | This example text from twitter or gab contains neither offensive language nor hate speech. | | offensive | This example text from twitter or gab contains offensive language without hate speech. | #### spam | label | hypothesis | |:---------|:------------------------------| | not_spam | This example sms is not spam. | | spam | This example sms is spam. | #### emotiondair | label | hypothesis | |:---------|:---------------------------------------------------| | anger | This example tweet expresses the emotion: anger | | fear | This example tweet expresses the emotion: fear | | joy | This example tweet expresses the emotion: joy | | love | This example tweet expresses the emotion: love | | sadness | This example tweet expresses the emotion: sadness | | surprise | This example tweet expresses the emotion: surprise | #### emocontext | label | hypothesis | |:--------|:--------------------------------------------------------------------------------------| | angry | This example tweet expresses the emotion: anger | | happy | This example tweet expresses the emotion: happiness | | others | This example tweet does not express any of the emotions: anger, sadness, or happiness | | sad | This example tweet expresses the emotion: sadness | #### empathetic | label | hypothesis | |:-------------|:-----------------------------------------------------------| | afraid | The main emotion of this example dialogue is: afraid | | angry | The main emotion of this example dialogue is: angry | | annoyed | The main emotion of this example dialogue is: annoyed | | anticipating | The main emotion of this example dialogue is: anticipating | | anxious | The main emotion of this example dialogue is: anxious | | apprehensive | The main emotion of this example dialogue is: apprehensive | | ashamed | The main emotion of this example dialogue is: ashamed | | caring | The main emotion of this example dialogue is: caring | | confident | The main emotion of this example dialogue is: confident | | content | The main emotion of this example dialogue is: content | | devastated | The main emotion of this example dialogue is: devastated | | disappointed | The main emotion of this example dialogue is: disappointed | | disgusted | The main emotion of this example dialogue is: disgusted | | embarrassed | The main emotion of this example dialogue is: embarrassed | | excited | The main emotion of this example dialogue is: excited | | faithful | The main emotion of this example dialogue is: faithful | | furious | The main emotion of this example dialogue is: furious | | grateful | The main emotion of this example dialogue is: grateful | | guilty | The main emotion of this example dialogue is: guilty | | hopeful | The main emotion of this example dialogue is: hopeful | | impressed | The main emotion of this example dialogue is: impressed | | jealous | The main emotion of this example dialogue is: jealous | | joyful | The main emotion of this example dialogue is: joyful | | lonely | The main emotion of this example dialogue is: lonely | | nostalgic | The main emotion of this example dialogue is: nostalgic | | prepared | The main emotion of this example dialogue is: prepared | | proud | The main emotion of this example 
dialogue is: proud | | sad | The main emotion of this example dialogue is: sad | | sentimental | The main emotion of this example dialogue is: sentimental | | surprised | The main emotion of this example dialogue is: surprised | | terrified | The main emotion of this example dialogue is: terrified | | trusting | The main emotion of this example dialogue is: trusting | #### agnews | label | hypothesis | |:---------|:-------------------------------------------------------| | Business | This example news text is about business news | | Sci/Tech | This example news text is about science and technology | | Sports | This example news text is about sports | | World | This example news text is about world news | #### yahootopics | label | hypothesis | |:-----------------------|:---------------------------------------------------------------------------------------------------| | Business & Finance | This example question from the Yahoo Q&A forum is categorized in the topic: Business & Finance | | Computers & Internet | This example question from the Yahoo Q&A forum is categorized in the topic: Computers & Internet | | Education & Reference | This example question from the Yahoo Q&A forum is categorized in the topic: Education & Reference | | Entertainment & Music | This example question from the Yahoo Q&A forum is categorized in the topic: Entertainment & Music | | Family & Relationships | This example question from the Yahoo Q&A forum is categorized in the topic: Family & Relationships | | Health | This example question from the Yahoo Q&A forum is categorized in the topic: Health | | Politics & Government | This example question from the Yahoo Q&A forum is categorized in the topic: Politics & Government | | Science & Mathematics | This example question from the Yahoo Q&A forum is categorized in the topic: Science & Mathematics | | Society & Culture | This example question from the Yahoo Q&A forum is categorized in the topic: Society & Culture | | Sports | This example question from the Yahoo Q&A forum is categorized in the topic: Sports | #### massive | label | hypothesis | |:-------------------------|:------------------------------------------------------------------------------------------| | alarm_query | The example utterance is a query about alarms. | | alarm_remove | The intent of this example utterance is to remove an alarm. | | alarm_set | The intent of the example utterance is to set an alarm. | | audio_volume_down | The intent of the example utterance is to lower the volume. | | audio_volume_mute | The intent of this example utterance is to mute the volume. | | audio_volume_other | The example utterance is related to audio volume. | | audio_volume_up | The intent of this example utterance is turning the audio volume up. | | calendar_query | The example utterance is a query about a calendar. | | calendar_remove | The intent of the example utterance is to remove something from a calendar. | | calendar_set | The intent of this example utterance is to set something in a calendar. | | cooking_query | The example utterance is a query about cooking. | | cooking_recipe | This example utterance is about cooking recipies. | | datetime_convert | The example utterance is related to date time changes or conversion. | | datetime_query | The intent of this example utterance is a datetime query. | | email_addcontact | The intent of this example utterance is adding an email address to contacts. | | email_query | The example utterance is a query about emails. 
| | email_querycontact | The intent of this example utterance is to query contact details. | | email_sendemail | The intent of the example utterance is to send an email. | | general_greet | This example utterance is a general greet. | | general_joke | The intent of the example utterance is to hear a joke. | | general_quirky | nan | | iot_cleaning | The intent of the example utterance is for an IoT device to start cleaning. | | iot_coffee | The intent of this example utterance is for an IoT device to make coffee. | | iot_hue_lightchange | The intent of this example utterance is changing the light. | | iot_hue_lightdim | The intent of the example utterance is to dim the lights. | | iot_hue_lightoff | The example utterance is related to turning the lights off. | | iot_hue_lighton | The example utterance is related to turning the lights on. | | iot_hue_lightup | The intent of this example utterance is to brighten lights. | | iot_wemo_off | The intent of this example utterance is turning an IoT device off. | | iot_wemo_on | The intent of the example utterance is to turn an IoT device on. | | lists_createoradd | The example utterance is related to creating or adding to lists. | | lists_query | The example utterance is a query about a list. | | lists_remove | The intent of this example utterance is to remove a list or remove something from a list. | | music_dislikeness | The intent of this example utterance is signalling music dislike. | | music_likeness | The example utterance is related to liking music. | | music_query | The example utterance is a query about music. | | music_settings | The intent of the example utterance is to change music settings. | | news_query | The example utterance is a query about the news. | | play_audiobook | The example utterance is related to playing audiobooks. | | play_game | The intent of this example utterance is to start playing a game. | | play_music | The intent of this example utterance is for an IoT device to play music. | | play_podcasts | The example utterance is related to playing podcasts. | | play_radio | The intent of the example utterance is to play something on the radio. | | qa_currency | This example utteranceis about currencies. | | qa_definition | The example utterance is a query about a definition. | | qa_factoid | The example utterance is a factoid question. | | qa_maths | The example utterance is a question about maths. | | qa_stock | This example utterance is about stocks. | | recommendation_events | This example utterance is about event recommendations. | | recommendation_locations | The intent of this example utterance is receiving recommendations for good locations. | | recommendation_movies | This example utterance is about movie recommendations. | | social_post | The example utterance is about social media posts. | | social_query | The example utterance is a query about a social network. | | takeaway_order | The intent of this example utterance is to order takeaway food. | | takeaway_query | This example utterance is about takeaway food. | | transport_query | The example utterance is a query about transport or travels. | | transport_taxi | The intent of this example utterance is to get a taxi. | | transport_ticket | This example utterance is about transport tickets. | | transport_traffic | This example utterance is about transport or traffic. | | weather_query | This example utterance is a query about the wheather. 
| #### banking77 | label | hypothesis | |:-------------------------------------------------|:----------------------------------------------------------------------------------------------------------| | Refund_not_showing_up | This customer example message is about a refund not showing up. | | activate_my_card | This banking customer example message is about activating a card. | | age_limit | This banking customer example message is related to age limits. | | apple_pay_or_google_pay | This banking customer example message is about apple pay or google pay | | atm_support | This banking customer example message requests ATM support. | | automatic_top_up | This banking customer example message is about automatic top up. | | balance_not_updated_after_bank_transfer | This banking customer example message is about a balance not updated after a transfer. | | balance_not_updated_after_cheque_or_cash_deposit | This banking customer example message is about a balance not updated after a cheque or cash deposit. | | beneficiary_not_allowed | This banking customer example message is related to a beneficiary not being allowed or a failed transfer. | | cancel_transfer | This banking customer example message is related to the cancellation of a transfer. | | card_about_to_expire | This banking customer example message is related to the expiration of a card. | | card_acceptance | This banking customer example message is related to the scope of acceptance of a card. | | card_arrival | This banking customer example message is about the arrival of a card. | | card_delivery_estimate | This banking customer example message is about a card delivery estimate or timing. | | card_linking | nan | | card_not_working | This banking customer example message is about a card not working. | | card_payment_fee_charged | This banking customer example message is about a card payment fee. | | card_payment_not_recognised | This banking customer example message is about a payment the customer does not recognise. | | card_payment_wrong_exchange_rate | This banking customer example message is about a wrong exchange rate. | | card_swallowed | This banking customer example message is about a card swallowed by a machine. | | cash_withdrawal_charge | This banking customer example message is about a cash withdrawal charge. | | cash_withdrawal_not_recognised | This banking customer example message is about an unrecognised cash withdrawal. | | change_pin | This banking customer example message is about changing a pin code. | | compromised_card | This banking customer example message is about a compromised card. | | contactless_not_working | This banking customer example message is about contactless not working | | country_support | This banking customer example message is about country-specific support. | | declined_card_payment | This banking customer example message is about a declined card payment. | | declined_cash_withdrawal | This banking customer example message is about a declined cash withdrawal. | | declined_transfer | This banking customer example message is about a declined transfer. | | direct_debit_payment_not_recognised | This banking customer example message is about an unrecognised direct debit payment. | | disposable_card_limits | This banking customer example message is about the limits of disposable cards. | | edit_personal_details | This banking customer example message is about editing personal details. | | exchange_charge | This banking customer example message is about exchange rate charges. 
| | exchange_rate | This banking customer example message is about exchange rates. | | exchange_via_app | nan | | extra_charge_on_statement | This banking customer example message is about an extra charge. | | failed_transfer | This banking customer example message is about a failed transfer. | | fiat_currency_support | This banking customer example message is about fiat currency support | | get_disposable_virtual_card | This banking customer example message is about getting a disposable virtual card. | | get_physical_card | nan | | getting_spare_card | This banking customer example message is about getting a spare card. | | getting_virtual_card | This banking customer example message is about getting a virtual card. | | lost_or_stolen_card | This banking customer example message is about a lost or stolen card. | | lost_or_stolen_phone | This banking customer example message is about a lost or stolen phone. | | order_physical_card | This banking customer example message is about ordering a card. | | passcode_forgotten | This banking customer example message is about a forgotten passcode. | | pending_card_payment | This banking customer example message is about a pending card payment. | | pending_cash_withdrawal | This banking customer example message is about a pending cash withdrawal. | | pending_top_up | This banking customer example message is about a pending top up. | | pending_transfer | This banking customer example message is about a pending transfer. | | pin_blocked | This banking customer example message is about a blocked pin. | | receiving_money | This banking customer example message is about receiving money. | | request_refund | This banking customer example message is about a refund request. | | reverted_card_payment? | This banking customer example message is about reverting a card payment. | | supported_cards_and_currencies | nan | | terminate_account | This banking customer example message is about terminating an account. | | top_up_by_bank_transfer_charge | nan | | top_up_by_card_charge | This banking customer example message is about the charge for topping up by card. | | top_up_by_cash_or_cheque | This banking customer example message is about topping up by cash or cheque. | | top_up_failed | This banking customer example message is about top up issues or failures. | | top_up_limits | This banking customer example message is about top up limitations. | | top_up_reverted | This banking customer example message is about issues with topping up. | | topping_up_by_card | This banking customer example message is about topping up by card. | | transaction_charged_twice | This banking customer example message is about a transaction charged twice. | | transfer_fee_charged | This banking customer example message is about an issue with a transfer fee charge. | | transfer_into_account | This banking customer example message is about transfers into the customer's own account. | | transfer_not_received_by_recipient | This banking customer example message is about a transfer that has not arrived yet. | | transfer_timing | This banking customer example message is about transfer timing. | | unable_to_verify_identity | This banking customer example message is about an issue with identity verification. | | verify_my_identity | This banking customer example message is about identity verification. | | verify_source_of_funds | This banking customer example message is about the source of funds. 
| | verify_top_up | This banking customer example message is about verification and top ups | | virtual_card_not_working | This banking customer example message is about a virtual card not working | | visa_or_mastercard | This banking customer example message is about types of bank cards. | | why_verify_identity | This banking customer example message questions why identity verification is necessary. | | wrong_amount_of_cash_received | This banking customer example message is about a wrong amount of cash received. | | wrong_exchange_rate_for_cash_withdrawal | This banking customer example message is about a wrong exchange rate for a cash withdrawal. | #### trueteacher | label | hypothesis | |:-----------------------|:---------------------------------------------------------------------| | factually_consistent | The example summary is factually consistent with the full article. | | factually_inconsistent | The example summary is factually inconsistent with the full article. | #### capsotu | label | hypothesis | |:----------------------|:----------------------------------------------------------------------------------------------------------| | Agriculture | This example text from a US presidential speech is about agriculture | | Civil Rights | This example text from a US presidential speech is about civil rights or minorities or civil liberties | | Culture | This example text from a US presidential speech is about cultural policy | | Defense | This example text from a US presidential speech is about defense or military | | Domestic Commerce | This example text from a US presidential speech is about banking or finance or commerce | | Education | This example text from a US presidential speech is about education | | Energy | This example text from a US presidential speech is about energy or electricity or fossil fuels | | Environment | This example text from a US presidential speech is about the environment or water or waste or pollution | | Foreign Trade | This example text from a US presidential speech is about foreign trade | | Government Operations | This example text from a US presidential speech is about government operations or administration | | Health | This example text from a US presidential speech is about health | | Housing | This example text from a US presidential speech is about community development or housing issues | | Immigration | This example text from a US presidential speech is about migration | | International Affairs | This example text from a US presidential speech is about international affairs or foreign aid | | Labor | This example text from a US presidential speech is about employment or labour | | Law and Crime | This example text from a US presidential speech is about law, crime or family issues | | Macroeconomics | This example text from a US presidential speech is about macroeconomics | | Public Lands | This example text from a US presidential speech is about public lands or water management | | Social Welfare | This example text from a US presidential speech is about social welfare | | Technology | This example text from a US presidential speech is about space or science or technology or communications | | Transportation | This example text from a US presidential speech is about transportation | #### manifesto | label | hypothesis | 
|:-------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Agriculture and Farmers: Positive | This example text from a political party manifesto is positive towards policies for agriculture and farmers | | Anti-Growth Economy: Positive | This example text from a political party manifesto is in favour of anti-growth politics | | Anti-Imperialism | This example text from a political party manifesto is anti-imperialistic, for example against controlling other countries and for greater self-government of colonies | | Centralisation | This example text from a political party manifesto is in favour of political centralisation | | Civic Mindedness: Positive | This example text from a political party manifesto is positive towards national solidarity, civil society or appeals for public spiritedness or against anti-social attitudes | | Constitutionalism: Negative | This example text from a political party manifesto is positive towards constitutionalism | | Constitutionalism: Positive | This example text from a political party manifesto is positive towards constitutionalism and the status quo of the constitution | | Controlled Economy | This example text from a political party manifesto is supportive of direct government control of the economy, e.g. price control or minimum wages | | Corporatism/Mixed Economy | This example text from a political party manifesto is positive towards cooperation of government, employers, and trade unions simultaneously | | Culture: Positive | This example text from a political party manifesto is in favour of cultural policies or leisure facilities, for example museus, libraries or public sport clubs | | Decentralization | This example text from a political party manifesto is for decentralisation or federalism | | Democracy | This example text from a political party manifesto favourably mentions democracy or democratic procedures or institutions | | Economic Goals | This example text from a political party manifesto is a broad/general statement on economic goals without specifics | | Economic Growth: Positive | This example text from a political party manifesto is supportive of economic growth, for example facilitation of more production or government aid for growth | | Economic Orthodoxy | This example text from a political party manifesto is for economic orthodoxy, for example reduction of budget deficits, thrift or a strong currency | | Economic Planning | This example text from a political party manifesto is positive towards government economic planning, e.g. policy plans or strategies | | Education Expansion | This example text from a political party manifesto is about the need to expand/improve policy on education | | Education Limitation | This example text from a political party manifesto is sceptical towards state expenditure on education, for example in favour of study fees or private schools | | Environmental Protection | This example text from a political party manifesto is in favour of environmental protection, e.g. fighting climate change or 'green' policies or preservation of natural resources or animal rights | | Equality: Positive | This example text from a political party manifesto is positive towards equality or social justice, e.g. 
protection of underprivileged groups or fair distribution of resources | | European Community/Union: Negative | This example text from a political party manifesto negatively mentions the EU or European Community | | European Community/Union: Positive | This example text from a political party manifesto is positive towards the EU or European Community, for example EU expansion and integration | | Foreign Special Relationships: Negative | This example text from a political party manifesto is negative towards particular countries | | Foreign Special Relationships: Positive | This example text from a political party manifesto is positive towards particular countries | | Free Market Economy | This example text from a political party manifesto is in favour of a free market economy and capitalism | | Freedom and Human Rights | This example text from a political party manifesto is in favour of freedom and human rights, for example freedom of speech, assembly or against state coercion or for individualism | | Governmental and Administrative Efficiency | This example text from a political party manifesto is in favour of efficiency in government/administration, for example by restructuring civil service or improving bureaucracy | | Incentives: Positive | This example text from a political party manifesto is favourable towards supply side economic policies supporting businesses, for example for incentives like subsidies or tax breaks | | Internationalism: Negative | This example text from a political party manifesto is sceptical of internationalism, for example negative towards international cooperation, in favour of national sovereignty and unilaterialism | | Internationalism: Positive | This example text from a political party manifesto is in favour of international cooperation with other countries, for example mentions the need for aid to developing countries, or global governance | | Keynesian Demand Management | This example text from a political party manifesto is for keynesian demand management and demand side economic policies | | Labour Groups: Negative | This example text from a political party manifesto is negative towards labour groups and unions | | Labour Groups: Positive | This example text from a political party manifesto is positive towards labour groups, for example for good working conditions, fair wages or unions | | Law and Order: Positive | This example text from a political party manifesto is positive towards law and order and strict law enforcement | | Market Regulation | This example text from a political party manifesto is supports market regulation for a fair and open market, for example for consumer protection or for increased competition or for social market economy | | Marxist Analysis | This example text from a political party manifesto is positive towards Marxist-Leninist ideas or uses specific Marxist terminology | | Middle Class and Professional Groups | This example text from a political party manifesto favourably references the middle class, e.g. 
white colar groups or the service sector | | Military: Negative | This example text from a political party manifesto is negative towards the military, for example for decreasing military spending or disarmament | | Military: Positive | This example text from a political party manifesto is positive towards the military, for example for military spending or rearmament or military treaty obligations | | Multiculturalism: Negative | This example text from a political party manifesto is sceptical towards multiculturalism, or for cultural integration or appeals to cultural homogeneity in society | | Multiculturalism: Positive | This example text from a political party manifesto favourably mentions cultural diversity, for example for freedom of religion or linguistic heritages | | National Way of Life: Negative | This example text from a political party manifesto unfavourably mentions a country's nation and history, for example sceptical towards patriotism or national pride | | National Way of Life: Positive | This example text from a political party manifesto is positive towards the national way of life and history, for example pride of citizenship or appeals to patriotism | | Nationalisation | This example text from a political party manifesto is positive towards government ownership of industries or land or for economic nationalisation | | Non-economic Demographic Groups | This example text from a political party manifesto favourably mentions non-economic demographic groups like women, students or specific age groups | | Peace | This example text from a political party manifesto is positive towards peace and peaceful means of solving crises, for example in favour of negotiations and ending wars | | Political Authority | This example text from a political party manifesto mentions the speaker's competence to govern or other party's lack of such competence, or favourably mentions a strong/stable government | | Political Corruption | This example text from a political party manifesto is negative towards political corruption or abuse of political/bureaucratic power | | Protectionism: Negative | This example text from a political party manifesto is negative towards protectionism, in favour of free trade | | Protectionism: Positive | This example text from a political party manifesto is in favour of protectionism, for example tariffs, export subsidies | | Technology and Infrastructure: Positive | This example text from a political party manifesto is about technology and infrastructure, e.g. the importance of modernisation of industry, or supportive of public spending on infrastructure/tech | | Traditional Morality: Negative | This example text from a political party manifesto is negative towards traditional morality, for example against religious moral values, for divorce or abortion, for modern families or separation of church and state | | Traditional Morality: Positive | This example text from a political party manifesto is favourable towards traditional or religious values, for example for censorship of immoral behavour, for traditional family values or religious institutions | | Underprivileged Minority Groups | This example text from a political party manifesto favourably mentions underprivileged minorities, for example handicapped, homosexuals or immigrants | | Welfare State Expansion | This example text from a political party manifesto is positive towards the welfare state, e.g. 
health care, pensions or social housing | | Welfare State Limitation | This example text from a political party manifesto is for limiting the welfare state, for example public funding for social services or social security, e.g. private care before state care |
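To complement the pipeline examples above, the sketch below runs the underlying entailment task directly with `AutoModelForSequenceClassification`; label names are read from the model config rather than assumed.

```python
# Minimal sketch: score a single (text, hypothesis) pair with the universal
# entailment format described above, without the zero-shot pipeline.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "MoritzLaurer/deberta-v3-base-zeroshot-v1.1-all-33"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

text = "Angela Merkel is a politician in Germany and leader of the CDU"
hypothesis = "This example is about politics"

inputs = tokenizer(text, hypothesis, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

probs = torch.softmax(logits, dim=-1)[0]
for label_id, label in model.config.id2label.items():
    print(f"{label}: {probs[label_id]:.3f}")
```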
google/vit-base-patch32-224-in21k
google
"2022-12-08T10:59:40Z"
54,736
17
transformers
[ "transformers", "pytorch", "tf", "jax", "safetensors", "vit", "image-feature-extraction", "vision", "dataset:imagenet-21k", "arxiv:2010.11929", "arxiv:2006.03677", "license:apache-2.0", "region:us" ]
image-feature-extraction
"2022-03-02T23:29:05Z"
--- license: apache-2.0 tags: - vision datasets: - imagenet-21k inference: false --- # Vision Transformer (base-sized model) Vision Transformer (ViT) model pre-trained on ImageNet-21k (14 million images, 21,843 classes) at resolution 224x224. It was introduced in the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Dosovitskiy et al. and first released in [this repository](https://github.com/google-research/vision_transformer). However, the weights were converted from the [timm repository](https://github.com/rwightman/pytorch-image-models) by Ross Wightman, who already converted the weights from JAX to PyTorch. Credits go to him. Disclaimer: The team releasing ViT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. Images are presented to the model as a sequence of fixed-size patches (resolution 32x32), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder. Note that this model does not provide any fine-tuned heads, as these were zero'd by Google researchers. However, the model does include the pre-trained pooler, which can be used for downstream tasks (such as image classification). By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=google/vit) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model in PyTorch: ```python from transformers import ViTImageProcessor, ViTModel from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) processor = ViTImageProcessor.from_pretrained('google/vit-base-patch32-224-in21k') model = ViTModel.from_pretrained('google/vit-base-patch32-224-in21k') inputs = processor(images=image, return_tensors="pt") outputs = model(**inputs) last_hidden_state = outputs.last_hidden_state ``` Refer to the [docs](https://huggingface.co/docs/transformers/model_doc/vit) for usage in TensorFlow and JAX/FLAX. ## Training data The ViT model was pretrained on [ImageNet-21k](http://www.image-net.org/), a dataset consisting of 14 million images and 21k classes. ## Training procedure ### Preprocessing The exact details of preprocessing of images during training/validation can be found [here](https://github.com/google-research/vision_transformer/blob/master/vit_jax/input_pipeline.py). Images are resized/rescaled to the same resolution (224x224) and normalized across the RGB channels with mean (0.5, 0.5, 0.5) and standard deviation (0.5, 0.5, 0.5). 
### Pretraining The model was trained on TPUv3 hardware (8 cores). All model variants are trained with a batch size of 4096 and learning rate warmup of 10k steps. For ImageNet, the authors found it beneficial to additionally apply gradient clipping at global norm 1. Pre-training resolution is 224. ## Evaluation results For evaluation results on several image classification benchmarks, we refer to tables 2 and 5 of the original paper. Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance. ### BibTeX entry and citation info ```bibtex @misc{wu2020visual, title={Visual Transformers: Token-based Image Representation and Processing for Computer Vision}, author={Bichen Wu and Chenfeng Xu and Xiaoliang Dai and Alvin Wan and Peizhao Zhang and Zhicheng Yan and Masayoshi Tomizuka and Joseph Gonzalez and Kurt Keutzer and Peter Vajda}, year={2020}, eprint={2006.03677}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` ```bibtex @inproceedings{deng2009imagenet, title={Imagenet: A large-scale hierarchical image database}, author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li}, booktitle={2009 IEEE conference on computer vision and pattern recognition}, pages={248--255}, year={2009}, organization={Ieee} } ```
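As the model description above notes, downstream classification is typically done by placing a linear layer on top of the [CLS] token. The sketch below uses a hypothetical, randomly initialized head that is not part of this checkpoint and would need to be trained on labeled data.

```python
# Minimal sketch: linear classification head on the [CLS] token.
# The head is hypothetical (randomly initialized) and must be trained before use.
import torch
from transformers import ViTImageProcessor, ViTModel
from PIL import Image
import requests

processor = ViTImageProcessor.from_pretrained('google/vit-base-patch32-224-in21k')
backbone = ViTModel.from_pretrained('google/vit-base-patch32-224-in21k')
num_labels = 10  # illustrative number of classes
classifier = torch.nn.Linear(backbone.config.hidden_size, num_labels)

url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    cls_embedding = backbone(**inputs).last_hidden_state[:, 0]  # [CLS] token
logits = classifier(cls_embedding)
print(logits.shape)  # torch.Size([1, 10])
```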
tugstugi/bert-base-mongolian-uncased
tugstugi
"2021-05-20T08:13:09Z"
54,603
1
transformers
[ "transformers", "pytorch", "tf", "jax", "bert", "fill-mask", "mongolian", "uncased", "mn", "arxiv:1810.04805", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2022-03-02T23:29:05Z"
--- language: "mn" tags: - bert - mongolian - uncased --- # BERT-BASE-MONGOLIAN-UNCASED [Link to Official Mongolian-BERT repo](https://github.com/tugstugi/mongolian-bert) ## Model description This repository contains pre-trained Mongolian [BERT](https://arxiv.org/abs/1810.04805) models trained by [tugstugi](https://github.com/tugstugi), [enod](https://github.com/enod) and [sharavsambuu](https://github.com/sharavsambuu). Special thanks to [nabar](https://github.com/nabar) who provided 5x TPUs. This repository is based on the following open source projects: [google-research/bert](https://github.com/google-research/bert/), [huggingface/pytorch-pretrained-BERT](https://github.com/huggingface/pytorch-pretrained-BERT) and [yoheikikuta/bert-japanese](https://github.com/yoheikikuta/bert-japanese). #### How to use ```python from transformers import pipeline, AutoTokenizer, AutoModelForMaskedLM tokenizer = AutoTokenizer.from_pretrained('tugstugi/bert-base-mongolian-uncased', use_fast=False) model = AutoModelForMaskedLM.from_pretrained('tugstugi/bert-base-mongolian-uncased') ## declare task ## pipe = pipeline(task="fill-mask", model=model, tokenizer=tokenizer) ## example ## input_ = 'Миний [MASK] хоол идэх нь тун чухал.' output_ = pipe(input_) for i in range(len(output_)): print(output_[i]) ## output ## #{'sequence': 'миний хувьд хоол идэх нь тун чухал.', 'score': 0.7889143824577332, 'token': 126, 'token_str': 'хувьд'} #{'sequence': 'миний бодлоор хоол идэх нь тун чухал.', 'score': 0.18616807460784912, 'token': 6106, 'token_str': 'бодлоор'} #{'sequence': 'миний зүгээс хоол идэх нь тун чухал.', 'score': 0.004825591575354338, 'token': 761, 'token_str': 'зүгээс'} #{'sequence': 'миний биед хоол идэх нь тун чухал.', 'score': 0.0015743684489279985, 'token': 3010, 'token_str': 'биед'} #{'sequence': 'миний тухайд хоол идэх нь тун чухал.', 'score': 0.0014919431414455175, 'token': 1712, 'token_str': 'тухайд'} ``` ## Training data Mongolian Wikipedia and the 700 million word Mongolian news data set [[Pretraining Procedure](https://github.com/tugstugi/mongolian-bert#pre-training)] ### BibTeX entry and citation info ```bibtex @misc{mongolian-bert, author = {Tuguldur, Erdene-Ochir and Gunchinish, Sharavsambuu and Bataa, Enkhbold}, title = {BERT Pretrained Models on Mongolian Datasets}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/tugstugi/mongolian-bert/}} } ```
Azurro/APT3-1B-Base
Azurro
"2024-01-04T13:16:04Z"
54,592
14
transformers
[ "transformers", "safetensors", "llama", "text-generation", "ALLaMo", "pl", "dataset:chrisociepa/wikipedia-pl-20230401", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-04T08:41:29Z"
--- license: cc-by-nc-4.0 datasets: - chrisociepa/wikipedia-pl-20230401 language: - pl library_name: transformers tags: - llama - ALLaMo inference: false --- # APT3-1B-Base ## Introduction At [Azurro](https://azurro.pl), we consistently place importance on using the Open Source technologies, both while working on the projects and in our everyday lives. We have decided to share a base language model trained by us. We are confident that smaller language models have great potential, and direct access to them for all people that are interested in such models democratizes this significant and dynamically changing field even more. ## Statements Training large language models requires a lot of computing power and it is meant for the major players on the market. However, does it mean that individuals or small companies cannot train language models capable of performing specific tasks? We decided to answer this question and train our own language model from scratch. We have made the following statements: * we use 1 consumer graphic card * we train the model only with the Polish corpus * we use manually selected, high quality texts for training the model. Why have we made such statements? It is worth noting that training a model requires several times more resources than using it. To put it simply, it can be assumed that it is about 3-4 times more. Therefore, if a model can be run with a graphic card that has 6 GB VRAM, then training this model requires about 24 GB VRAM (this is the minimum value). Many consumer computers are equipped with good quality graphic cards that can be used for training a model at one’s own home. This is why we have decided to use a top consumer graphic card - Nvidia’s RTX 4090 24GB VRAM. All the currently available language models have been trained mainly with English corpora with a little bit of other languages, including Polish. The effect is that these models are not the best at dealing with the Polish texts. Even the popular GPT models from OpenAI and Bard from Google often have issues with correct forms. Therefore we have decided to prepare a model based only on the Polish corpus. An additional advantage of using only the Polish corpus is the size of the model - it is better to focus on one language in the case of smaller models. It is important to remember that models are only as good as the data with which they are trained. Given the small size of the model, we trained it with carefully selected texts. This is why we have not used corpora such as Common Crawl that contain a lot of poor-quality data. With close collaboration and advice from the [Speakleash](https://speakleash.org) team, our team has prepared over 285GB of Polish language text corpus that has then been processed and used for training the model. Additionally, the unique feature of our model is that it has been trained on the largest amount of text among all available models for the Polish language. ## Model APT3-1B-Base has been trained with the use of an original open source framework called [ALLaMo](https://github.com/chrisociepa/allamo). This framework allows the user to train language models similar to the Meta AI’s LLaMA models quickly and efficiently. APT3-1B-Base is an autoregressive language model based on the architecture of a transformer. It has been trained with data collected before the end of December 2023. The training dataset (the Polish corpus) has over 60 billion tokens, and we use all of them for training with one epoch. 
A special tokenizer has been prepared and trained for the purpose of training the models in the APT3 series. ### Model description: * **Developed by:** [Azurro](https://azurro.pl) * **Language:** Polish * **Model type:** causal decoder-only * **License:** CC BY NC 4.0 (non-commercial use) ### Model details: | **Hyperparameter** | **Value** | |--------------------|-------------| | Model Parameters | 1041M | | Sequence Length | 2048 | | Vocabulary Size | 31980 | | Layers | 18 | | Heads | 32 | | d_head | 64 | | d_model | 2048 | | Dropout | 0.0 | | Bias | No | | Positional Encoding | RoPE | | Activation Function | SwiGLU | | Normalizing Function | RMSNorm | | Intermediate Size | 5504 | | Norm Epsilon | 1e-06 | ### Tokenizer details: * type: BPE * special tokens: 8 (`<unk>`, `<s>`, `</s>`, `<pad>`, `[INST]`, `[/INST]`, `<<SYS>>`, `<</SYS>>`) * alphabet size: 113 * vocabulary size: 31980 ## Training * Framework: [ALLaMo](https://github.com/chrisociepa/allamo) * Visualizations: [W&B](https://wandb.ai) <p align="center"> <img src="https://huggingface.co/Azurro/APT3-1B-Base/raw/main/apt3-1b-base-train.jpg"> </p> <p align="center"> <img src="https://huggingface.co/Azurro/APT3-1B-Base/raw/main/apt3-1b-base-eval.jpg"> </p> ### Training hyperparameters: | **Hyperparameter** | **Value** | |-----------------------------|------------------| | Micro Batch Size | 1 | | Gradient Accumulation Steps | 1024 | | Batch Size | 2097152 | | Learning Rate (cosine) | 2e-04 -> 2e-05 | | Warmup Iterations | 1000 | | All Iterations | 28900 | | Optimizer | AdamW | | β1, β2 | 0.9, 0.95 | | Adam_eps | 1e−8 | | Weight Decay | 0.1 | | Grad Clip | 1.0 | | Precision | bfloat16 | ### Dataset Collecting a large amount of high quality training data is a great challenge. Over the past years at Azurro, we have done a lot of projects connected with processing Big Data. Therefore, with our extensive experience, we have been able to prepare carefully selected training dataset quickly and efficiently. Our close collaboration with the Speakleash team has resulted in the creation of over 285GB of the Polish language text corpus. The process of preparing the training dataset involved transforming documents by applying various cleaning and repairing rules, followed by selecting documents of appropriate quality. Our training dataset contains: * 150 datasets from [Speakleash](https://speakleash.org) - 93% * other publicly available and crawled web data - 6% * Polish Wikipedia - 1% ### Quickstart This model can be easily loaded using the AutoModelForCausalLM functionality. ```python from transformers import AutoTokenizer, AutoModelForCausalLM model_name = "Azurro/APT3-1B-Base" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name) ``` In order to reduce the memory usage, you can use smaller precision (`bfloat16`). ```python import torch model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16) ``` And then you can use Hugging Face Pipelines to generate text: ```python import transformers text = "Najważniejszym celem człowieka na ziemi jest" pipeline = transformers.pipeline("text-generation", model=model, tokenizer=tokenizer) sequences = pipeline(max_new_tokens=100, do_sample=True, top_k=50, eos_token_id=tokenizer.eos_token_id, text_inputs=text) for seq in sequences: print(f"Result: {seq['generated_text']}") ``` Generated output: > Najważniejszym celem człowieka na ziemi jest życie w pokoju, harmonii i miłości. 
Dla każdego z nas bardzo ważne jest, aby otaczać się kochanymi osobami. ## Limitations and Biases APT3-1B-Base is not intended for deployment without fine-tuning. It should not be used for human-facing interactions without further guardrails and user consent. APT3-1B-Base can produce factually incorrect output, and should not be relied on to produce factually accurate information. APT3-1B-Base was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs. ## License Because of an unclear legal situation, we have decided to publish the model under CC BY NC 4.0 license - it allows for non-commercial use. The model can be used for scientific purposes and privately, as long as the license conditions are met. ## Disclaimer The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. ## Citation Please cite this model using the following format: ``` @online{AzurroAPT3Base1B, author = {Krzysztof Ociepa, Azurro}, title = {Introducing APT3-1B-Base: Polish Language Model}, year = {2024}, url = {www.azurro.pl/apt3-1b-base-en}, note = {Accessed: 2024-01-04}, % change this date urldate = {2024-01-04} % change this date } ``` ## Special thanks We would like to especially thank the [Speakleash](https://speakleash.org) team for collecting and sharing texts in Polish, and for the support we could always count on while preparing the training set for our model. Without you, it would not have been possible to train this model. Thank you! ## The Azurro Team Please find more information on the Azurro [homepage](https://azurro.pl). ## Contact Us If you have any questions or suggestions, please use the discussion tab. If you want to contact us directly, drop an email to [contact@azurro.pl](mailto:contact@azurro.pl).
mradermacher/AceGPT-v2-32B-Chat-i1-GGUF
mradermacher
"2024-06-24T12:57:04Z"
54,571
0
transformers
[ "transformers", "gguf", "ar", "zh", "en", "base_model:FreedomIntelligence/AceGPT-v2-32B-Chat", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-23T19:32:00Z"
--- base_model: FreedomIntelligence/AceGPT-v2-32B-Chat language: - ar - zh - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/FreedomIntelligence/AceGPT-v2-32B-Chat <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/AceGPT-v2-32B-Chat-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/AceGPT-v2-32B-Chat-i1-GGUF/resolve/main/AceGPT-v2-32B-Chat.i1-IQ1_S.gguf) | i1-IQ1_S | 7.3 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/AceGPT-v2-32B-Chat-i1-GGUF/resolve/main/AceGPT-v2-32B-Chat.i1-IQ1_M.gguf) | i1-IQ1_M | 8.0 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/AceGPT-v2-32B-Chat-i1-GGUF/resolve/main/AceGPT-v2-32B-Chat.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.1 | | | [GGUF](https://huggingface.co/mradermacher/AceGPT-v2-32B-Chat-i1-GGUF/resolve/main/AceGPT-v2-32B-Chat.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.0 | | | [GGUF](https://huggingface.co/mradermacher/AceGPT-v2-32B-Chat-i1-GGUF/resolve/main/AceGPT-v2-32B-Chat.i1-IQ2_S.gguf) | i1-IQ2_S | 10.4 | | | [GGUF](https://huggingface.co/mradermacher/AceGPT-v2-32B-Chat-i1-GGUF/resolve/main/AceGPT-v2-32B-Chat.i1-IQ2_M.gguf) | i1-IQ2_M | 11.3 | | | [GGUF](https://huggingface.co/mradermacher/AceGPT-v2-32B-Chat-i1-GGUF/resolve/main/AceGPT-v2-32B-Chat.i1-Q2_K.gguf) | i1-Q2_K | 12.3 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/AceGPT-v2-32B-Chat-i1-GGUF/resolve/main/AceGPT-v2-32B-Chat.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 12.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/AceGPT-v2-32B-Chat-i1-GGUF/resolve/main/AceGPT-v2-32B-Chat.i1-IQ3_XS.gguf) | i1-IQ3_XS | 13.7 | | | [GGUF](https://huggingface.co/mradermacher/AceGPT-v2-32B-Chat-i1-GGUF/resolve/main/AceGPT-v2-32B-Chat.i1-Q3_K_S.gguf) | i1-Q3_K_S | 14.4 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/AceGPT-v2-32B-Chat-i1-GGUF/resolve/main/AceGPT-v2-32B-Chat.i1-IQ3_S.gguf) | i1-IQ3_S | 14.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/AceGPT-v2-32B-Chat-i1-GGUF/resolve/main/AceGPT-v2-32B-Chat.i1-IQ3_M.gguf) | i1-IQ3_M | 14.8 | | | [GGUF](https://huggingface.co/mradermacher/AceGPT-v2-32B-Chat-i1-GGUF/resolve/main/AceGPT-v2-32B-Chat.i1-Q3_K_M.gguf) | i1-Q3_K_M | 15.9 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/AceGPT-v2-32B-Chat-i1-GGUF/resolve/main/AceGPT-v2-32B-Chat.i1-Q3_K_L.gguf) | i1-Q3_K_L | 17.2 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/AceGPT-v2-32B-Chat-i1-GGUF/resolve/main/AceGPT-v2-32B-Chat.i1-IQ4_XS.gguf) | i1-IQ4_XS | 17.7 | | | [GGUF](https://huggingface.co/mradermacher/AceGPT-v2-32B-Chat-i1-GGUF/resolve/main/AceGPT-v2-32B-Chat.i1-Q4_0.gguf) | i1-Q4_0 | 18.7 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/AceGPT-v2-32B-Chat-i1-GGUF/resolve/main/AceGPT-v2-32B-Chat.i1-Q4_K_S.gguf) | i1-Q4_K_S | 18.7 | optimal size/speed/quality | | 
[GGUF](https://huggingface.co/mradermacher/AceGPT-v2-32B-Chat-i1-GGUF/resolve/main/AceGPT-v2-32B-Chat.i1-Q4_K_M.gguf) | i1-Q4_K_M | 19.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/AceGPT-v2-32B-Chat-i1-GGUF/resolve/main/AceGPT-v2-32B-Chat.i1-Q5_K_S.gguf) | i1-Q5_K_S | 22.6 | | | [GGUF](https://huggingface.co/mradermacher/AceGPT-v2-32B-Chat-i1-GGUF/resolve/main/AceGPT-v2-32B-Chat.i1-Q5_K_M.gguf) | i1-Q5_K_M | 23.2 | | | [GGUF](https://huggingface.co/mradermacher/AceGPT-v2-32B-Chat-i1-GGUF/resolve/main/AceGPT-v2-32B-Chat.i1-Q6_K.gguf) | i1-Q6_K | 26.8 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants. <!-- end -->
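As a quick, hedged illustration of the "Usage" note above, the sketch below loads one of these quants with the `llama-cpp-python` bindings. The local file name, context size, and GPU-offload settings are assumptions to adapt to the quant you actually download; they are not part of the original card.

```python
# Minimal sketch: run a downloaded i1-Q4_K_M quant with llama-cpp-python.
# The file name and n_gpu_layers value below are assumptions - adjust to your setup.
from llama_cpp import Llama

llm = Llama(
    model_path="AceGPT-v2-32B-Chat.i1-Q4_K_M.gguf",  # path to the downloaded GGUF file
    n_ctx=4096,        # context window to allocate
    n_gpu_layers=-1,   # offload all layers to the GPU if VRAM allows; use 0 for CPU-only
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself briefly in Arabic."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```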
Systran/faster-whisper-small
Systran
"2023-11-23T10:57:09Z"
54,563
6
ctranslate2
[ "ctranslate2", "audio", "automatic-speech-recognition", "en", "zh", "de", "es", "ru", "ko", "fr", "ja", "pt", "tr", "pl", "ca", "nl", "ar", "sv", "it", "id", "hi", "fi", "vi", "he", "uk", "el", "ms", "cs", "ro", "da", "hu", "ta", "no", "th", "ur", "hr", "bg", "lt", "la", "mi", "ml", "cy", "sk", "te", "fa", "lv", "bn", "sr", "az", "sl", "kn", "et", "mk", "br", "eu", "is", "hy", "ne", "mn", "bs", "kk", "sq", "sw", "gl", "mr", "pa", "si", "km", "sn", "yo", "so", "af", "oc", "ka", "be", "tg", "sd", "gu", "am", "yi", "lo", "uz", "fo", "ht", "ps", "tk", "nn", "mt", "sa", "lb", "my", "bo", "tl", "mg", "as", "tt", "haw", "ln", "ha", "ba", "jw", "su", "license:mit", "region:us" ]
automatic-speech-recognition
"2023-11-23T09:53:51Z"
--- language: - en - zh - de - es - ru - ko - fr - ja - pt - tr - pl - ca - nl - ar - sv - it - id - hi - fi - vi - he - uk - el - ms - cs - ro - da - hu - ta - 'no' - th - ur - hr - bg - lt - la - mi - ml - cy - sk - te - fa - lv - bn - sr - az - sl - kn - et - mk - br - eu - is - hy - ne - mn - bs - kk - sq - sw - gl - mr - pa - si - km - sn - yo - so - af - oc - ka - be - tg - sd - gu - am - yi - lo - uz - fo - ht - ps - tk - nn - mt - sa - lb - my - bo - tl - mg - as - tt - haw - ln - ha - ba - jw - su tags: - audio - automatic-speech-recognition license: mit library_name: ctranslate2 --- # Whisper small model for CTranslate2 This repository contains the conversion of [openai/whisper-small](https://huggingface.co/openai/whisper-small) to the [CTranslate2](https://github.com/OpenNMT/CTranslate2) model format. This model can be used in CTranslate2 or projects based on CTranslate2 such as [faster-whisper](https://github.com/systran/faster-whisper). ## Example ```python from faster_whisper import WhisperModel model = WhisperModel("small") segments, info = model.transcribe("audio.mp3") for segment in segments: print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text)) ``` ## Conversion details The original model was converted with the following command: ``` ct2-transformers-converter --model openai/whisper-small --output_dir faster-whisper-small \ --copy_files tokenizer.json --quantization float16 ``` Note that the model weights are saved in FP16. This type can be changed when the model is loaded using the [`compute_type` option in CTranslate2](https://opennmt.net/CTranslate2/quantization.html). ## More information **For more information about the original model, see its [model card](https://huggingface.co/openai/whisper-small).**
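As a small, hedged addition to the FP16 note above: the `compute_type` (and `device`) arguments of `WhisperModel` let you override the stored precision at load time. The specific combinations below are illustrative assumptions, not recommendations from the original card.

```python
from faster_whisper import WhisperModel

# Run the FP16 weights as 8-bit integers on CPU, or keep float16 on a CUDA GPU.
model = WhisperModel("small", device="cpu", compute_type="int8")
# model = WhisperModel("small", device="cuda", compute_type="float16")

segments, info = model.transcribe("audio.mp3", beam_size=5)
print("Detected language:", info.language)
for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```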
d4data/biomedical-ner-all
d4data
"2023-07-02T07:28:28Z"
54,464
149
transformers
[ "transformers", "pytorch", "safetensors", "distilbert", "token-classification", "Token Classification", "en", "license:apache-2.0", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2022-06-19T14:04:18Z"
--- license: apache-2.0 language: - en tags: - Token Classification co2_eq_emissions: 0.0279399890043426 widget: - text: "CASE: A 28-year-old previously healthy man presented with a 6-week history of palpitations. The symptoms occurred during rest, 2–3 times per week, lasted up to 30 minutes at a time and were associated with dyspnea. Except for a grade 2/6 holosystolic tricuspid regurgitation murmur (best heard at the left sternal border with inspiratory accentuation), physical examination yielded unremarkable findings." example_title: "example 1" - text: "A 63-year-old woman with no known cardiac history presented with a sudden onset of dyspnea requiring intubation and ventilatory support out of hospital. She denied preceding symptoms of chest discomfort, palpitations, syncope or infection. The patient was afebrile and normotensive, with a sinus tachycardia of 140 beats/min." example_title: "example 2" - text: "A 48 year-old female presented with vaginal bleeding and abnormal Pap smears. Upon diagnosis of invasive non-keratinizing SCC of the cervix, she underwent a radical hysterectomy with salpingo-oophorectomy which demonstrated positive spread to the pelvic lymph nodes and the parametrium. Pathological examination revealed that the tumour also extensively involved the lower uterine segment." example_title: "example 3" --- ## About the Model An English Named Entity Recognition model, trained on Maccrobat to recognize the bio-medical entities (107 entities) from a given text corpus (case reports etc.). This model was built on top of distilbert-base-uncased - Dataset: Maccrobat https://figshare.com/articles/dataset/MACCROBAT2018/9764942 - Carbon emission: 0.0279399890043426 Kg - Training time: 30.16527 minutes - GPU used : 1 x GeForce RTX 3060 Laptop GPU Checkout the tutorial video for explanation of this model and corresponding python library: https://youtu.be/xpiDPdBpS18 ## Usage The easiest way is to load the inference api from huggingface and second method is through the pipeline object offered by transformers library. ```python from transformers import pipeline from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("d4data/biomedical-ner-all") model = AutoModelForTokenClassification.from_pretrained("d4data/biomedical-ner-all") pipe = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple") # pass device=0 if using gpu pipe("""The patient reported no recurrence of palpitations at follow-up 6 months after the ablation.""") ``` ## Author This model is part of the Research topic "AI in Biomedical field" conducted by Deepak John Reji, Shaina Raza. If you use this work (code, model or dataset), please star at: > https://github.com/dreji18/Bio-Epidemiology-NER ## You can support me here :) <a href="https://www.buymeacoffee.com/deepakjohnreji" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a>
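For completeness, here is a hedged sketch of the first option mentioned in the Usage section above - calling the hosted Inference API over HTTP instead of loading the model locally. The endpoint URL follows the standard Hugging Face Inference API pattern, and the token placeholder is an assumption you must replace with your own.

```python
import requests

API_URL = "https://api-inference.huggingface.co/models/d4data/biomedical-ner-all"
headers = {"Authorization": "Bearer hf_your_token_here"}  # replace with your own access token

def query(text):
    # Send the text to the hosted model and return the list of detected biomedical entities.
    response = requests.post(API_URL, headers=headers, json={"inputs": text})
    return response.json()

print(query("The patient reported no recurrence of palpitations at follow-up 6 months after the ablation."))
```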
google/vit-large-patch32-384
google
"2022-01-28T10:24:24Z"
54,148
10
transformers
[ "transformers", "pytorch", "tf", "jax", "vit", "image-classification", "vision", "dataset:imagenet", "dataset:imagenet-21k", "arxiv:2010.11929", "arxiv:2006.03677", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2022-03-02T23:29:05Z"
--- license: apache-2.0 tags: - image-classification - vision datasets: - imagenet - imagenet-21k --- # Vision Transformer (large-sized model) Vision Transformer (ViT) model pre-trained on ImageNet-21k (14 million images, 21,843 classes) at resolution 224x224, and fine-tuned on ImageNet 2012 (1 million images, 1,000 classes) at resolution 384x384. It was introduced in the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Dosovitskiy et al. and first released in [this repository](https://github.com/google-research/vision_transformer). However, the weights were converted from the [timm repository](https://github.com/rwightman/pytorch-image-models) by Ross Wightman, who already converted the weights from JAX to PyTorch. Credits go to him. Disclaimer: The team releasing ViT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. Next, the model was fine-tuned on ImageNet (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, at a higher resolution of 384x384. Images are presented to the model as a sequence of fixed-size patches (resolution 32x32), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder. By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=google/vit) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import ViTFeatureExtractor, ViTForImageClassification from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) feature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-large-patch32-384') model = ViTForImageClassification.from_pretrained('google/vit-large-patch32-384') inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # model predicts one of the 1000 ImageNet classes predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ``` Currently, both the feature extractor and model support PyTorch. Tensorflow and JAX/FLAX are coming soon, and the API of ViTFeatureExtractor might change. 
## Training data The ViT model was pretrained on [ImageNet-21k](http://www.image-net.org/), a dataset consisting of 14 million images and 21k classes, and fine-tuned on [ImageNet](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes. ## Training procedure ### Preprocessing The exact details of preprocessing of images during training/validation can be found [here](https://github.com/google-research/vision_transformer/blob/master/vit_jax/input_pipeline.py). Images are resized/rescaled to the same resolution (224x224 during pre-training, 384x384 during fine-tuning) and normalized across the RGB channels with mean (0.5, 0.5, 0.5) and standard deviation (0.5, 0.5, 0.5). ### Pretraining The model was trained on TPUv3 hardware (8 cores). All model variants are trained with a batch size of 4096 and learning rate warmup of 10k steps. For ImageNet, the authors found it beneficial to additionally apply gradient clipping at global norm 1. Pre-training resolution is 224. ## Evaluation results For evaluation results on several image classification benchmarks, we refer to tables 2 and 5 of the original paper. Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance. ### BibTeX entry and citation info ```bibtex @misc{wu2020visual, title={Visual Transformers: Token-based Image Representation and Processing for Computer Vision}, author={Bichen Wu and Chenfeng Xu and Xiaoliang Dai and Alvin Wan and Peizhao Zhang and Zhicheng Yan and Masayoshi Tomizuka and Joseph Gonzalez and Kurt Keutzer and Peter Vajda}, year={2020}, eprint={2006.03677}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` ```bibtex @inproceedings{deng2009imagenet, title={Imagenet: A large-scale hierarchical image database}, author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li}, booktitle={2009 IEEE conference on computer vision and pattern recognition}, pages={248--255}, year={2009}, organization={Ieee} } ```
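As an illustrative aside (not part of the original card), the resize/normalize steps described in the Preprocessing section can be written out with torchvision transforms; `ViTFeatureExtractor` performs roughly the same operations internally, so this is a sketch for intuition rather than a drop-in replacement.

```python
from torchvision import transforms

# Mirrors the preprocessing described above for the fine-tuning resolution (384x384):
# resize, scale pixel values to [0, 1], then normalize each RGB channel with mean/std 0.5.
preprocess = transforms.Compose([
    transforms.Resize((384, 384)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
])

# pixel_values = preprocess(pil_image).unsqueeze(0)  # shape: (1, 3, 384, 384)
```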
primeline/whisper-large-v3-german
primeline
"2024-06-06T17:43:47Z"
54,054
56
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "de", "doi:10.57967/hf/2239", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2023-11-08T12:05:44Z"
--- license: apache-2.0 language: - de library_name: transformers pipeline_tag: automatic-speech-recognition model-index: - name: whisper-large-v3-german by Florian Zimmermeister @primeLine results: - task: type: automatic-speech-recognition name: Speech Recognition dataset: name: Common Voice de type: common_voice_15 args: de metrics: - type: wer value: 3.002 % name: Test WER - type: cer value: 0.81 % name: Test CER --- ### Summary This model map provides information about a model based on Whisper Large v3 that has been fine-tuned for speech recognition in German. Whisper is a powerful speech recognition platform developed by OpenAI. This model has been specially optimized for processing and recognizing German speech. ### Applications This model can be used in various application areas, including - Transcription of spoken German language - Voice commands and voice control - Automatic subtitling for German videos - Voice-based search queries in German - Dictation functions in word processing programs ## Model family | Model | Parameters | link | |----------------------------------|------------|--------------------------------------------------------------| | Whisper large v3 german | 1.54B | [link](https://huggingface.co/primeline/whisper-large-v3-german) | | Distil-whisper large v3 german | 756M | [link](https://huggingface.co/primeline/distil-whisper-large-v3-german) | | tiny whisper | 37.8M | [link](https://huggingface.co/primeline/whisper-tiny-german) | ### Training data The training data for this model includes a large amount of spoken German from various sources. The data was carefully selected and processed to optimize recognition performance. ### Training process The training of the model was performed with the following hyperparameters - Batch size: 1024 - Epochs: 2 - Learning rate: 1e-5 - Data augmentation: No ### How to use ```python import torch from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline from datasets import load_dataset device = "cuda:0" if torch.cuda.is_available() else "cpu" torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32 model_id = "primeline/whisper-large-v3-german" model = AutoModelForSpeechSeq2Seq.from_pretrained( model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True ) model.to(device) processor = AutoProcessor.from_pretrained(model_id) pipe = pipeline( "automatic-speech-recognition", model=model, tokenizer=processor.tokenizer, feature_extractor=processor.feature_extractor, max_new_tokens=128, chunk_length_s=30, batch_size=16, return_timestamps=True, torch_dtype=torch_dtype, device=device, ) dataset = load_dataset("distil-whisper/librispeech_long", "clean", split="validation") sample = dataset[0]["audio"] result = pipe(sample) print(result["text"]) ``` ## [About us](https://primeline-ai.com/en/) [![primeline AI](https://primeline-ai.com/wp-content/uploads/2024/02/pl_ai_bildwortmarke_original.svg)](https://primeline-ai.com/en/) Your partner for AI infrastructure in Germany <br> Experience the powerful AI infrastructure that drives your ambitions in Deep Learning, Machine Learning & High-Performance Computing. Optimized for AI training and inference. Model author: [Florian Zimmermeister](https://huggingface.co/flozi00)
QCRI/PropagandaTechniquesAnalysis-en-BERT
QCRI
"2021-05-19T11:27:07Z"
53,944
4
transformers
[ "transformers", "pytorch", "bert", "propaganda", "en", "endpoints_compatible", "region:us" ]
null
"2022-03-02T23:29:04Z"
--- language: "en" thumbnail: "https://pbs.twimg.com/profile_images/1092721745994440704/d6R-AHzj_400x400.jpg" tags: - propaganda - bert license: "MIT" datasets: - metrics: - --- Propaganda Techniques Analysis BERT ---- This model is a BERT based model to make predictions of propaganda techniques in news articles in English. The model is described in [this paper](https://propaganda.qcri.org/papers/EMNLP_2019__Fine_Grained_Propaganda_Detection.pdf). ## Model description Please find propaganda definition here: https://propaganda.qcri.org/annotations/definitions.html You can also try the model in action here: https://www.tanbih.org/prta ### How to use ```python >>> from transformers import BertTokenizerFast >>> from .model import BertForTokenAndSequenceJointClassification >>> >>> tokenizer = BertTokenizerFast.from_pretrained('bert-base-cased') >>> model = BertForTokenAndSequenceJointClassification.from_pretrained( >>> "QCRI/PropagandaTechniquesAnalysis-en-BERT", >>> revision="v0.1.0", >>> ) >>> >>> inputs = tokenizer.encode_plus("Hello, my dog is cute", return_tensors="pt") >>> outputs = model(**inputs) >>> sequence_class_index = torch.argmax(outputs.sequence_logits, dim=-1) >>> sequence_class = model.sequence_tags[sequence_class_index[0]] >>> token_class_index = torch.argmax(outputs.token_logits, dim=-1) >>> tokens = tokenizer.convert_ids_to_tokens(inputs.input_ids[0][1:-1]) >>> tags = [model.token_tags[i] for i in token_class_index[0].tolist()[1:-1]] ``` ### BibTeX entry and citation info ```bibtex @inproceedings{da-san-martino-etal-2019-fine, title = "Fine-Grained Analysis of Propaganda in News Article", author = "Da San Martino, Giovanni and Yu, Seunghak and Barr{\'o}n-Cede{\~n}o, Alberto and Petrov, Rostislav and Nakov, Preslav", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", month = nov, year = "2019", address = "Hong Kong, China", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/D19-1565", doi = "10.18653/v1/D19-1565", pages = "5636--5646", abstract = "Propaganda aims at influencing people{'}s mindset with the purpose of advancing a specific agenda. Previous work has addressed propaganda detection at document level, typically labelling all articles from a propagandistic news outlet as propaganda. Such noisy gold labels inevitably affect the quality of any learning system trained on them. A further issue with most existing systems is the lack of explainability. To overcome these limitations, we propose a novel task: performing fine-grained analysis of texts by detecting all fragments that contain propaganda techniques as well as their type. In particular, we create a corpus of news articles manually annotated at fragment level with eighteen propaganda techniques and propose a suitable evaluation measure. We further design a novel multi-granularity neural network, and we show that it outperforms several strong BERT-based baselines.", } ```
facebook/dino-vits8
facebook
"2024-02-29T10:25:19Z"
53,893
11
transformers
[ "transformers", "pytorch", "safetensors", "vit", "image-feature-extraction", "dino", "vision", "dataset:imagenet-1k", "arxiv:2104.14294", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-feature-extraction
"2022-03-02T23:29:05Z"
--- license: apache-2.0 tags: - dino - vision datasets: - imagenet-1k --- # Vision Transformer (small-sized model, patch size 8) trained using DINO Vision Transformer (ViT) model trained using the DINO method. It was introduced in the paper [Emerging Properties in Self-Supervised Vision Transformers](https://arxiv.org/abs/2104.14294) by Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin and first released in [this repository](https://github.com/facebookresearch/dino). Disclaimer: The team releasing DINO did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-1k, at a resolution of 224x224 pixels. Images are presented to the model as a sequence of fixed-size patches (resolution 8x8), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder. Note that this model does not include any fine-tuned heads. By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=google/vit) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model: ```python from transformers import ViTImageProcessor, ViTModel from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) processor = ViTImageProcessor.from_pretrained('facebook/dino-vits8') model = ViTModel.from_pretrained('facebook/dino-vits8') inputs = processor(images=image, return_tensors="pt") outputs = model(**inputs) last_hidden_states = outputs.last_hidden_state ``` ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2104-14294, author = {Mathilde Caron and Hugo Touvron and Ishan Misra and Herv{\'{e}} J{\'{e}}gou and Julien Mairal and Piotr Bojanowski and Armand Joulin}, title = {Emerging Properties in Self-Supervised Vision Transformers}, journal = {CoRR}, volume = {abs/2104.14294}, year = {2021}, url = {https://arxiv.org/abs/2104.14294}, archivePrefix = {arXiv}, eprint = {2104.14294}, timestamp = {Tue, 04 May 2021 15:12:43 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2104-14294.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
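Building on the snippet in "How to use", the [CLS] token of the last hidden state can serve as a global image descriptor, e.g. for image similarity. The sketch below illustrates that idea with the same processor/model classes; the similarity use case itself is an assumption, not something prescribed by the card.

```python
from transformers import ViTImageProcessor, ViTModel
from PIL import Image
import requests
import torch

processor = ViTImageProcessor.from_pretrained('facebook/dino-vits8')
model = ViTModel.from_pretrained('facebook/dino-vits8')

def embed(url):
    # Return an L2-normalized [CLS] embedding for the image at `url`.
    image = Image.open(requests.get(url, stream=True).raw)
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    cls = outputs.last_hidden_state[:, 0]  # [CLS] token as a global descriptor
    return torch.nn.functional.normalize(cls, dim=-1)

a = embed('http://images.cocodataset.org/val2017/000000039769.jpg')
b = embed('http://images.cocodataset.org/val2017/000000039769.jpg')
print("cosine similarity:", (a * b).sum().item())
```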
stablediffusionapi/mklan-xxx-nsfw-pony
stablediffusionapi
"2024-05-28T22:36:09Z"
53,880
3
diffusers
[ "diffusers", "modelslab.com", "stable-diffusion-api", "text-to-image", "ultra-realistic", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
"2024-05-28T22:32:59Z"
--- license: creativeml-openrail-m tags: - modelslab.com - stable-diffusion-api - text-to-image - ultra-realistic pinned: true --- # mklan-xxx-nsfw-pony API Inference ![generated from modelslab.com](https://pub-3626123a908346a7a8be8d9295f44e26.r2.dev/generations/1502498641716935432.png) ## Get API Key Get your API key from [ModelsLab API](http://modelslab.com) - no payment needed. Replace the key in the code below and change **model_id** to "mklan-xxx-nsfw-pony". Coding in PHP/Node/Java etc.? Have a look at the docs for more code examples: [View docs](https://docs.modelslab.com) Try the model for free: [Generate Images](https://modelslab.com/models/mklan-xxx-nsfw-pony) Model link: [View model](https://modelslab.com/models/mklan-xxx-nsfw-pony) View all models: [View Models](https://modelslab.com/models)

```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
  "key": "your_api_key",
  "model_id": "mklan-xxx-nsfw-pony",
  "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
  "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
  "width": "512",
  "height": "512",
  "samples": "1",
  "num_inference_steps": "30",
  "safety_checker": "no",
  "enhance_prompt": "yes",
  "seed": None,
  "guidance_scale": 7.5,
  "multi_lingual": "no",
  "panorama": "no",
  "self_attention": "no",
  "upscale": "no",
  "embeddings": "embeddings_model_id",
  "lora": "lora_model_id",
  "webhook": None,
  "track_id": None
})

headers = {
  'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```

> Use this coupon code to get 25% off **DMGG0RBN**
transformersbook/pegasus-samsum
transformersbook
"2022-02-05T17:05:28Z"
53,820
6
transformers
[ "transformers", "pytorch", "tensorboard", "pegasus", "text2text-generation", "generated_from_trainer", "dataset:samsum", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
"2022-03-02T23:29:05Z"
--- tags: - generated_from_trainer datasets: - samsum model-index: - name: pegasus-samsum-test results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pegasus-samsum-test This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset. The model is trained in Chapter 6: Summarization in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/06_summarization.ipynb). It achieves the following results on the evaluation set: - Loss: 1.4875 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.7012 | 0.54 | 500 | 1.4875 | ### Framework versions - Transformers 4.12.0.dev0 - Pytorch 1.9.1+cu102 - Datasets 1.12.1 - Tokenizers 0.10.3
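Since this auto-generated card does not include a usage snippet, here is a minimal, hedged sketch of running the checkpoint through the summarization pipeline. The example dialogue and generation settings are illustrative assumptions, not outputs or settings from the original training run.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint for dialogue summarization (trained on SAMSum).
summarizer = pipeline("summarization", model="transformersbook/pegasus-samsum")

dialogue = """Anna: Are we still meeting at 6 tonight?
Tom: Yes, but can we push it to 6:30? My train is delayed.
Anna: Sure, see you at the cafe at 6:30 then."""

summary = summarizer(dialogue, max_length=60, min_length=5, do_sample=False)
print(summary[0]["summary_text"])
```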
CAMeL-Lab/bert-base-arabic-camelbert-da-sentiment
CAMeL-Lab
"2021-10-17T11:15:54Z"
53,782
38
transformers
[ "transformers", "pytorch", "tf", "bert", "text-classification", "ar", "arxiv:2103.06678", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2022-03-02T23:29:04Z"
--- language: - ar license: apache-2.0 widget: - text: "أنا بخير" --- # CAMeLBERT-DA SA Model ## Model description **CAMeLBERT-DA SA Model** is a Sentiment Analysis (SA) model that was built by fine-tuning the [CAMeLBERT Dialectal Arabic (DA)](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-da/) model. For the fine-tuning, we used the [ASTD](https://aclanthology.org/D15-1299.pdf), [ArSAS](http://lrec-conf.org/workshops/lrec2018/W30/pdf/22_W30.pdf), and [SemEval](https://aclanthology.org/S17-2088.pdf) datasets. Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)." * Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT). ## Intended uses You can use the CAMeLBERT-DA SA model directly as part of our [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) SA component (*recommended*) or as part of the transformers pipeline. #### How to use To use the model with the [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) SA component: ```python >>> from camel_tools.sentiment import SentimentAnalyzer >>> sa = SentimentAnalyzer("CAMeL-Lab/bert-base-arabic-camelbert-da-sentiment") >>> sentences = ['أنا بخير', 'أنا لست بخير'] >>> sa.predict(sentences) >>> ['positive', 'negative'] ``` You can also use the SA model directly with a transformers pipeline: ```python >>> from transformers import pipeline >>> sa = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-da-sentiment') >>> sentences = ['أنا بخير', 'أنا لست بخير'] >>> sa(sentences) [{'label': 'positive', 'score': 0.9616648554801941}, {'label': 'negative', 'score': 0.9779177904129028}] ``` *Note*: to download our models, you would need `transformers>=3.5.0`. Otherwise, you could download the models manually. ## Citation ```bibtex @inproceedings{inoue-etal-2021-interplay, title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models", author = "Inoue, Go and Alhafni, Bashar and Baimukan, Nurpeiis and Bouamor, Houda and Habash, Nizar", booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop", month = apr, year = "2021", address = "Kyiv, Ukraine (Online)", publisher = "Association for Computational Linguistics", abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.", } ```
digiplay/insaneRealistic_v1
digiplay
"2023-12-01T23:55:52Z"
53,727
12
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:other", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2023-07-14T02:28:38Z"
--- license: other tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- Model info : https://civitai.com/models/108585/insane-realistic-v10 ![Screenshot_20230714_103126_Vivaldi Browser Snapshot.jpg](https://cdn-uploads.huggingface.co/production/uploads/646c83c871d0c8a6e4455854/Qfe53r50Aey8Vw7RGd0cM.jpeg) ![Screenshot_20230714_103352_Vivaldi Browser Snapshot.jpg](https://cdn-uploads.huggingface.co/production/uploads/646c83c871d0c8a6e4455854/FQ5EpuiveNxfaQ2YlRzVB.jpeg)
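Since the card itself only links to the upstream Civitai page, here is a hedged sketch of loading the checkpoint with the diffusers `StableDiffusionPipeline` (the repository is tagged for it). The prompt, dtype, and device choices are assumptions to adjust for your hardware.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "digiplay/insaneRealistic_v1",
    torch_dtype=torch.float16,  # use torch.float32 on CPU
)
pipe = pipe.to("cuda")

image = pipe(
    "photo of a woman reading in a sunlit cafe, ultra realistic, detailed skin texture",
    negative_prompt="blurry, lowres, deformed, bad anatomy",
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("insane_realistic_sample.png")
```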
bczhou/tiny-llava-v1-hf
bczhou
"2024-03-25T13:07:38Z"
53,699
45
transformers
[ "transformers", "safetensors", "llava", "pretraining", "vision-language", "llm", "lmm", "en", "zh", "dataset:Lin-Chen/ShareGPT4V", "dataset:liuhaotian/LLaVA-Pretrain", "dataset:liuhaotian/LLaVA-Instruct-150K", "arxiv:2402.14289", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-01-11T14:41:33Z"
--- license: apache-2.0 datasets: - Lin-Chen/ShareGPT4V - liuhaotian/LLaVA-Pretrain - liuhaotian/LLaVA-Instruct-150K language: - en - zh tags: - llava - vision-language - llm - lmm --- <h2 align="center"> <a href="https://arxiv.org/abs/2402.14289">TinyLLaVA: A Framework of Small-scale Large Multimodal Models</a> <h5 align="center"> [![github](https://img.shields.io/badge/GitHub-TinyLLaVA-blue)](https://github.com/DLCV-BUAA/TinyLLaVABench) [![arXiv](https://img.shields.io/badge/Arxiv-2402.14289-b31b1b.svg?logo=arXiv)](https://arxiv.org/abs/2402.14289) [![License](https://img.shields.io/badge/License-Apache%202.0-yellow)](https://github.com/PKU-YuanGroup/MoE-LLaVA/blob/main/LICENSE) ## &#x1F389; News * **[2024.03.10]** base recipe out! * **[2024.03.10]** Finetune scripts out! * **[2024.02.25]** Update evaluation scripts and docs! * **[2024.02.25]** Data descriptions out. Release TinyLLaVA-1.5B and TinyLLaVA-2.0B! * **[2024.02.24]** Example code on inference and model loading added! * **[2024.02.23]** Evaluation code and scripts released! * **[2024.02.21]** Creating the [TinyLLaVABench](https://github.com/DLCV-BUAA/TinyLLavaBench) repository on GitHub! * **[2024.02.21]** Our paper: [TinyLLaVA: A Framework of Small-scale Large Multimodal Models](https://arxiv.org/abs/2402.14289) is out! * **[2024.01.11]** Our fist model [TinyLLaVA-1.4B](https://huggingface.co/bczhou/tiny-llava-v1-hf) is out! ## &#x231B; TODO - [ ] Add support for Ollama and llama.cpp. - [x] Developers' guide / How to build demo locally. - [x] Training and custom finetuning docs. - [x] Model Zoo descriptions. - [x] Examples and inference. - [x] Release code for training. - [x] Add descriptions for evaluation. - [x] Add descriptions for data preparation. - [x] Release TinyLLaVA-1.5B and TinyLLaVA-2.0B. - [x] Release TinyLLaVA-3.1B. - [x] Release the evaluation code and weights today(2024.2.23). ### &#x1F525; High performance, but with fewer parameters - Our best model, TinyLLaVA-3.1B, achieves better overall performance against existing 7B models such as LLaVA-1.5 and Qwen-VL. ## Contents - [Install](#x1f527-requirements-and-installation) - [Model Zoo](#x1f433-model-zoo) - [Demo](#Demo) - [Quick Start](#x1f527-quick-start) - [Run Inference](#x1f527-run-inference) - [Evaluation](#evaluation) - [Data](#data-preparation) - [Train](#train) - [Custom Finetune](#custom-finetune) ## &#x1F527; Requirements and Installation We recommend the requirements as follows. 1. Clone this repository and navigate to LLaVA folder ```bash git clone https://github.com/DLCV-BUAA/TinyLLaVABench.git cd TinyLLaVABench ``` 2. Install Package ```Shell conda create -n tinyllava python=3.10 -y conda activate tinyllava pip install --upgrade pip # enable PEP 660 support pip install -e . ``` 3. Install additional packages for training cases ```Shell pip install -e ".[train]" pip install flash-attn --no-build-isolation ``` ### Upgrade to the latest code base ```Shell git pull pip install -e . 
# if you see some import errors when you upgrade, please try running the command below (without #) # pip install flash-attn --no-build-isolation --no-cache-dir ``` ## &#x1F433; Model Zoo ### Legacy Model - [tiny-llava-hf](https://huggingface.co/bczhou/tiny-llava-v1-hf) ### Pretrained Models - [TinyLLaVA-3.1B](https://huggingface.co/bczhou/TinyLLaVA-3.1B) - [TinyLLaVA-2.0B](https://huggingface.co/bczhou/TinyLLaVA-2.0B) - [TinyLLaVA-1.5B](https://huggingface.co/bczhou/TinyLLaVA-1.5B) ### Model Details | Name | LLM | Checkpoint | LLaVA-Bench-Wild | MME | MMBench | MM-Vet | SQA-image | VQA-v2 | GQA | TextVQA | |---------------|-------------------|------------------------------------------------|------------------|----------|---------|--------|-----------|--------|-------|---------| | TinyLLaVA-3.1B | Phi-2 | [TinyLLaVA-3.1B](https://huggingface.co/bczhou/TinyLLaVA-3.1B) | 75.8 | 1464.9 | 66.9 | 32.0 | 69.1 | 79.9 | 62.0 | 59.1 | | TinyLLaVA-2.0B | StableLM-2-1.6B | [TinyLLaVA-2.0B](https://huggingface.co/bczhou/TinyLLaVA-2.0B) | 66.4 | 1433.8 | 63.3 | 32.6 | 64.7 | 78.9 | 61.9 | 56.4 | | TinyLLaVA-1.5B | TinyLlama | [TinyLLaVA-1.5B](https://huggingface.co/bczhou/TinyLLaVA-1.5B) | 60.8 | 1276.5 | 55.2 | 25.8 | 60.3 | 76.9 | 60.3 | 51.7 | ## Demo ### Gradio Web Demo Launch a local web demo by running: ```shell python tinyllava/serve/app.py --model-path bczhou/TinyLLaVA-3.1B --model-name TinyLLaVA-3.1B ``` ### CLI Inference We also support running inference with CLI. To use our model, run: ```shell python -m tinyllava.serve.cli \ --model-path bczhou/TinyLLaVA-3.1B \ --image-file "./tinyllava/serve/examples/extreme_ironing.jpg" ``` ## &#x1F527; Quick Start <details> <summary>Load model</summary> ```Python from tinyllava.model.builder import load_pretrained_model from tinyllava.mm_utils import get_model_name_from_path from tinyllava.eval.run_tiny_llava import eval_model model_path = "bczhou/TinyLLaVA-3.1B" tokenizer, model, image_processor, context_len = load_pretrained_model( model_path=model_path, model_base=None, model_name=get_model_name_from_path(model_path) ) ``` </details> ## &#x1F527; Run Inference Here's an example of running inference with [TinyLLaVA-3.1B](https://huggingface.co/bczhou/TinyLLaVA-3.1B) <details> <summary>Run Inference</summary> ```Python from tinyllava.model.builder import load_pretrained_model from tinyllava.mm_utils import get_model_name_from_path from tinyllava.eval.run_tiny_llava import eval_model model_path = "bczhou/TinyLLaVA-3.1B" prompt = "What are the things I should be cautious about when I visit here?" image_file = "https://llava-vl.github.io/static/images/view.jpg" args = type('Args', (), { "model_path": model_path, "model_base": None, "model_name": get_model_name_from_path(model_path), "query": prompt, "conv_mode": "phi", "image_file": image_file, "sep": ",", "temperature": 0, "top_p": None, "num_beams": 1, "max_new_tokens": 512 })() eval_model(args) ``` </details> ### Important We use different `conv_mode` for different models. Replace the `conv_mode` in `args` according to this table: | model | conv_mode | |---------------- |----------- | | TinyLLaVA-3.1B | phi | | TinyLLaVA-2.0B | phi | | TinyLLaVA-1.5B | v1 | ## Evaluation To ensure the reproducibility, we evaluate the models with greedy decoding. 
See [Evaluation.md](https://github.com/DLCV-BUAA/TinyLLaVABench/blob/main/docs/Evaluation.md) ## Data Preparation In our paper, we used two different datasets: the [LLaVA dataset](https://github.com/haotian-liu/LLaVA?tab=readme-ov-file#pretrain-feature-alignment) and the [ShareGPT4V dataset](https://github.com/InternLM/InternLM-XComposer/blob/main/projects/ShareGPT4V/docs/Data.md), and compared their differences. In this section, we provide information on data preparation. ### Pretraining Images * LLaVA: The pretraining images of LLaVA is from the 558K subset of the LAION-CC-SBU dataset. * ShareGPT4V: The pretraining images of ShareGPT4V is a mixture of 558K LAION-CC-SBU subset, SAM dataset, and COCO dataset. ### Pretraining Annotations * LLaVA: The pretraining annotations of LLaVA are [here](https://huggingface.co/datasets/liuhaotian/LLaVA-Pretrain). * ShareGPT4V: The pretraining annotations of ShareGPT4V are [here](https://huggingface.co/datasets/Lin-Chen/ShareGPT4V/blob/main/share-captioner_coco_lcs_sam_1246k_1107.json). ### SFT Images & Annotations The majority of the two SFT datasets are the same, with the exception that the 23K detailed description data in LLaVA-1.5-SFT being replaced with detailed captions randomly sampled from the [100K ShareGPT4V data](https://huggingface.co/datasets/Lin-Chen/ShareGPT4V/blob/main/sharegpt4v_instruct_gpt4-vision_cap100k.json). ### Download data 1. Download relevant images - LAION-CC-SBU-558K: [images.zip](https://huggingface.co/datasets/liuhaotian/LLaVA-Pretrain/blob/main/images.zip) - COCO: This dataset is from the [COCO2017 challenge](https://cocodataset.org/). Download: [train2017](http://images.cocodataset.org/zips/train2017.zip) - WebData: This dataset is curated by the [ShareGPT4V project](https://github.com/InternLM/InternLM-XComposer/tree/main/projects/ShareGPT4V). Download: [images](https://drive.google.com/drive/folders/1tCUQ-sq6vdshZVkF0ZeF3K4eztkXJgax?usp=sharing). Only for academic usage. - SAM: This dataset is collected by [Meta](https://ai.meta.com/datasets/segment-anything-downloads/). Download: [images](https://ai.meta.com/datasets/segment-anything-downloads/). We only use 000000~000050.tar for now. If you just want to use ShareGPT4V for SFT, you can quickly download 9K images from [here](https://drive.google.com/file/d/1dKumdOKSXtV7lIXdrG7jsIK_z2vZv2gs/view?usp=drive_link). - GQA: [GQA project page](https://cs.stanford.edu/people/dorarad/gqa/about.html). Download: [images](https://downloads.cs.stanford.edu/nlp/data/gqa/images.zip) - OCR-VQA: [OCR-VQA project page](https://ocr-vqa.github.io/). Download: [download script](https://drive.google.com/drive/folders/1_GYPY5UkUy7HIcR0zq3ZCFgeZN7BAfm_?usp=sharing). We save all files as `.jpg` - TextVQA: [TextVQA project page](https://textvqa.org/). Download: [trainvalimages](https://dl.fbaipublicfiles.com/textvqa/images/train_val_images.zip) - VisualGenome: [VisualGenome project page](https://homes.cs.washington.edu/~ranjay/visualgenome/index.html). Download: [part1](https://cs.stanford.edu/people/rak248/VG_100K_2/images.zip), [part2](https://cs.stanford.edu/people/rak248/VG_100K_2/images2.zip) 2. 
Download relevant annotations - LLaVA's pretraining annotations: [blip_laion_cc_sbu_558k.json](https://huggingface.co/datasets/liuhaotian/LLaVA-Pretrain) - LLaVA's SFT annotations: [llava_v1_5_mix665k.json](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K/blob/main/llava_v1_5_mix665k.json) - ShareGPT4V's pretraining annotations: [share-captioner_coco_lcs_sam_1246k_1107.json](https://huggingface.co/datasets/Lin-Chen/ShareGPT4V/blob/main/share-captioner_coco_lcs_sam_1246k_1107.json) - ShareGPT4V's SFT annotations: [sharegpt4v_mix665k_cap23k_coco-ap9k_lcs3k_sam9k_div2k.json](https://huggingface.co/datasets/Lin-Chen/ShareGPT4V/blob/main/sharegpt4v_mix665k_cap23k_coco-ap9k_lcs3k_sam9k_div2k.json) ### Organize Data Organize the image files and annotation files as follows in `path/to/your/data`: ```none data ├── llava │ ├── llava_pretrain │ │ ├── images │ │ ├── blip_laion_cc_sbu_558k.json ├── coco │ ├── train2017 ├── sam │ ├── images ├── gqa │ ├── images ├── ocr_vqa │ ├── images ├── textvqa │ ├── train_images ├── vg │ ├── VG_100K │ ├── VG_100K_2 ├── share_textvqa │ ├── images ├── web-celebrity │ ├── images ├── web-landmark │ ├── images ├── wikiart │ ├── images ├── text_files │ ├── llava_v1_5_mix665k.json │ ├── share-captioner_coco_lcs_sam_1246k_1107.json │ ├── sharegpt4v_mix665k_cap23k_coco-ap9k_lcs3k_sam9k_div2k.json ``` ## Train **This section we describe the base recipe.** ### Hyperparameters Both hyperparameters used in pretraining and finetuning are provided below. 1. Pretraining | Hyperparameter | Global Batch Size | Learning rate | Epochs | Max length | Weight decay | |----------------| ---: | ---: | ---: |-----------:| ---: | | TinyLLaVA-3.1B | 256 | 1e-3 | 1 | 3072 | 0 | 2. Finetuning | Hyperparameter | Global Batch Size | Learning rate | Epochs | Max length | Weight decay | |----------------| ---: | ---: | ---: |-----------:| ---: | | TinyLLaVA-3.1B | 128 | 2e-5 | 1 | 3072 | 0 | ### Pretrain **Replace paths to your paths** Training script with DeepSpeed ZeRO-2: [`pretrain.sh`](https://github.com/DLCV-BUAA/TinyLLaVABench/blob/main/scripts/tiny_llava/pretrain.sh). ### Finetune **Replace paths to your paths** Training script with DeepSpeed ZeRO-3: [`finetune.sh`](https://github.com/DLCV-BUAA/TinyLLaVABench/blob/main/scripts/tiny_llava/finetune.sh). ## Custom-Finetune Check out our custom finetune using LoRA [here](https://github.com/DLCV-BUAA/TinyLLaVABench/blob/dev/docs/CUTOM_FINETUNE.md). #### - Prompt Template The model supports multi-image and multi-prompt generation. When using the model, make sure to follow the correct prompt template (`USER: <image>xxx\nASSISTANT:`), where `<image>` token is a place-holding special token for image embeddings. ## Model Inference from `pipeline` and `transformers` #### - Using `pipeline`: Below we used [`"bczhou/tiny-llava-v1-hf"`](https://huggingface.co/bczhou/tiny-llava-v1-hf) checkpoint. ```python from transformers import pipeline from PIL import Image import requests model_id = "bczhou/tiny-llava-v1-hf" pipe = pipeline("image-to-text", model=model_id) url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/ai2d-demo.jpg" image = Image.open(requests.get(url, stream=True).raw) prompt = "USER: <image>\nWhat does the label 15 represent? (1) lava (2) core (3) tunnel (4) ash cloud\nASSISTANT:" outputs = pipe(image, prompt=prompt, generate_kwargs={"max_new_tokens": 200}) print(outputs[0]) >>> {"generated_text': 'USER: \nWhat does the label 15 represent? 
(1) lava (2) core (3) tunnel (4) ash cloud\nASSISTANT: The label 15 represents lava, which is a type of volcanic rock."} ``` #### - Using pure `transformers`: Below is an example script to run generation in `float16` precision on a GPU device: ```python import requests from PIL import Image import torch from transformers import AutoProcessor, LlavaForConditionalGeneration model_id = "bczhou/tiny-llava-v1-hf" prompt = "USER: <image>\nWhat are these?\nASSISTANT:" image_file = "http://images.cocodataset.org/val2017/000000039769.jpg" model = LlavaForConditionalGeneration.from_pretrained( model_id, torch_dtype=torch.float16, low_cpu_mem_usage=True, ).to(0) processor = AutoProcessor.from_pretrained(model_id) raw_image = Image.open(requests.get(image_file, stream=True).raw) inputs = processor(prompt, raw_image, return_tensors='pt').to(0, torch.float16) output = model.generate(**inputs, max_new_tokens=200, do_sample=False) print(processor.decode(output[0][2:], skip_special_tokens=True)) ``` ## &#x270F; Citation If you find our paper and code useful in your research, please consider giving a star :star: and citation :pencil:. ```BibTeX @misc{zhou2024tinyllava, title={TinyLLaVA: A Framework of Small-scale Large Multimodal Models}, author={Baichuan Zhou and Ying Hu and Xi Weng and Junlong Jia and Jie Luo and Xien Liu and Ji Wu and Lei Huang}, year={2024}, eprint={2402.14289}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` ## ❤️ Community efforts * Our codebase is built upon the [LLaVA](https://github.com/haotian-liu/LLaVA) project. Great work! * Our project uses data from the [ShareGPT4V](https://github.com/InternLM/InternLM-XComposer/tree/main/projects/ShareGPT4V) project. Great work!
llava-hf/llava-v1.6-vicuna-13b-hf
llava-hf
"2024-06-28T12:20:17Z"
53,565
10
transformers
[ "transformers", "safetensors", "llava_next", "pretraining", "vision", "image-text-to-text", "arxiv:2310.03744", "endpoints_compatible", "region:us" ]
image-text-to-text
"2024-03-17T21:31:50Z"
---
tags:
- vision
- image-text-to-text
---

# LLaVa-Next, leveraging [liuhaotian/llava-v1.6-vicuna-13b](https://huggingface.co/liuhaotian/llava-v1.6-vicuna-13b) as LLM

The LLaVA-NeXT model was proposed in [LLaVA-NeXT: Improved reasoning, OCR, and world knowledge](https://llava-vl.github.io/blog/2024-01-30-llava-next/) by Haotian Liu, Chunyuan Li, Yuheng Li, Bo Li, Yuanhan Zhang, Sheng Shen, Yong Jae Lee. LLaVa-NeXT (also called LLaVa-1.6) improves upon [LLaVa-1.5](https://huggingface.co/transformers/main/model_doc/llava.html) by increasing the input image resolution and training on an improved visual instruction tuning dataset to improve OCR and common sense reasoning.

Disclaimer: The team releasing LLaVa-NeXT did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

LLaVa combines a pre-trained large language model with a pre-trained vision encoder for multimodal chatbot use cases. LLaVA 1.6 improves on LLaVA 1.5 by:
- A more diverse and higher-quality data mixture
- Dynamic high resolution

![image/png](https://cdn-uploads.huggingface.co/production/uploads/62441d1d9fdefb55a0b7d12c/FPshq08TKYD0e-qwPLDVO.png)

## Intended uses & limitations

You can use the raw model for tasks like image captioning, visual question answering, and multimodal chatbot use cases. See the [model hub](https://huggingface.co/models?search=llava-hf) to look for other versions on a task that interests you.

### How to use

Here's the prompt template for this model:

```
"A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions. USER: <image>\nWhat is shown in this image? ASSISTANT:"
```

You can load and use the model as follows:

```python
from transformers import LlavaNextProcessor, LlavaNextForConditionalGeneration
import torch
from PIL import Image
import requests

processor = LlavaNextProcessor.from_pretrained("llava-hf/llava-v1.6-vicuna-13b-hf")

model = LlavaNextForConditionalGeneration.from_pretrained("llava-hf/llava-v1.6-vicuna-13b-hf", torch_dtype=torch.float16, low_cpu_mem_usage=True)
model.to("cuda:0")

# prepare image and text prompt, using the appropriate prompt template
url = "https://github.com/haotian-liu/LLaVA/blob/1a91fc274d7c35a9b50b3cb29c4247ae5837ce39/images/llava_v1_5_radar.jpg?raw=true"
image = Image.open(requests.get(url, stream=True).raw)
prompt = "A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions. USER: <image>\nWhat is shown in this image? ASSISTANT:"

inputs = processor(prompt, image, return_tensors="pt").to("cuda:0")

# autoregressively complete prompt
output = model.generate(**inputs, max_new_tokens=100)

print(processor.decode(output[0], skip_special_tokens=True))
```

### Model optimization

#### 4-bit quantization through `bitsandbytes` library

First make sure to install `bitsandbytes` (`pip install bitsandbytes`) and that you have access to a CUDA-compatible GPU device. Simply change the snippet above with:

```diff
model = LlavaNextForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
+   load_in_4bit=True
)
```

#### Use Flash-Attention 2 to further speed-up generation

First make sure to install `flash-attn`. Refer to the [original repository of Flash Attention](https://github.com/Dao-AILab/flash-attention) regarding that package installation.
Simply change the snippet above with: ```diff model = LlavaNextForConditionalGeneration.from_pretrained( model_id, torch_dtype=torch.float16, low_cpu_mem_usage=True, + use_flash_attention_2=True ).to(0) ``` ### BibTeX entry and citation info ```bibtex @misc{liu2023improved, title={Improved Baselines with Visual Instruction Tuning}, author={Haotian Liu and Chunyuan Li and Yuheng Li and Yong Jae Lee}, year={2023}, eprint={2310.03744}, archivePrefix={arXiv}, primaryClass={cs.CV} } ```
stabilityai/stable-cascade
stabilityai
"2024-03-16T15:06:00Z"
53,558
1,225
diffusers
[ "diffusers", "safetensors", "text-to-image", "license:other", "diffusers:StableCascadeDecoderPipeline", "region:us" ]
text-to-image
"2024-02-06T17:58:47Z"
---
pipeline_tag: text-to-image
license: other
license_name: stable-cascade-nc-community
license_link: LICENSE
prior:
- stabilityai/stable-cascade-prior
---

# Stable Cascade

<!-- Provide a quick summary of what the model is/does. -->
<img src="figures/collage_1.jpg" width="800">

This model is built upon the [Würstchen](https://openreview.net/forum?id=gU58d5QeGv) architecture, and its main difference from other models like Stable Diffusion is that it works in a much smaller latent space. Why is this important? The smaller the latent space, the **faster** you can run inference and the **cheaper** the training becomes.
How small is the latent space? Stable Diffusion uses a compression factor of 8, resulting in a 1024x1024 image being encoded to 128x128. Stable Cascade achieves a compression factor of 42, meaning that it is possible to encode a 1024x1024 image to 24x24, while maintaining crisp reconstructions. The text-conditional model is then trained in the highly compressed latent space. Previous versions of this architecture achieved a 16x cost reduction over Stable Diffusion 1.5. <br> <br>
Therefore, this kind of model is well suited for use cases where efficiency is important. Furthermore, all known extensions like finetuning, LoRA, ControlNet, IP-Adapter, LCM, etc. are possible with this method as well.

## Model Details

### Model Description

Stable Cascade is a diffusion model trained to generate images given a text prompt.

- **Developed by:** Stability AI
- **Funded by:** Stability AI
- **Model type:** Generative text-to-image model

### Model Sources

For research purposes, we recommend our `StableCascade` Github repository (https://github.com/Stability-AI/StableCascade).

- **Repository:** https://github.com/Stability-AI/StableCascade
- **Paper:** https://openreview.net/forum?id=gU58d5QeGv

### Model Overview

Stable Cascade consists of three models: Stage A, Stage B and Stage C, representing a cascade to generate images, hence the name "Stable Cascade". Stage A & B are used to compress images, similar to the role of the VAE in Stable Diffusion. However, with this setup, a much higher compression of images can be achieved. While the Stable Diffusion models use a spatial compression factor of 8, encoding an image with resolution of 1024 x 1024 to 128 x 128, Stable Cascade achieves a compression factor of 42. This encodes a 1024 x 1024 image to 24 x 24, while being able to accurately decode the image. This comes with the great benefit of cheaper training and inference. Furthermore, Stage C is responsible for generating the small 24 x 24 latents given a text prompt. The following picture shows this visually.

<img src="figures/model-overview.jpg" width="600">

For this release, we are providing two checkpoints for Stage C, two for Stage B and one for Stage A. Stage C comes in a 1 billion and a 3.6 billion parameter version, but we highly recommend using the 3.6 billion version, as most work was put into its finetuning. The two versions of Stage B amount to 700 million and 1.5 billion parameters. Both achieve great results; however, the 1.5 billion parameter version excels at reconstructing small and fine details. Therefore, you will achieve the best results if you use the larger variant of each. Lastly, Stage A contains 20 million parameters and is fixed due to its small size.

## Evaluation

<img height="300" src="figures/comparison.png"/>

According to our evaluation, Stable Cascade performs best in both prompt alignment and aesthetic quality in almost all comparisons.
The above picture shows the results from a human evaluation using a mix of parti-prompts (link) and aesthetic prompts. Specifically, Stable Cascade (30 inference steps) was compared against Playground v2 (50 inference steps), SDXL (50 inference steps), SDXL Turbo (1 inference step) and Würstchen v2 (30 inference steps). ## Code Example **Note:** In order to use the `torch.bfloat16` data type with the `StableCascadeDecoderPipeline` you need to have PyTorch 2.2.0 or higher installed. This also means that using the `StableCascadeCombinedPipeline` with `torch.bfloat16` requires PyTorch 2.2.0 or higher, since it calls the StableCascadeDecoderPipeline internally. If it is not possible to install PyTorch 2.2.0 or higher in your environment, the `StableCascadeDecoderPipeline` can be used on its own with the torch.float16 data type. You can download the full precision or bf16 variant weights for the pipeline and cast the weights to torch.float16. ```shell pip install diffusers ``` ```python import torch from diffusers import StableCascadeDecoderPipeline, StableCascadePriorPipeline prompt = "an image of a shiba inu, donning a spacesuit and helmet" negative_prompt = "" prior = StableCascadePriorPipeline.from_pretrained("stabilityai/stable-cascade-prior", variant="bf16", torch_dtype=torch.bfloat16) decoder = StableCascadeDecoderPipeline.from_pretrained("stabilityai/stable-cascade", variant="bf16", torch_dtype=torch.float16) prior.enable_model_cpu_offload() prior_output = prior( prompt=prompt, height=1024, width=1024, negative_prompt=negative_prompt, guidance_scale=4.0, num_images_per_prompt=1, num_inference_steps=20 ) decoder.enable_model_cpu_offload() decoder_output = decoder( image_embeddings=prior_output.image_embeddings.to(torch.float16), prompt=prompt, negative_prompt=negative_prompt, guidance_scale=0.0, output_type="pil", num_inference_steps=10 ).images[0] decoder_output.save("cascade.png") ``` ### Using the Lite Version of the Stage B and Stage C models ```python import torch from diffusers import ( StableCascadeDecoderPipeline, StableCascadePriorPipeline, StableCascadeUNet, ) prompt = "an image of a shiba inu, donning a spacesuit and helmet" negative_prompt = "" prior_unet = StableCascadeUNet.from_pretrained("stabilityai/stable-cascade-prior", subfolder="prior_lite") decoder_unet = StableCascadeUNet.from_pretrained("stabilityai/stable-cascade", subfolder="decoder_lite") prior = StableCascadePriorPipeline.from_pretrained("stabilityai/stable-cascade-prior", prior=prior_unet) decoder = StableCascadeDecoderPipeline.from_pretrained("stabilityai/stable-cascade", decoder=decoder_unet) prior.enable_model_cpu_offload() prior_output = prior( prompt=prompt, height=1024, width=1024, negative_prompt=negative_prompt, guidance_scale=4.0, num_images_per_prompt=1, num_inference_steps=20 ) decoder.enable_model_cpu_offload() decoder_output = decoder( image_embeddings=prior_output.image_embeddings, prompt=prompt, negative_prompt=negative_prompt, guidance_scale=0.0, output_type="pil", num_inference_steps=10 ).images[0] decoder_output.save("cascade.png") ``` ### Loading original checkpoints with `from_single_file` Loading the original format checkpoints is supported via `from_single_file` method in the StableCascadeUNet. 
```python
import torch
from diffusers import (
    StableCascadeDecoderPipeline,
    StableCascadePriorPipeline,
    StableCascadeUNet,
)

prompt = "an image of a shiba inu, donning a spacesuit and helmet"
negative_prompt = ""

prior_unet = StableCascadeUNet.from_single_file(
    "https://huggingface.co/stabilityai/stable-cascade/resolve/main/stage_c_bf16.safetensors",
    torch_dtype=torch.bfloat16
)
decoder_unet = StableCascadeUNet.from_single_file(
    "https://huggingface.co/stabilityai/stable-cascade/blob/main/stage_b_bf16.safetensors",
    torch_dtype=torch.bfloat16
)

prior = StableCascadePriorPipeline.from_pretrained("stabilityai/stable-cascade-prior", prior=prior_unet, torch_dtype=torch.bfloat16)
decoder = StableCascadeDecoderPipeline.from_pretrained("stabilityai/stable-cascade", decoder=decoder_unet, torch_dtype=torch.bfloat16)

prior.enable_model_cpu_offload()
prior_output = prior(
    prompt=prompt,
    height=1024,
    width=1024,
    negative_prompt=negative_prompt,
    guidance_scale=4.0,
    num_images_per_prompt=1,
    num_inference_steps=20
)

decoder.enable_model_cpu_offload()
decoder_output = decoder(
    image_embeddings=prior_output.image_embeddings,
    prompt=prompt,
    negative_prompt=negative_prompt,
    guidance_scale=0.0,
    output_type="pil",
    num_inference_steps=10
).images[0]
decoder_output.save("cascade-single-file.png")
```

### Using the `StableCascadeCombinedPipeline`

```python
import torch
from diffusers import StableCascadeCombinedPipeline

pipe = StableCascadeCombinedPipeline.from_pretrained("stabilityai/stable-cascade", variant="bf16", torch_dtype=torch.bfloat16)

prompt = "an image of a shiba inu, donning a spacesuit and helmet"
pipe(
    prompt=prompt,
    negative_prompt="",
    num_inference_steps=10,
    prior_num_inference_steps=20,
    prior_guidance_scale=3.0,
    width=1024,
    height=1024,
).images[0].save("cascade-combined.png")
```

## Uses

### Direct Use

The model is intended for research purposes for now. Possible research areas and tasks include

- Research on generative models.
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.

Excluded uses are described below.

### Out-of-Scope Use

The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
The model should not be used in any way that violates Stability AI's [Acceptable Use Policy](https://stability.ai/use-policy).

## Limitations and Bias

### Limitations
- Faces and people in general may not be generated properly.
- The autoencoding part of the model is lossy.

### Recommendations

The model is intended for research purposes only.

## How to Get Started with the Model

Check out https://github.com/Stability-AI/StableCascade
artificialguybr/3DRedmond-V1
artificialguybr
"2023-11-08T23:03:44Z"
53,282
27
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "region:us" ]
text-to-image
"2023-11-08T23:03:14Z"
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: 3D Render Style, 3DRenderAF, Portrait of a humanoide fox
  output:
    url: images/00071-781813612.png
- text: 3D Render Style, 3DRenderAF, Portrait of a beautiful woman, stunning
  parameters:
    negative_prompt: bad art, ugly, deformed, watermark, text
  output:
    url: images/00070-1880960891.png
- text: 3D Render Style, 3DRenderAF, Cute panda on top of a table, kitchen, funny
  parameters:
    negative_prompt: bad art, ugly, deformed, watermark, text
  output:
    url: images/00064-2498532539.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: 3D Render Style, 3DRenderAF
---

# 3DRedmond

<Gallery />

## Model description

3D.Redmond is here!

Introducing 3D.Redmond, the ultimate LORA for creating stunning 3D Render Style Images!

I'm grateful for the GPU time from Redmond.AI that allowed me to make this LORA! If you need GPU, then you need the great services from Redmond.AI.

Test all my Loras here for free and unlimited. Thanks, HF, for the Inference API!

It is based on SD XL 1.0 and fine-tuned on a large dataset.

The LORA has a high capacity to generate 3D Render Style Images.

The tags for the model: 3D Render Style, 3DRenderAF

I really hope you like the LORA and use it.

If you like the model and think it's worth it, you can make a donation to my Patreon or Ko-fi.

Patreon: https://www.patreon.com/user?u=81570187

Ko-fi: https://ko-fi.com/artificialguybr

BuyMeACoffee: https://www.buymeacoffee.com/jvkape

Follow me on Twitter to be the first to know about new models: https://twitter.com/artificialguybr/

## Trigger words

You should use `3D Render Style` to trigger the image generation.

You should use `3DRenderAF` to trigger the image generation.

## Download model

Weights for this model are available in Safetensors format.

[Download](/artificialguybr/3DRedmond-V1/tree/main) them in the Files & versions tab.
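The card does not include a loading snippet, so here is a minimal, unofficial sketch of the usual way a LoRA like this is applied on top of SDXL with `diffusers`. The pipeline class and trigger words come from the card; the automatic weight-file detection is an assumption, and you may need to pass `weight_name=` with the exact `.safetensors` filename from the Files & versions tab.

```python
# Unofficial sketch: apply this LoRA on top of SDXL with diffusers.
# Assumes a CUDA GPU; if load_lora_weights cannot auto-detect the weight file,
# pass weight_name="<the .safetensors file listed in the Files & versions tab>".
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

pipe.load_lora_weights("artificialguybr/3DRedmond-V1")

# Use the trigger words from the card to activate the 3D render style.
prompt = "3D Render Style, 3DRenderAF, portrait of a humanoid fox"
image = pipe(prompt, negative_prompt="bad art, ugly, deformed, watermark, text").images[0]
image.save("3dredmond_fox.png")
```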
Dongjin-kr/ko-reranker
Dongjin-kr
"2024-05-08T03:50:35Z"
53,246
34
transformers
[ "transformers", "pytorch", "safetensors", "xlm-roberta", "text-classification", "ko", "en", "arxiv:2307.03172", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2023-12-22T07:50:36Z"
---
license: mit
language:
- ko
- en
pipeline_tag: text-classification
---

# Korean Reranker Training on Amazon SageMaker

### This is a fine-tuning guide for building a **Korean reranker**.

ko-reranker is a model fine-tuned on Korean data on top of [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large). <br>
For more details, see [korean-reranker-git](https://github.com/aws-samples/aws-ai-ml-workshop-kr/tree/master/genai/aws-gen-ai-kr/30_fine_tune/reranker-kr) and the [AWS Blog post "Improving retrieval-augmented generation (RAG) performance with a Korean reranker"](https://aws.amazon.com/ko/blogs/tech/korean-reranker-rag/).

- - -

## 0. Features

- #### <span style="#FF69B4;"> Unlike an embedding model, a reranker takes the question and a document as input and directly outputs a similarity score instead of an embedding.</span>
- #### <span style="#FF69B4;"> Feeding a question and a passage to the reranker gives you a relevance score.</span>
- #### <span style="#FF69B4;"> Because the reranker is optimized with a cross-entropy loss, the relevance score is not restricted to a particular range.</span>

## 1. Usage

- using Transformers

```
import numpy as np
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

def exp_normalize(x):
    b = x.max()
    y = np.exp(x - b)
    return y / y.sum()

model_path = "Dongjin-kr/ko-reranker"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForSequenceClassification.from_pretrained(model_path)
model.eval()

pairs = [["나는 너를 싫어해", "나는 너를 사랑해"],
         ["나는 너를 좋아해", "너에 대한 나의 감정은 사랑 일 수도 있어"]]

with torch.no_grad():
    inputs = tokenizer(pairs, padding=True, truncation=True, return_tensors='pt', max_length=512)
    scores = model(**inputs, return_dict=True).logits.view(-1, ).float()
    scores = exp_normalize(scores.numpy())
    print (f'first: {scores[0]}, second: {scores[1]}')
```

- using SageMaker

```
import json
import sagemaker
import boto3
from sagemaker.huggingface import HuggingFaceModel

try:
    role = sagemaker.get_execution_role()
except ValueError:
    iam = boto3.client('iam')
    role = iam.get_role(RoleName='sagemaker_execution_role')['Role']['Arn']

# Hub Model configuration. https://huggingface.co/models
hub = {
    'HF_MODEL_ID':'Dongjin-kr/ko-reranker',
    'HF_TASK':'text-classification'
}

# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
    transformers_version='4.28.1',
    pytorch_version='2.0.0',
    py_version='py310',
    env=hub,
    role=role,
)

# deploy model to SageMaker Inference
predictor = huggingface_model.deploy(
    initial_instance_count=1, # number of instances
    instance_type='ml.g5.large' # ec2 instance type
)

runtime_client = boto3.Session().client('sagemaker-runtime')
payload = json.dumps(
    {
        "inputs": [
            {"text": "나는 너를 싫어해", "text_pair": "나는 너를 사랑해"},
            {"text": "나는 너를 좋아해", "text_pair": "너에 대한 나의 감정은 사랑 일 수도 있어"}
        ]
    }
)

response = runtime_client.invoke_endpoint(
    EndpointName="<endpoint-name>",
    ContentType="application/json",
    Accept="application/json",
    Body=payload
)

## deserialization
out = json.loads(response['Body'].read().decode()) ## for json
print (f'Response: {out}')
```

## 2. Background

- #### <span style="#FF69B4;"> **Context order affects accuracy** ([Lost in the Middle, *Liu et al., 2023*](https://arxiv.org/pdf/2307.03172.pdf)) </span>
- #### <span style="#FF69B4;"> [Why you should use a reranker](https://www.pinecone.io/learn/series/rag/rerankers/)</span>
    - For current LLMs, putting more context in the prompt is not automatically better; the relevant passages have to be near the top for the model to answer well.
    - The similarity (relevance) score used in semantic search is not that precise. (In other words, is a higher-ranked passage really always more relevant to the question than a lower-ranked one?)
        * Embeddings are specialized for capturing the meaning behind a document.
        * A question and its answer are not semantically identical. ([Hypothetical Document Embeddings](https://medium.com/prompt-engineering/hyde-revolutionising-search-with-hypothetical-document-embeddings-3474df795af8))
        * There is a penalty from using ANNs ([Approximate Nearest Neighbors](https://towardsdatascience.com/comprehensive-guide-to-approximate-nearest-neighbors-algorithms-8b94f057d6b6)).

- - -

## 3. Reranker models

- #### <span style="#FF69B4;"> [Cohere] [Reranker](https://txt.cohere.com/rerank/)</span>
- #### <span style="#FF69B4;"> [BAAI] [bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large)</span>
- #### <span style="#FF69B4;"> [BAAI] [bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base)</span>

- - -

## 4. Dataset

- #### <span style="#FF69B4;"> [msmarco-triplets](https://github.com/microsoft/MSMARCO-Passage-Ranking) </span>
    - (Question, Answer, Negative)-Triplets from MS MARCO Passages dataset, 499,184 samples
    - The original dataset is in English.
    - It was translated with Amazon Translate and then used for training.

- #### <span style="#FF69B4;"> Format </span>
  ```
  {"query": str, "pos": List[str], "neg": List[str]}
  ```
    - `query` is the question, `pos` is a list of positive texts, and `neg` is a list of negative texts. If a query has no negative texts, you can randomly sample passages from the whole corpus and use them as negatives.

- #### <span style="#FF69B4;"> Example </span>
  ```
  {"query": "대한민국의 수도는?", "pos": ["미국의 수도는 워싱턴이고, 일본은 도쿄이며 한국은 서울이다."], "neg": ["미국의 수도는 워싱턴이고, 일본은 도쿄이며 북한은 평양이다."]}
  ```

- - -

## 5. Performance

| Model | has-right-in-contexts | mrr (mean reciprocal rank) |
|:---------------------------|:-----------------:|:--------------------------:|
| without-reranker (default) | 0.93 | 0.80 |
| with-reranker (bge-reranker-large) | 0.95 | 0.84 |
| **with-reranker (fine-tuned using korean)** | **0.96** | **0.87** |

- **evaluation set**:
  ```code
  ./dataset/evaluation/eval_dataset.csv
  ```

- **training parameters**:

  ```json
  {
      "learning_rate": 5e-6,
      "fp16": True,
      "num_train_epochs": 3,
      "per_device_train_batch_size": 1,
      "gradient_accumulation_steps": 32,
      "train_group_size": 3,
      "max_len": 512,
      "weight_decay": 0.01,
  }
  ```

- - -

## 6. Acknowledgement

- <span style="#FF69B4;"> Part of the code is developed based on [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding/tree/master?tab=readme-ov-file) and [KoSimCSE-SageMaker](https://github.com/daekeun-ml/KoSimCSE-SageMaker/tree/7de6eefef8f1a646c664d0888319d17480a3ebe5).</span>

- - -

## 7. Citation

- <span style="#FF69B4;"> If you find this repository useful, please consider giving a like ⭐ and citation</span>

- - -

## 8. Contributors:

- <span style="#FF69B4;"> **Dongjin Jang, Ph.D.** (AWS AI/ML Specialist Solutions Architect) | [Mail](mailto:dongjinj@amazon.com) | [Linkedin](https://www.linkedin.com/in/dongjin-jang-kr/) | [Git](https://github.com/dongjin-ml) | </span>

- - -

## 9. License

- <span style="#FF69B4;"> FlagEmbedding is licensed under the [MIT License](https://github.com/aws-samples/aws-ai-ml-workshop-kr/blob/master/LICENSE). </span>

## 10. Analytics

- [![Hits](https://hits.seeyoufarm.com/api/count/incr/badge.svg?url=https%3A%2F%2Fhuggingface.co%2FDongjin-kr%2Fko-reranker&count_bg=%2379C83D&title_bg=%23555555&icon=&icon_color=%23E7E7E7&title=hits&edge_flat=false)](https://hits.seeyoufarm.com)
bartowski/Codestral-22B-v0.1-GGUF
bartowski
"2024-06-05T17:00:40Z"
53,144
138
null
[ "gguf", "code", "text-generation", "base_model:mistralai/Codestral-22B-v0.1", "license:other", "region:us" ]
text-generation
"2024-05-29T17:52:02Z"
--- inference: false license: other license_name: mnpl license_link: https://mistral.ai/licences/MNPL-0.1.md tags: - code language: - code quantized_by: bartowski pipeline_tag: text-generation base_model: mistralai/Codestral-22B-v0.1 --- ## Llamacpp imatrix Quantizations of Codestral-22B-v0.1 Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3024">b3024</a> for quantization. Original model: https://huggingface.co/mistralai/Codestral-22B-v0.1 All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8) ## Prompt format ``` <s> [INST] <<SYS>> {system_prompt} <</SYS>> {prompt} [/INST] </s> ``` ## Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Description | | -------- | ---------- | --------- | ----------- | | [Codestral-22B-v0.1-Q8_0.gguf](https://huggingface.co/bartowski/Codestral-22B-v0.1-GGUF/blob/main/Codestral-22B-v0.1-Q8_0.gguf) | Q8_0 | 23.64GB | Extremely high quality, generally unneeded but max available quant. | | [Codestral-22B-v0.1-Q6_K.gguf](https://huggingface.co/bartowski/Codestral-22B-v0.1-GGUF/blob/main/Codestral-22B-v0.1-Q6_K.gguf) | Q6_K | 18.25GB | Very high quality, near perfect, *recommended*. | | [Codestral-22B-v0.1-Q5_K_M.gguf](https://huggingface.co/bartowski/Codestral-22B-v0.1-GGUF/blob/main/Codestral-22B-v0.1-Q5_K_M.gguf) | Q5_K_M | 15.72GB | High quality, *recommended*. | | [Codestral-22B-v0.1-Q5_K_S.gguf](https://huggingface.co/bartowski/Codestral-22B-v0.1-GGUF/blob/main/Codestral-22B-v0.1-Q5_K_S.gguf) | Q5_K_S | 15.32GB | High quality, *recommended*. | | [Codestral-22B-v0.1-Q4_K_M.gguf](https://huggingface.co/bartowski/Codestral-22B-v0.1-GGUF/blob/main/Codestral-22B-v0.1-Q4_K_M.gguf) | Q4_K_M | 13.34GB | Good quality, uses about 4.83 bits per weight, *recommended*. | | [Codestral-22B-v0.1-Q4_K_S.gguf](https://huggingface.co/bartowski/Codestral-22B-v0.1-GGUF/blob/main/Codestral-22B-v0.1-Q4_K_S.gguf) | Q4_K_S | 12.66GB | Slightly lower quality with more space savings, *recommended*. | | [Codestral-22B-v0.1-IQ4_XS.gguf](https://huggingface.co/bartowski/Codestral-22B-v0.1-GGUF/blob/main/Codestral-22B-v0.1-IQ4_XS.gguf) | IQ4_XS | 11.93GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. | | [Codestral-22B-v0.1-Q3_K_L.gguf](https://huggingface.co/bartowski/Codestral-22B-v0.1-GGUF/blob/main/Codestral-22B-v0.1-Q3_K_L.gguf) | Q3_K_L | 11.73GB | Lower quality but usable, good for low RAM availability. | | [Codestral-22B-v0.1-Q3_K_M.gguf](https://huggingface.co/bartowski/Codestral-22B-v0.1-GGUF/blob/main/Codestral-22B-v0.1-Q3_K_M.gguf) | Q3_K_M | 10.75GB | Even lower quality. | | [Codestral-22B-v0.1-IQ3_M.gguf](https://huggingface.co/bartowski/Codestral-22B-v0.1-GGUF/blob/main/Codestral-22B-v0.1-IQ3_M.gguf) | IQ3_M | 10.06GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. | | [Codestral-22B-v0.1-Q3_K_S.gguf](https://huggingface.co/bartowski/Codestral-22B-v0.1-GGUF/blob/main/Codestral-22B-v0.1-Q3_K_S.gguf) | Q3_K_S | 9.64GB | Low quality, not recommended. | | [Codestral-22B-v0.1-IQ3_XS.gguf](https://huggingface.co/bartowski/Codestral-22B-v0.1-GGUF/blob/main/Codestral-22B-v0.1-IQ3_XS.gguf) | IQ3_XS | 9.17GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. 
| | [Codestral-22B-v0.1-IQ3_XXS.gguf](https://huggingface.co/bartowski/Codestral-22B-v0.1-GGUF/blob/main/Codestral-22B-v0.1-IQ3_XXS.gguf) | IQ3_XXS | 8.59GB | Lower quality, new method with decent performance, comparable to Q3 quants. | | [Codestral-22B-v0.1-Q2_K.gguf](https://huggingface.co/bartowski/Codestral-22B-v0.1-GGUF/blob/main/Codestral-22B-v0.1-Q2_K.gguf) | Q2_K | 8.27GB | Very low quality but surprisingly usable. | | [Codestral-22B-v0.1-IQ2_M.gguf](https://huggingface.co/bartowski/Codestral-22B-v0.1-GGUF/blob/main/Codestral-22B-v0.1-IQ2_M.gguf) | IQ2_M | 7.61GB | Very low quality, uses SOTA techniques to also be surprisingly usable. | | [Codestral-22B-v0.1-IQ2_S.gguf](https://huggingface.co/bartowski/Codestral-22B-v0.1-GGUF/blob/main/Codestral-22B-v0.1-IQ2_S.gguf) | IQ2_S | 7.03GB | Very low quality, uses SOTA techniques to be usable. | | [Codestral-22B-v0.1-IQ2_XS.gguf](https://huggingface.co/bartowski/Codestral-22B-v0.1-GGUF/blob/main/Codestral-22B-v0.1-IQ2_XS.gguf) | IQ2_XS | 6.64GB | Very low quality, uses SOTA techniques to be usable. | ## Downloading using huggingface-cli First, make sure you have hugginface-cli installed: ``` pip install -U "huggingface_hub[cli]" ``` Then, you can target the specific file you want: ``` huggingface-cli download bartowski/Codestral-22B-v0.1-GGUF --include "Codestral-22B-v0.1-Q4_K_M.gguf" --local-dir ./ ``` If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run: ``` huggingface-cli download bartowski/Codestral-22B-v0.1-GGUF --include "Codestral-22B-v0.1-Q8_0.gguf/*" --local-dir Codestral-22B-v0.1-Q8_0 ``` You can either specify a new local-dir (Codestral-22B-v0.1-Q8_0) or download them all in place (./) ## Which file should I choose? A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have. If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM. If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB Smaller than that total. Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'. If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M. If you want to get more into the weeds, you can check out this extremely useful feature chart: [llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix) But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size. These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide. The I-quants are *not* compatible with Vulcan, which is also AMD, so if you have an AMD card double check if you're using the rocBLAS build or the Vulcan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm. Want to support my work? 
Visit my ko-fi page here: https://ko-fi.com/bartowski
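Not part of the original card: once one of the quants above has been downloaded, a minimal way to load it from Python is through the `llama-cpp-python` bindings. The local filename, context size, and GPU-offload value below are assumptions to adapt to your setup, and the prompt uses a simplified variant of the format shown above (system block omitted).

```python
# Minimal sketch, assuming llama-cpp-python is installed and the Q4_K_M file
# was downloaded into the current directory with huggingface-cli as shown above.
from llama_cpp import Llama

llm = Llama(
    model_path="Codestral-22B-v0.1-Q4_K_M.gguf",  # local GGUF file (assumption)
    n_ctx=4096,        # context window to allocate
    n_gpu_layers=-1,   # offload all layers to GPU; set to 0 for CPU-only
)

prompt = "<s> [INST] Write a Python function that checks whether a number is prime. [/INST]"
out = llm(prompt, max_tokens=256, temperature=0.2)
print(out["choices"][0]["text"])
```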
lpiccinelli/unidepth-v2old-vitl14
lpiccinelli
"2024-05-04T14:10:33Z"
52,790
0
UniDepth
[ "UniDepth", "pytorch", "safetensors", "monocular-metric-depth-estimation", "pytorch_model_hub_mixin", "model_hub_mixin", "region:us" ]
null
"2024-05-01T17:06:43Z"
--- library_name: UniDepth tags: - monocular-metric-depth-estimation - pytorch_model_hub_mixin - model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Library: https://github.com/lpiccinelli-eth/UniDepth - Docs: [More Information Needed]
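Since the card itself gives no usage example, here is a rough, unofficial sketch of loading this checkpoint through the mixin's `from_pretrained` interface. The import path, class name, and `infer()` call are assumptions taken from the linked UniDepth repository and may not match this exact `v2old` checkpoint.

```python
# Unofficial sketch: the import path, class name, and infer() interface are assumptions
# based on the linked UniDepth repository and may differ for this checkpoint.
import numpy as np
import torch
from unidepth.models import UniDepthV2  # assumed class for the v2 family

model = UniDepthV2.from_pretrained("lpiccinelli/unidepth-v2old-vitl14")
model = model.to("cuda" if torch.cuda.is_available() else "cpu").eval()

# RGB image as a (3, H, W) uint8 tensor (random placeholder here).
rgb = torch.from_numpy(np.random.randint(0, 255, (3, 480, 640), dtype=np.uint8))

with torch.no_grad():
    predictions = model.infer(rgb)  # assumed API from the upstream README
print(predictions["depth"].shape)   # metric depth map
```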
mradermacher/Qwen2-57B-A14B-Instruct-GGUF
mradermacher
"2024-06-23T12:30:56Z"
52,666
0
transformers
[ "transformers", "gguf", "chat", "en", "base_model:Qwen/Qwen2-57B-A14B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-23T01:43:45Z"
--- base_model: Qwen/Qwen2-57B-A14B-Instruct language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - chat --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Qwen/Qwen2-57B-A14B-Instruct **The Qwen2-57B models seem to be broken. I have tried my best, but they likely need to be fixed upstream first. You have been warned.** <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen2-57B-A14B-Instruct-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Qwen2-57B-A14B-Instruct-GGUF/resolve/main/Qwen2-57B-A14B-Instruct.Q2_K.gguf) | Q2_K | 21.2 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2-57B-A14B-Instruct-GGUF/resolve/main/Qwen2-57B-A14B-Instruct.IQ3_XS.gguf) | IQ3_XS | 23.7 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2-57B-A14B-Instruct-GGUF/resolve/main/Qwen2-57B-A14B-Instruct.Q3_K_S.gguf) | Q3_K_S | 25.0 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2-57B-A14B-Instruct-GGUF/resolve/main/Qwen2-57B-A14B-Instruct.IQ3_S.gguf) | IQ3_S | 25.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Qwen2-57B-A14B-Instruct-GGUF/resolve/main/Qwen2-57B-A14B-Instruct.IQ3_M.gguf) | IQ3_M | 25.3 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2-57B-A14B-Instruct-GGUF/resolve/main/Qwen2-57B-A14B-Instruct.Q3_K_M.gguf) | Q3_K_M | 27.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2-57B-A14B-Instruct-GGUF/resolve/main/Qwen2-57B-A14B-Instruct.Q3_K_L.gguf) | Q3_K_L | 29.9 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2-57B-A14B-Instruct-GGUF/resolve/main/Qwen2-57B-A14B-Instruct.IQ4_XS.gguf) | IQ4_XS | 31.1 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2-57B-A14B-Instruct-GGUF/resolve/main/Qwen2-57B-A14B-Instruct.Q4_K_S.gguf) | Q4_K_S | 32.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen2-57B-A14B-Instruct-GGUF/resolve/main/Qwen2-57B-A14B-Instruct.Q4_K_M.gguf) | Q4_K_M | 35.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen2-57B-A14B-Instruct-GGUF/resolve/main/Qwen2-57B-A14B-Instruct.Q5_K_S.gguf) | Q5_K_S | 39.7 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2-57B-A14B-Instruct-GGUF/resolve/main/Qwen2-57B-A14B-Instruct.Q5_K_M.gguf) | Q5_K_M | 40.9 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2-57B-A14B-Instruct-GGUF/resolve/main/Qwen2-57B-A14B-Instruct.Q6_K.gguf) | Q6_K | 47.2 | very good quality | | [PART 1](https://huggingface.co/mradermacher/Qwen2-57B-A14B-Instruct-GGUF/resolve/main/Qwen2-57B-A14B-Instruct.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Qwen2-57B-A14B-Instruct-GGUF/resolve/main/Qwen2-57B-A14B-Instruct.Q8_0.gguf.part2of2) | Q8_0 | 61.1 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: 
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
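To make the multi-part note in the Usage section concrete: the `.partXofY` files are plain byte-level splits, and the linked READMEs describe joining them by simple concatenation. A minimal sketch, assuming both Q8_0 parts have already been downloaded into the current directory:

```python
# Rejoin the split Q8_0 quant into a single GGUF file by simple byte concatenation.
parts = [
    "Qwen2-57B-A14B-Instruct.Q8_0.gguf.part1of2",
    "Qwen2-57B-A14B-Instruct.Q8_0.gguf.part2of2",
]

with open("Qwen2-57B-A14B-Instruct.Q8_0.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as f:
            while chunk := f.read(16 * 1024 * 1024):  # stream in 16 MiB chunks
                out.write(chunk)
print("Wrote Qwen2-57B-A14B-Instruct.Q8_0.gguf")
```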
microsoft/BiomedVLP-CXR-BERT-specialized
microsoft
"2022-07-11T14:52:06Z"
52,644
18
transformers
[ "transformers", "pytorch", "cxr-bert", "feature-extraction", "exbert", "fill-mask", "custom_code", "en", "arxiv:2204.09817", "arxiv:2103.00020", "arxiv:2002.05709", "license:mit", "region:us" ]
fill-mask
"2022-05-11T17:20:52Z"
---
language: en
tags:
- exbert
license: mit
pipeline_tag: fill-mask
widget:
- text: "Left pleural effusion with adjacent [MASK]."
  example_title: "Radiology 1"
- text: "Heart size normal and lungs are [MASK]."
  example_title: "Radiology 2"
inference: false
---

# CXR-BERT-specialized

[CXR-BERT](https://arxiv.org/abs/2204.09817) is a chest X-ray (CXR) domain-specific language model that makes use of an improved vocabulary, novel pretraining procedure, weight regularization, and text augmentations. The resulting model demonstrates improved performance on radiology natural language inference, radiology masked language model token prediction, and downstream vision-language processing tasks such as zero-shot phrase grounding and image classification.

First, we pretrain [**CXR-BERT-general**](https://huggingface.co/microsoft/BiomedVLP-CXR-BERT-general) from a randomly initialized BERT model via Masked Language Modeling (MLM) on abstracts from [PubMed](https://pubmed.ncbi.nlm.nih.gov/) and clinical notes from the publicly-available [MIMIC-III](https://physionet.org/content/mimiciii/1.4/) and [MIMIC-CXR](https://physionet.org/content/mimic-cxr/). In that regard, the general model is expected to be applicable for research in clinical domains other than chest radiology through domain-specific fine-tuning.

**CXR-BERT-specialized** is continually pretrained from CXR-BERT-general to further specialize in the chest X-ray domain. At the final stage, CXR-BERT is trained in a multi-modal contrastive learning framework, similar to the [CLIP](https://arxiv.org/abs/2103.00020) framework. The latent representation of the [CLS] token is utilized to align text/image embeddings.

## Model variations

| Model | Model identifier on HuggingFace | Vocabulary | Note |
| ----- | ------------------------------- | ---------- | ---- |
| CXR-BERT-general | [microsoft/BiomedVLP-CXR-BERT-general](https://huggingface.co/microsoft/BiomedVLP-CXR-BERT-general) | PubMed & MIMIC | Pretrained for biomedical literature and clinical domains |
| CXR-BERT-specialized (after multi-modal training) | [microsoft/BiomedVLP-CXR-BERT-specialized](https://huggingface.co/microsoft/BiomedVLP-CXR-BERT-specialized) | PubMed & MIMIC | Pretrained for chest X-ray domain |

## Image model

**CXR-BERT-specialized** is jointly trained with a ResNet-50 image model in a multi-modal contrastive learning framework. Prior to multi-modal learning, the image model is pre-trained on the same set of images in MIMIC-CXR using [SimCLR](https://arxiv.org/abs/2002.05709). The corresponding model definition and its loading functions can be accessed through our [HI-ML-Multimodal](https://github.com/microsoft/hi-ml/blob/main/hi-ml-multimodal/src/health_multimodal/image/model/model.py) GitHub repository.

The joint image and text model, namely [BioViL](https://arxiv.org/abs/2204.09817), can be used in phrase grounding applications as shown in this Python notebook [example](https://mybinder.org/v2/gh/microsoft/hi-ml/HEAD?labpath=hi-ml-multimodal%2Fnotebooks%2Fphrase_grounding.ipynb). Additionally, please check the [MS-CXR benchmark](https://physionet.org/content/ms-cxr/0.1/) for a more systematic evaluation of joint image and text models in phrase grounding tasks.
## Citation The corresponding manuscript is accepted to be presented at the [**European Conference on Computer Vision (ECCV) 2022**](https://eccv2022.ecva.net/) ```bibtex @misc{https://doi.org/10.48550/arxiv.2204.09817, doi = {10.48550/ARXIV.2204.09817}, url = {https://arxiv.org/abs/2204.09817}, author = {Boecking, Benedikt and Usuyama, Naoto and Bannur, Shruthi and Castro, Daniel C. and Schwaighofer, Anton and Hyland, Stephanie and Wetscherek, Maria and Naumann, Tristan and Nori, Aditya and Alvarez-Valle, Javier and Poon, Hoifung and Oktay, Ozan}, title = {Making the Most of Text Semantics to Improve Biomedical Vision-Language Processing}, publisher = {arXiv}, year = {2022}, } ``` ## Model Use ### Intended Use This model is intended to be used solely for (I) future research on visual-language processing and (II) reproducibility of the experimental results reported in the reference paper. #### Primary Intended Use The primary intended use is to support AI researchers building on top of this work. CXR-BERT and its associated models should be helpful for exploring various clinical NLP & VLP research questions, especially in the radiology domain. #### Out-of-Scope Use **Any** deployed use case of the model --- commercial or otherwise --- is currently out of scope. Although we evaluated the models using a broad set of publicly-available research benchmarks, the models and evaluations are not intended for deployed use cases. Please refer to [the associated paper](https://arxiv.org/abs/2204.09817) for more details. ### How to use Here is how to use this model to extract radiological sentence embeddings and obtain their cosine similarity in the joint space (image and text): ```python import torch from transformers import AutoModel, AutoTokenizer # Load the model and tokenizer url = "microsoft/BiomedVLP-CXR-BERT-specialized" tokenizer = AutoTokenizer.from_pretrained(url, trust_remote_code=True) model = AutoModel.from_pretrained(url, trust_remote_code=True) # Input text prompts (e.g., reference, synonym, contradiction) text_prompts = ["There is no pneumothorax or pleural effusion", "No pleural effusion or pneumothorax is seen", "The extent of the pleural effusion is constant."] # Tokenize and compute the sentence embeddings tokenizer_output = tokenizer.batch_encode_plus(batch_text_or_text_pairs=text_prompts, add_special_tokens=True, padding='longest', return_tensors='pt') embeddings = model.get_projected_text_embeddings(input_ids=tokenizer_output.input_ids, attention_mask=tokenizer_output.attention_mask) # Compute the cosine similarity of sentence embeddings obtained from input text prompts. sim = torch.mm(embeddings, embeddings.t()) ``` ## Data This model builds upon existing publicly-available datasets: - [PubMed](https://pubmed.ncbi.nlm.nih.gov/) - [MIMIC-III](https://physionet.org/content/mimiciii/) - [MIMIC-CXR](https://physionet.org/content/mimic-cxr/) These datasets reflect a broad variety of sources ranging from biomedical abstracts to intensive care unit notes to chest X-ray radiology notes. The radiology notes are accompanied with their associated chest x-ray DICOM images in MIMIC-CXR dataset. ## Performance We demonstrate that this language model achieves state-of-the-art results in radiology natural language inference through its improved vocabulary and novel language pretraining objective leveraging semantics and discourse characteristics in radiology reports. 
A highlight of comparison to other common models, including [ClinicalBERT](https://aka.ms/clinicalbert) and [PubMedBERT](https://aka.ms/pubmedbert): | | RadNLI accuracy (MedNLI transfer) | Mask prediction accuracy | Avg. # tokens after tokenization | Vocabulary size | | ----------------------------------------------- | :-------------------------------: | :----------------------: | :------------------------------: | :-------------: | | RadNLI baseline | 53.30 | - | - | - | | ClinicalBERT | 47.67 | 39.84 | 78.98 (+38.15%) | 28,996 | | PubMedBERT | 57.71 | 35.24 | 63.55 (+11.16%) | 28,895 | | CXR-BERT (after Phase-III) | 60.46 | 77.72 | 58.07 (+1.59%) | 30,522 | | **CXR-BERT (after Phase-III + Joint Training)** | **65.21** | **81.58** | **58.07 (+1.59%)** | 30,522 | CXR-BERT also contributes to better vision-language representation learning through its improved text encoding capability. Below is the zero-shot phrase grounding performance on the **MS-CXR** dataset, which evaluates the quality of image-text latent representations. | Vision–Language Pretraining Method | Text Encoder | MS-CXR Phrase Grounding (Avg. CNR Score) | | ---------------------------------- | ------------ | :--------------------------------------: | | Baseline | ClinicalBERT | 0.769 | | Baseline | PubMedBERT | 0.773 | | ConVIRT | ClinicalBERT | 0.818 | | GLoRIA | ClinicalBERT | 0.930 | | **BioViL** | **CXR-BERT** | **1.027** | | **BioViL-L** | **CXR-BERT** | **1.142** | Additional details about performance can be found in the corresponding paper, [Making the Most of Text Semantics to Improve Biomedical Vision-Language Processing](https://arxiv.org/abs/2204.09817). ## Limitations This model was developed using English corpora, and thus can be considered English-only. ## Further information Please refer to the corresponding paper, ["Making the Most of Text Semantics to Improve Biomedical Vision-Language Processing", ECCV'22](https://arxiv.org/abs/2204.09817) for additional details on the model training and evaluation. For additional inference pipelines with CXR-BERT, please refer to the [HI-ML-Multimodal GitHub](https://aka.ms/biovil-code) repository.
mradermacher/blossom-v5-32b-GGUF
mradermacher
"2024-07-02T18:41:34Z"
52,578
0
transformers
[ "transformers", "gguf", "zh", "en", "dataset:Azure99/blossom-chat-v3", "dataset:Azure99/blossom-math-v4", "dataset:Azure99/blossom-wizard-v3", "dataset:Azure99/blossom-orca-v3", "base_model:Azure99/blossom-v5-32b", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-07-01T18:19:14Z"
--- base_model: Azure99/blossom-v5-32b datasets: - Azure99/blossom-chat-v3 - Azure99/blossom-math-v4 - Azure99/blossom-wizard-v3 - Azure99/blossom-orca-v3 language: - zh - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Azure99/blossom-v5-32b <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/blossom-v5-32b-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/blossom-v5-32b-GGUF/resolve/main/blossom-v5-32b.Q2_K.gguf) | Q2_K | 12.3 | | | [GGUF](https://huggingface.co/mradermacher/blossom-v5-32b-GGUF/resolve/main/blossom-v5-32b.IQ3_XS.gguf) | IQ3_XS | 13.7 | | | [GGUF](https://huggingface.co/mradermacher/blossom-v5-32b-GGUF/resolve/main/blossom-v5-32b.Q3_K_S.gguf) | Q3_K_S | 14.4 | | | [GGUF](https://huggingface.co/mradermacher/blossom-v5-32b-GGUF/resolve/main/blossom-v5-32b.IQ3_S.gguf) | IQ3_S | 14.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/blossom-v5-32b-GGUF/resolve/main/blossom-v5-32b.IQ3_M.gguf) | IQ3_M | 14.8 | | | [GGUF](https://huggingface.co/mradermacher/blossom-v5-32b-GGUF/resolve/main/blossom-v5-32b.Q3_K_M.gguf) | Q3_K_M | 15.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/blossom-v5-32b-GGUF/resolve/main/blossom-v5-32b.Q3_K_L.gguf) | Q3_K_L | 17.2 | | | [GGUF](https://huggingface.co/mradermacher/blossom-v5-32b-GGUF/resolve/main/blossom-v5-32b.IQ4_XS.gguf) | IQ4_XS | 17.8 | | | [GGUF](https://huggingface.co/mradermacher/blossom-v5-32b-GGUF/resolve/main/blossom-v5-32b.Q4_K_S.gguf) | Q4_K_S | 18.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/blossom-v5-32b-GGUF/resolve/main/blossom-v5-32b.Q4_K_M.gguf) | Q4_K_M | 19.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/blossom-v5-32b-GGUF/resolve/main/blossom-v5-32b.Q5_K_S.gguf) | Q5_K_S | 22.6 | | | [GGUF](https://huggingface.co/mradermacher/blossom-v5-32b-GGUF/resolve/main/blossom-v5-32b.Q5_K_M.gguf) | Q5_K_M | 23.2 | | | [GGUF](https://huggingface.co/mradermacher/blossom-v5-32b-GGUF/resolve/main/blossom-v5-32b.Q6_K.gguf) | Q6_K | 26.8 | very good quality | | [GGUF](https://huggingface.co/mradermacher/blossom-v5-32b-GGUF/resolve/main/blossom-v5-32b.Q8_0.gguf) | Q8_0 | 34.7 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
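Not part of the original card: if you prefer to fetch a single quant programmatically rather than through the browser, a minimal sketch with `huggingface_hub` (the filename matches the Q4_K_M entry in the table above):

```python
# Download one quant from this repository with huggingface_hub.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/blossom-v5-32b-GGUF",
    filename="blossom-v5-32b.Q4_K_M.gguf",  # the "fast, recommended" quant from the table
    local_dir=".",
)
print(path)
```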
naver/DUSt3R_ViTLarge_BaseDecoder_512_dpt
naver
"2024-06-24T07:55:21Z"
52,426
0
dust3r
[ "dust3r", "safetensors", "pytorch_model_hub_mixin", "model_hub_mixin", "image-to-3d", "region:us" ]
image-to-3d
"2024-06-24T07:49:20Z"
---
tags:
- pytorch_model_hub_mixin
- model_hub_mixin
- image-to-3d
library_name: dust3r
repo_url: https://github.com/naver/dust3r
---

## DUSt3R: Geometric 3D Vision Made Easy

```bibtex
@inproceedings{dust3r_cvpr24,
  title={DUSt3R: Geometric 3D Vision Made Easy},
  author={Shuzhe Wang and Vincent Leroy and Yohann Cabon and Boris Chidlovskii and Jerome Revaud},
  booktitle = {CVPR},
  year = {2024}
}
```

# License

The code is distributed under the CC BY-NC-SA 4.0 License. See [LICENSE](https://github.com/naver/dust3r/blob/main/LICENSE) for more information.
For the checkpoints, make sure to agree to the license of all the public training datasets and base checkpoints we used, in addition to CC-BY-NC-SA 4.0. See [section: Our Hyperparameters](https://github.com/naver/dust3r?tab=readme-ov-file#our-hyperparameters) for details.

# Model info

GitHub page: https://github.com/naver/dust3r/
Project page: https://dust3r.europe.naverlabs.com/

| Model name | Training resolutions | Head | Encoder | Decoder |
|------------|----------------------|------|---------|---------|
| DUSt3R_ViTLarge_BaseDecoder_512_dpt | 512x384, 512x336, 512x288, 512x256, 512x160 | DPT | ViT-L | ViT-B |

# How to use

First, [install dust3r](https://github.com/naver/dust3r?tab=readme-ov-file#installation).
To load the model:

```python
from dust3r.model import AsymmetricCroCo3DStereo
import torch

model = AsymmetricCroCo3DStereo.from_pretrained("naver/DUSt3R_ViTLarge_BaseDecoder_512_dpt")
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
```
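Beyond loading the checkpoint, here is a rough sketch of running pairwise inference, following the upstream dust3r README. The helper imports and their arguments are assumptions taken from that repository, and the image paths are placeholders.

```python
# Rough sketch following the upstream dust3r README; helper imports and arguments are
# assumptions from that repository, and the image paths below are placeholders.
import torch
from dust3r.model import AsymmetricCroCo3DStereo
from dust3r.inference import inference
from dust3r.utils.image import load_images
from dust3r.image_pairs import make_pairs

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = AsymmetricCroCo3DStereo.from_pretrained("naver/DUSt3R_ViTLarge_BaseDecoder_512_dpt").to(device)

images = load_images(["scene_view_1.jpg", "scene_view_2.jpg"], size=512)
pairs = make_pairs(images, scene_graph="complete", prefilter=None, symmetrize=True)

# Each prediction contains per-view pointmaps and confidence maps.
output = inference(pairs, model, device, batch_size=1)
print(output.keys())
```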
tohoku-nlp/bert-base-japanese-v3
tohoku-nlp
"2023-05-19T00:31:53Z"
52,378
40
transformers
[ "transformers", "pytorch", "tf", "jax", "bert", "pretraining", "ja", "dataset:cc100", "dataset:wikipedia", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2023-05-19T00:13:53Z"
--- license: apache-2.0 datasets: - cc100 - wikipedia language: - ja widget: - text: 東北大学で[MASK]の研究をしています。 --- # BERT base Japanese (unidic-lite with whole word masking, CC-100 and jawiki-20230102) This is a [BERT](https://github.com/google-research/bert) model pretrained on texts in the Japanese language. This version of the model processes input texts with word-level tokenization based on the Unidic 2.1.2 dictionary (available in [unidic-lite](https://pypi.org/project/unidic-lite/) package), followed by the WordPiece subword tokenization. Additionally, the model is trained with the whole word masking enabled for the masked language modeling (MLM) objective. The codes for the pretraining are available at [cl-tohoku/bert-japanese](https://github.com/cl-tohoku/bert-japanese/). ## Model architecture The model architecture is the same as the original BERT base model; 12 layers, 768 dimensions of hidden states, and 12 attention heads. ## Training Data The model is trained on the Japanese portion of [CC-100 dataset](https://data.statmt.org/cc-100/) and the Japanese version of Wikipedia. For Wikipedia, we generated a text corpus from the [Wikipedia Cirrussearch dump file](https://dumps.wikimedia.org/other/cirrussearch/) as of January 2, 2023. The corpus files generated from CC-100 and Wikipedia are 74.3GB and 4.9GB in size and consist of approximately 392M and 34M sentences, respectively. For the purpose of splitting texts into sentences, we used [fugashi](https://github.com/polm/fugashi) with [mecab-ipadic-NEologd](https://github.com/neologd/mecab-ipadic-neologd) dictionary (v0.0.7). ## Tokenization The texts are first tokenized by MeCab with the Unidic 2.1.2 dictionary and then split into subwords by the WordPiece algorithm. The vocabulary size is 32768. We used [fugashi](https://github.com/polm/fugashi) and [unidic-lite](https://github.com/polm/unidic-lite) packages for the tokenization. ## Training We trained the model first on the CC-100 corpus for 1M steps and then on the Wikipedia corpus for another 1M steps. For training of the MLM (masked language modeling) objective, we introduced whole word masking in which all of the subword tokens corresponding to a single word (tokenized by MeCab) are masked at once. For training of each model, we used a v3-8 instance of Cloud TPUs provided by [TPU Research Cloud](https://sites.research.google/trc/about/). ## Licenses The pretrained models are distributed under the Apache License 2.0. ## Acknowledgments This model is trained with Cloud TPUs provided by [TPU Research Cloud](https://sites.research.google/trc/about/) program.
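The card stops at the training details, so here is a minimal masked-word prediction sketch (an unofficial example, not from the authors). The tokenizer relies on the `fugashi` and `unidic-lite` packages mentioned above, so install them alongside `transformers` first.

```python
# Unofficial sketch: masked word prediction with this checkpoint.
# Requires: pip install transformers fugashi unidic-lite
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="tohoku-nlp/bert-base-japanese-v3")

# Same sentence as the card's widget example.
for pred in fill_mask("東北大学で[MASK]の研究をしています。", top_k=5):
    print(pred["token_str"], round(pred["score"], 3))
```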
sentence-transformers/sentence-t5-base
sentence-transformers
"2024-03-27T12:42:56Z"
52,361
43
sentence-transformers
[ "sentence-transformers", "pytorch", "rust", "safetensors", "t5", "feature-extraction", "sentence-similarity", "en", "arxiv:2108.08877", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
"2022-03-02T23:29:05Z"
--- language: en license: apache-2.0 library_name: sentence-transformers tags: - sentence-transformers - feature-extraction - sentence-similarity pipeline_tag: sentence-similarity --- # sentence-transformers/sentence-t5-base This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space. The model works well for sentence similarity tasks, but doesn't perform that well for semantic search tasks. This model was converted from the Tensorflow model [st5-base-1](https://tfhub.dev/google/sentence-t5/st5-base/1) to PyTorch. When using this model, have a look at the publication: [Sentence-T5: Scalable sentence encoders from pre-trained text-to-text models](https://arxiv.org/abs/2108.08877). The tfhub model and this PyTorch model can produce slightly different embeddings, however, when run on the same benchmarks, they produce identical results. The model uses only the encoder from a T5-base model. The weights are stored in FP16. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/sentence-t5-base') embeddings = model.encode(sentences) print(embeddings) ``` The model requires sentence-transformers version 2.2.0 or newer. ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/sentence-t5-base) ## Citing & Authors If you find this model helpful, please cite the respective publication: [Sentence-T5: Scalable sentence encoders from pre-trained text-to-text models](https://arxiv.org/abs/2108.08877)
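To make the sentence-similarity use case concrete, here is a short follow-up to the snippet above that scores the two example sentences with cosine similarity, using the `util` helpers that ship with sentence-transformers:

```python
# Score the example sentences with cosine similarity on top of the embeddings.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/sentence-t5-base")
sentences = ["This is an example sentence", "Each sentence is converted"]
embeddings = model.encode(sentences, convert_to_tensor=True)

# 2x2 similarity matrix; the off-diagonal entry is the score for the pair.
scores = util.cos_sim(embeddings, embeddings)
print(scores[0, 1].item())
```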
cardiffnlp/twitter-roberta-base-emotion-multilabel-latest
cardiffnlp
"2023-05-28T05:08:45Z"
52,328
16
transformers
[ "transformers", "pytorch", "tf", "roberta", "text-classification", "generated_from_keras_callback", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2023-03-01T21:14:01Z"
--- tags: - generated_from_keras_callback model-index: - name: twitter-roberta-base-emotion-multilabel-latest results: [] pipeline_tag: text-classification language: - en --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # twitter-roberta-base-emotion-multilabel-latest This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-2022-154m](https://huggingface.co/cardiffnlp/twitter-roberta-base-2022-154m) on the [`SemEval 2018 - Task 1 Affect in Tweets`](https://aclanthology.org/S18-1001/) `(subtask: E-c / multilabel classification)`. ## Performance Following metrics are achieved on the test split: - F1 (micro): 0.7169 - F1 (macro): 0.5464 - Jaccard Index (samples): 0.5970: ### Usage #### 1. [tweetnlp](https://pypi.org/project/tweetnlp/) Install tweetnlp via pip. ```shell pip install tweetnlp ``` Load the model in python. ```python import tweetnlp model = tweetnlp.load_model('topic_classification', model_name='cardiffnlp/twitter-roberta-base-emotion-multilabel-latest') model.predict("I bet everything will work out in the end :)") >> {'label': ['joy', 'optimism']} ``` #### 2. pipeline ```shell pip install -U tensorflow==2.10 ``` ```python from transformers import pipeline pipe = pipeline("text-classification", model="cardiffnlp/twitter-roberta-base-emotion-multilabel-latest", return_all_scores=True) pipe("I bet everything will work out in the end :)") >> [[{'label': 'anger', 'score': 0.018903767690062523}, {'label': 'anticipation', 'score': 0.28172484040260315}, {'label': 'disgust', 'score': 0.011607927270233631}, {'label': 'fear', 'score': 0.036411102861166}, {'label': 'joy', 'score': 0.8812029361724854}, {'label': 'love', 'score': 0.09591569006443024}, {'label': 'optimism', 'score': 0.9810988306999207}, {'label': 'pessimism', 'score': 0.016823478043079376}, {'label': 'sadness', 'score': 0.01889917254447937}, {'label': 'surprise', 'score': 0.02702752873301506}, {'label': 'trust', 'score': 0.4155798852443695}]] ``` ### Reference ``` @inproceedings{camacho-collados-etal-2022-tweetnlp, title={{T}weet{NLP}: {C}utting-{E}dge {N}atural {L}anguage {P}rocessing for {S}ocial {M}edia}, author={Camacho-Collados, Jose and Rezaee, Kiamehr and Riahi, Talayeh and Ushio, Asahi and Loureiro, Daniel and Antypas, Dimosthenis and Boisson, Joanne and Espinosa-Anke, Luis and Liu, Fangyu and Mart{\'\i}nez-C{\'a}mara, Eugenio and others}, booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", month = nov, year = "2022", address = "Abu Dhabi, U.A.E.", publisher = "Association for Computational Linguistics", } ```
cmarkea/distilcamembert-base-ner
cmarkea
"2024-06-12T13:37:16Z"
52,318
24
transformers
[ "transformers", "pytorch", "tf", "onnx", "safetensors", "camembert", "token-classification", "fr", "dataset:Jean-Baptiste/wikiner_fr", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2022-03-02T23:29:05Z"
--- language: fr license: mit datasets: - Jean-Baptiste/wikiner_fr widget: - text: "Boulanger, habitant à Boulanger et travaillant dans le magasin Boulanger situé dans la ville de Boulanger. Boulanger a écrit le livre éponyme Boulanger édité par la maison d'édition Boulanger." - text: "Quentin Jerome Tarantino naît le 27 mars 1963 à Knoxville, dans le Tennessee. Il est le fils de Connie McHugh, une infirmière, née le 3 septembre 1946, et de Tony Tarantino, acteur et musicien amateur né à New York. Ce dernier est d'origine italienne par son père ; sa mère a des ascendances irlandaises et cherokees. Il est prénommé d'après Quint Asper, le personnage joué par Burt Reynolds dans la série Gunsmoke et Quentin Compson, personnage du roman Le Bruit et la Fureur. Son père quitte le domicile familial avant même sa naissance. En 1965, sa mère déménage à Torrance, dans la banlieue sud de Los Angeles, et se remarie avec Curtis Zastoupil, un pianiste de bar, qui lui fait découvrir le cinéma. Le couple divorce alors que le jeune Quentin a une dizaine d'années." --- DistilCamemBERT-NER =================== We present DistilCamemBERT-NER, which is [DistilCamemBERT](https://huggingface.co/cmarkea/distilcamembert-base) fine-tuned for the NER (Named Entity Recognition) task for the French language. The work is inspired by [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) based on the [CamemBERT](https://huggingface.co/camembert-base) model. The problem of the modelizations based on CamemBERT is at the scaling moment, for the production phase, for example. Indeed, inference cost can be a technological issue. To counteract this effect, we propose this modelization which **divides the inference time by two** with the same consumption power thanks to [DistilCamemBERT](https://huggingface.co/cmarkea/distilcamembert-base). Dataset ------- The dataset used is [wikiner_fr](https://huggingface.co/datasets/Jean-Baptiste/wikiner_fr), which represents ~170k sentences labeled in 5 categories : * PER: personality ; * LOC: location ; * ORG: organization ; * MISC: miscellaneous entities (movies title, books, etc.) ; * O: background (Outside entity). Evaluation results ------------------ | **class** | **precision (%)** | **recall (%)** | **f1 (%)** | **support (#sub-word)** | | :------------: | :---------------: | :------------: | :--------: | :---------------------: | | **global** | 98.17 | 98.19 | 98.18 | 378,776 | | **PER** | 96.78 | 96.87 | 96.82 | 23,754 | | **LOC** | 94.05 | 93.59 | 93.82 | 27,196 | | **ORG** | 86.05 | 85.92 | 85.98 | 6,526 | | **MISC** | 88.78 | 84.69 | 86.69 | 11,891 | | **O** | 99.26 | 99.47 | 99.37 | 309,409 | Benchmark --------- This model performance is compared to 2 reference models (see below) with the metric f1 score. 
For the mean inference time measure, an AMD Ryzen 5 4500U @ 2.3GHz with 6 cores was used: | **model** | **time (ms)** | **PER (%)** | **LOC (%)** | **ORG (%)** | **MISC (%)** | **O (%)** | | :---------------------------------------------------------------------------------------------------------------: | :-----------: | :---------: | :---------: | :---------: | :-----------: | :-------: | | [cmarkea/distilcamembert-base-ner](https://huggingface.co/cmarkea/distilcamembert-base-ner) | **43.44** | **96.82** | **93.82** | **85.98** | **86.69** | **99.37** | | [Davlan/bert-base-multilingual-cased-ner-hrl](https://huggingface.co/Davlan/bert-base-multilingual-cased-ner-hrl) | 87.56 | 79.93 | 72.89 | 61.34 | n/a | 96.04 | | [flair/ner-french](https://huggingface.co/flair/ner-french) | 314.96 | 82.91 | 76.17 | 70.96 | 76.29 | 97.65 | How to use DistilCamemBERT-NER ------------------------------ ```python from transformers import pipeline ner = pipeline( task='ner', model="cmarkea/distilcamembert-base-ner", tokenizer="cmarkea/distilcamembert-base-ner", aggregation_strategy="simple" ) result = ner( "Le Crédit Mutuel Arkéa est une banque Française, elle comprend le CMB " "qui est une banque située en Bretagne et le CMSO qui est une banque " "qui se situe principalement en Aquitaine. C'est sous la présidence de " "Louis Lichou, dans les années 1980 que différentes filiales sont créées " "au sein du CMB et forment les principales filiales du groupe qui " "existent encore aujourd'hui (Federal Finance, Suravenir, Financo, etc.)." ) result [{'entity_group': 'ORG', 'score': 0.9974479, 'word': 'Crédit Mutuel Arkéa', 'start': 3, 'end': 22}, {'entity_group': 'LOC', 'score': 0.9000358, 'word': 'Française', 'start': 38, 'end': 47}, {'entity_group': 'ORG', 'score': 0.9788757, 'word': 'CMB', 'start': 66, 'end': 69}, {'entity_group': 'LOC', 'score': 0.99919766, 'word': 'Bretagne', 'start': 99, 'end': 107}, {'entity_group': 'ORG', 'score': 0.9594884, 'word': 'CMSO', 'start': 114, 'end': 118}, {'entity_group': 'LOC', 'score': 0.99935514, 'word': 'Aquitaine', 'start': 169, 'end': 178}, {'entity_group': 'PER', 'score': 0.99911094, 'word': 'Louis Lichou', 'start': 208, 'end': 220}, {'entity_group': 'ORG', 'score': 0.96226394, 'word': 'CMB', 'start': 291, 'end': 294}, {'entity_group': 'ORG', 'score': 0.9983959, 'word': 'Federal Finance', 'start': 374, 'end': 389}, {'entity_group': 'ORG', 'score': 0.9984454, 'word': 'Suravenir', 'start': 391, 'end': 400}, {'entity_group': 'ORG', 'score': 0.9985084, 'word': 'Financo', 'start': 402, 'end': 409}] ``` ### Optimum + ONNX ```python from optimum.onnxruntime import ORTModelForTokenClassification from transformers import AutoTokenizer, pipeline HUB_MODEL = "cmarkea/distilcamembert-base-ner" tokenizer = AutoTokenizer.from_pretrained(HUB_MODEL) model = ORTModelForTokenClassification.from_pretrained(HUB_MODEL) onnx_ner = pipeline("token-classification", model=model, tokenizer=tokenizer) # Quantized onnx model quantized_model = ORTModelForTokenClassification.from_pretrained( HUB_MODEL, file_name="model_quantized.onnx" ) ``` A usage sketch for the quantized model is given at the end of this card. Citation -------- ```bibtex @inproceedings{delestre:hal-03674695, TITLE = {{DistilCamemBERT : une distillation du mod{\`e}le fran{\c c}ais CamemBERT}}, AUTHOR = {Delestre, Cyrile and Amar, Abibatou}, URL = {https://hal.archives-ouvertes.fr/hal-03674695}, BOOKTITLE = {{CAp (Conf{\'e}rence sur l'Apprentissage automatique)}}, ADDRESS = {Vannes, France}, YEAR = {2022}, MONTH = Jul, KEYWORDS = {NLP ; Transformers ; CamemBERT ; Distillation}, PDF =
{https://hal.archives-ouvertes.fr/hal-03674695/file/cap2022.pdf}, HAL_ID = {hal-03674695}, HAL_VERSION = {v1}, } ```
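The Optimum + ONNX section above loads a quantized model but stops short of running it. The sketch below assumes the repository ships the `model_quantized.onnx` file referenced in that section and wraps the quantized model in the same token-classification pipeline; the example sentence simply reuses entities from the card's example.

```python
from optimum.onnxruntime import ORTModelForTokenClassification
from transformers import AutoTokenizer, pipeline

HUB_MODEL = "cmarkea/distilcamembert-base-ner"

tokenizer = AutoTokenizer.from_pretrained(HUB_MODEL)
quantized_model = ORTModelForTokenClassification.from_pretrained(
    HUB_MODEL, file_name="model_quantized.onnx"
)

# the quantized ONNX model plugs into the regular token-classification pipeline
onnx_ner = pipeline(
    "token-classification",
    model=quantized_model,
    tokenizer=tokenizer,
    aggregation_strategy="simple",
)
print(onnx_ner("Louis Lichou a dirigé le Crédit Mutuel Arkéa en Bretagne."))
```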
sanali209/nsfwfilter
sanali209
"2023-08-29T01:19:05Z"
52,147
14
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2023-08-21T13:44:01Z"
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: sanali209/nsfwfilter results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.9273858666419983 --- # sanali209/nsfwfilter Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images
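The autogenerated card stops at the example images, so here is a minimal, hedged sketch of running inference with the standard image-classification pipeline; the image path is a placeholder you would replace with your own file or URL.

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="sanali209/nsfwfilter")

# accepts a local path, a URL, or a PIL image; this path is a placeholder
predictions = classifier("path/to/your_image.jpg")
print(predictions)  # list of {'label': ..., 'score': ...} entries
```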
flair/ner-english-fast
flair
"2021-02-26T15:39:34Z"
52,000
18
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "en", "dataset:conll2003", "region:us" ]
token-classification
"2022-03-02T23:29:05Z"
--- tags: - flair - token-classification - sequence-tagger-model language: en datasets: - conll2003 widget: - text: "George Washington went to Washington" --- ## English NER in Flair (fast model) This is the fast 4-class NER model for English that ships with [Flair](https://github.com/flairNLP/flair/). F1-Score: **92,92** (corrected CoNLL-03) Predicts 4 tags: | **tag** | **meaning** | |---------------------------------|-----------| | PER | person name | | LOC | location name | | ORG | organization name | | MISC | other name | Based on [Flair embeddings](https://www.aclweb.org/anthology/C18-1139/) and LSTM-CRF. --- ### Demo: How to use in Flair Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`) ```python from flair.data import Sentence from flair.models import SequenceTagger # load tagger tagger = SequenceTagger.load("flair/ner-english-fast") # make example sentence sentence = Sentence("George Washington went to Washington") # predict NER tags tagger.predict(sentence) # print sentence print(sentence) # print predicted NER spans print('The following NER tags are found:') # iterate over entities and print for entity in sentence.get_spans('ner'): print(entity) ``` This yields the following output: ``` Span [1,2]: "George Washington" [− Labels: PER (0.9515)] Span [5]: "Washington" [− Labels: LOC (0.992)] ``` So, the entities "*George Washington*" (labeled as a **person**) and "*Washington*" (labeled as a **location**) are found in the sentence "*George Washington went to Washington*". --- ### Training: Script to train this model The following Flair script was used to train this model: ```python from flair.data import Corpus from flair.datasets import CONLL_03 from flair.embeddings import WordEmbeddings, StackedEmbeddings, FlairEmbeddings # 1. get the corpus corpus: Corpus = CONLL_03() # 2. what tag do we want to predict? tag_type = 'ner' # 3. make the tag dictionary from the corpus tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type) # 4. initialize each embedding we use embedding_types = [ # GloVe embeddings WordEmbeddings('glove'), # contextual string embeddings, forward FlairEmbeddings('news-forward-fast'), # contextual string embeddings, backward FlairEmbeddings('news-backward-fast'), ] # embedding stack consists of Flair and GloVe embeddings embeddings = StackedEmbeddings(embeddings=embedding_types) # 5. initialize sequence tagger from flair.models import SequenceTagger tagger = SequenceTagger(hidden_size=256, embeddings=embeddings, tag_dictionary=tag_dictionary, tag_type=tag_type) # 6. initialize trainer from flair.trainers import ModelTrainer trainer = ModelTrainer(tagger, corpus) # 7. run training trainer.train('resources/taggers/ner-english', train_with_dev=True, max_epochs=150) ``` --- ### Cite Please cite the following paper when using this model. ``` @inproceedings{akbik2018coling, title={Contextual String Embeddings for Sequence Labeling}, author={Akbik, Alan and Blythe, Duncan and Vollgraf, Roland}, booktitle = {{COLING} 2018, 27th International Conference on Computational Linguistics}, pages = {1638--1649}, year = {2018} } ``` --- ### Issues? The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/).
mradermacher/Swallow-70b-NVE-instruct-hf-GGUF
mradermacher
"2024-06-30T18:20:10Z"
51,938
0
transformers
[ "transformers", "gguf", "en", "ja", "base_model:tokyotech-llm/Swallow-70b-NVE-instruct-hf", "license:llama2", "endpoints_compatible", "region:us" ]
null
"2024-06-30T01:32:23Z"
--- base_model: tokyotech-llm/Swallow-70b-NVE-instruct-hf language: - en - ja library_name: transformers license: llama2 model_type: llama quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/tokyotech-llm/Swallow-70b-NVE-instruct-hf <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Swallow-70b-NVE-instruct-hf-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-NVE-instruct-hf-GGUF/resolve/main/Swallow-70b-NVE-instruct-hf.Q2_K.gguf) | Q2_K | 25.6 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-NVE-instruct-hf-GGUF/resolve/main/Swallow-70b-NVE-instruct-hf.IQ3_XS.gguf) | IQ3_XS | 28.4 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-NVE-instruct-hf-GGUF/resolve/main/Swallow-70b-NVE-instruct-hf.IQ3_S.gguf) | IQ3_S | 30.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-NVE-instruct-hf-GGUF/resolve/main/Swallow-70b-NVE-instruct-hf.Q3_K_S.gguf) | Q3_K_S | 30.0 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-NVE-instruct-hf-GGUF/resolve/main/Swallow-70b-NVE-instruct-hf.IQ3_M.gguf) | IQ3_M | 31.0 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-NVE-instruct-hf-GGUF/resolve/main/Swallow-70b-NVE-instruct-hf.Q3_K_M.gguf) | Q3_K_M | 33.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-NVE-instruct-hf-GGUF/resolve/main/Swallow-70b-NVE-instruct-hf.Q3_K_L.gguf) | Q3_K_L | 36.2 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-NVE-instruct-hf-GGUF/resolve/main/Swallow-70b-NVE-instruct-hf.IQ4_XS.gguf) | IQ4_XS | 37.3 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-NVE-instruct-hf-GGUF/resolve/main/Swallow-70b-NVE-instruct-hf.Q4_K_S.gguf) | Q4_K_S | 39.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-NVE-instruct-hf-GGUF/resolve/main/Swallow-70b-NVE-instruct-hf.Q4_K_M.gguf) | Q4_K_M | 41.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-NVE-instruct-hf-GGUF/resolve/main/Swallow-70b-NVE-instruct-hf.Q5_K_S.gguf) | Q5_K_S | 47.6 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-70b-NVE-instruct-hf-GGUF/resolve/main/Swallow-70b-NVE-instruct-hf.Q5_K_M.gguf) | Q5_K_M | 48.9 | | | [PART 1](https://huggingface.co/mradermacher/Swallow-70b-NVE-instruct-hf-GGUF/resolve/main/Swallow-70b-NVE-instruct-hf.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Swallow-70b-NVE-instruct-hf-GGUF/resolve/main/Swallow-70b-NVE-instruct-hf.Q6_K.gguf.part2of2) | Q6_K | 56.7 | very good quality | | [PART 1](https://huggingface.co/mradermacher/Swallow-70b-NVE-instruct-hf-GGUF/resolve/main/Swallow-70b-NVE-instruct-hf.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Swallow-70b-NVE-instruct-hf-GGUF/resolve/main/Swallow-70b-NVE-instruct-hf.Q8_0.gguf.part2of2) | Q8_0 | 73.4 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is 
better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
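As a complement to the llama.cpp-style usage referenced above, here is a hedged Python sketch that downloads one of the single-file quants from this repo and runs it with the third-party `llama-cpp-python` bindings. The binding, the context size, the GPU-layer setting, and the prompt are illustrative assumptions not mentioned in this card; for best results, use the prompt template from the original Swallow instruct model card.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python (assumed binding)

# fetch one of the single-file quants listed in the table above
gguf_path = hf_hub_download(
    repo_id="mradermacher/Swallow-70b-NVE-instruct-hf-GGUF",
    filename="Swallow-70b-NVE-instruct-hf.Q4_K_S.gguf",
)

# n_ctx and n_gpu_layers are illustrative; tune them for your hardware
llm = Llama(model_path=gguf_path, n_ctx=4096, n_gpu_layers=-1)
out = llm("Translate to Japanese: Good morning.", max_tokens=64)
print(out["choices"][0]["text"])
```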
nvidia/segformer-b0-finetuned-ade-512-512
nvidia
"2024-01-14T10:46:46Z"
51,937
124
transformers
[ "transformers", "pytorch", "tf", "safetensors", "segformer", "vision", "image-segmentation", "dataset:scene_parse_150", "arxiv:2105.15203", "license:other", "endpoints_compatible", "region:us" ]
image-segmentation
"2022-03-02T23:29:05Z"
--- license: other tags: - vision - image-segmentation datasets: - scene_parse_150 widget: - src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg example_title: House - src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000002.jpg example_title: Castle --- # SegFormer (b0-sized) model fine-tuned on ADE20k SegFormer model fine-tuned on ADE20k at resolution 512x512. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer). Disclaimer: The team releasing SegFormer did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset. ## Intended uses & limitations You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to perform semantic segmentation on an image from the COCO 2017 dataset: ```python from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation from PIL import Image import requests processor = SegformerImageProcessor.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512") model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512") url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) inputs = processor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # shape (batch_size, num_labels, height/4, width/4) ``` For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#). A short sketch that turns these logits into a per-pixel segmentation map is appended at the end of this card. ### License The license for this model can be found [here](https://github.com/NVlabs/SegFormer/blob/master/LICENSE). ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2105-15203, author = {Enze Xie and Wenhai Wang and Zhiding Yu and Anima Anandkumar and Jose M. Alvarez and Ping Luo}, title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers}, journal = {CoRR}, volume = {abs/2105.15203}, year = {2021}, url = {https://arxiv.org/abs/2105.15203}, eprinttype = {arXiv}, eprint = {2105.15203}, timestamp = {Wed, 02 Jun 2021 11:46:42 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
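The usage snippet above stops at the raw logits, which are a quarter of the input resolution. The following sketch, which assumes only standard PyTorch upsampling, resizes them back to the image size and takes the per-pixel argmax to obtain an ADE20k segmentation map.

```python
import requests
import torch
from PIL import Image
from transformers import SegformerForSemanticSegmentation, SegformerImageProcessor

processor = SegformerImageProcessor.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512")
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

with torch.no_grad():
    outputs = model(**processor(images=image, return_tensors="pt"))

# upsample the (batch_size, num_labels, height/4, width/4) logits to the input size
upsampled = torch.nn.functional.interpolate(
    outputs.logits,
    size=image.size[::-1],  # PIL gives (width, height); interpolate expects (height, width)
    mode="bilinear",
    align_corners=False,
)

# per-pixel class index over the 150 ADE20k categories
segmentation_map = upsampled.argmax(dim=1)[0]
print(segmentation_map.shape)  # (height, width)
```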
flair/ner-english-ontonotes-fast
flair
"2023-04-05T20:14:18Z"
51,924
19
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "en", "dataset:ontonotes", "region:us" ]
token-classification
"2022-03-02T23:29:05Z"
--- tags: - flair - token-classification - sequence-tagger-model language: en datasets: - ontonotes widget: - text: "On September 1st George Washington won 1 dollar." --- ## English NER in Flair (Ontonotes fast model) This is the fast version of the 18-class NER model for English that ships with [Flair](https://github.com/flairNLP/flair/). F1-Score: **89.3** (Ontonotes) Predicts 18 tags: | **tag** | **meaning** | |---------------------------------|-----------| | CARDINAL | cardinal value | | DATE | date value | | EVENT | event name | | FAC | building name | | GPE | geo-political entity | | LANGUAGE | language name | | LAW | law name | | LOC | location name | | MONEY | money name | | NORP | affiliation | | ORDINAL | ordinal value | | ORG | organization name | | PERCENT | percent value | | PERSON | person name | | PRODUCT | product name | | QUANTITY | quantity value | | TIME | time value | | WORK_OF_ART | name of work of art | Based on [Flair embeddings](https://www.aclweb.org/anthology/C18-1139/) and LSTM-CRF. --- ### Demo: How to use in Flair Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`) ```python from flair.data import Sentence from flair.models import SequenceTagger # load tagger tagger = SequenceTagger.load("flair/ner-english-ontonotes-fast") # make example sentence sentence = Sentence("On September 1st George Washington won 1 dollar.") # predict NER tags tagger.predict(sentence) # print sentence print(sentence) # print predicted NER spans print('The following NER tags are found:') # iterate over entities and print for entity in sentence.get_spans('ner'): print(entity) ``` This yields the following output: ``` Span [2,3]: "September 1st" [− Labels: DATE (0.9655)] Span [4,5]: "George Washington" [− Labels: PERSON (0.8243)] Span [7,8]: "1 dollar" [− Labels: MONEY (0.8022)] ``` So, the entities "*September 1st*" (labeled as a **date**), "*George Washington*" (labeled as a **person**) and "*1 dollar*" (labeled as a **money**) are found in the sentence "*On September 1st George Washington won 1 dollar*". --- ### Training: Script to train this model The following Flair script was used to train this model: ```python from flair.data import Corpus from flair.datasets import ColumnCorpus from flair.embeddings import WordEmbeddings, StackedEmbeddings, FlairEmbeddings # 1. load the corpus (Ontonotes does not ship with Flair, you need to download and reformat into a column format yourself) corpus: Corpus = ColumnCorpus( "resources/tasks/onto-ner", column_format={0: "text", 1: "pos", 2: "upos", 3: "ner"}, tag_to_bioes="ner", ) # 2. what tag do we want to predict? tag_type = 'ner' # 3. make the tag dictionary from the corpus tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type) # 4. initialize each embedding we use embedding_types = [ # GloVe embeddings WordEmbeddings('en-crawl'), # contextual string embeddings, forward FlairEmbeddings('news-forward-fast'), # contextual string embeddings, backward FlairEmbeddings('news-backward-fast'), ] # embedding stack consists of Flair and GloVe embeddings embeddings = StackedEmbeddings(embeddings=embedding_types) # 5. initialize sequence tagger from flair.models import SequenceTagger tagger = SequenceTagger(hidden_size=256, embeddings=embeddings, tag_dictionary=tag_dictionary, tag_type=tag_type) # 6. initialize trainer from flair.trainers import ModelTrainer trainer = ModelTrainer(tagger, corpus) # 7. 
run training trainer.train('resources/taggers/ner-english-ontonotes-fast', train_with_dev=True, max_epochs=150) ``` --- ### Cite Please cite the following paper when using this model. ``` @inproceedings{akbik2018coling, title={Contextual String Embeddings for Sequence Labeling}, author={Akbik, Alan and Blythe, Duncan and Vollgraf, Roland}, booktitle = {{COLING} 2018, 27th International Conference on Computational Linguistics}, pages = {1638--1649}, year = {2018} } ``` --- ### Issues? The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/).
Qwen/Qwen2-7B-Instruct-GGUF
Qwen
"2024-06-17T16:34:14Z"
51,885
105
null
[ "gguf", "chat", "text-generation", "en", "license:apache-2.0", "region:us" ]
text-generation
"2024-06-06T13:18:05Z"
--- license: apache-2.0 language: - en pipeline_tag: text-generation tags: - chat --- # Qwen2-7B-Instruct-GGUF ## Introduction Qwen2 is the new series of Qwen large language models. For Qwen2, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters, including a Mixture-of-Experts model. This repo contains the instruction-tuned 7B Qwen2 model. Compared with the state-of-the-art opensource language models, including the previous released Qwen1.5, Qwen2 has generally surpassed most opensource models and demonstrated competitiveness against proprietary models across a series of benchmarks targeting for language understanding, language generation, multilingual capability, coding, mathematics, reasoning, etc. For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2/), [GitHub](https://github.com/QwenLM/Qwen2), and [Documentation](https://qwen.readthedocs.io/en/latest/). In this repo, we provide `fp16` model and quantized models in the GGUF formats, including `q5_0`, `q5_k_m`, `q6_k` and `q8_0`. ## Model Details Qwen2 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and codes. ## Training details We pretrained the models with a large amount of data, and we post-trained the models with both supervised finetuning and direct preference optimization. ## Requirements We advise you to clone [`llama.cpp`](https://github.com/ggerganov/llama.cpp) and install it following the official guide. We follow the latest version of llama.cpp. In the following demonstration, we assume that you are running commands under the repository `llama.cpp`. ## How to use Cloning the repo may be inefficient, and thus you can manually download the GGUF file that you need or use `huggingface-cli` (`pip install huggingface_hub`) as shown below: ```shell huggingface-cli download Qwen/Qwen2-7B-Instruct-GGUF qwen2-7b-instruct-q5_k_m.gguf --local-dir . --local-dir-use-symlinks False ``` To run Qwen2, you can use `llama-cli` (the previous `main`) or `llama-server` (the previous `server`). We recommend using the `llama-server` as it is simple and compatible with OpenAI API. For example: ```bash ./llama-server -m qwen2-7b-instruct-q5_k_m.gguf -ngl 28 -fa ``` (Note: `-ngl 28` refers to offloading 24 layers to GPUs, and `-fa` refers to the use of flash attention.) Then it is easy to access the deployed service with OpenAI API: ```python import openai client = openai.OpenAI( base_url="http://localhost:8080/v1", # "http://<Your api-server IP>:port" api_key = "sk-no-key-required" ) completion = client.chat.completions.create( model="qwen", messages=[ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "tell me something about michael jordan"} ] ) print(completion.choices[0].message.content) ``` If you choose to use `llama-cli`, pay attention to the removal of `-cml` for the ChatML template. Instead you should use `--in-prefix` and `--in-suffix` to tackle this problem. 
```bash ./llama-cli -m qwen2-7b-instruct-q5_k_m.gguf \ -n 512 -co -i -if -f prompts/chat-with-qwen.txt \ --in-prefix "<|im_start|>user\n" \ --in-suffix "<|im_end|>\n<|im_start|>assistant\n" \ -ngl 24 -fa ``` ## Evaluation We implement perplexity evaluation using wikitext following the practice of `llama.cpp` with `./llama-perplexity` (the previous `./perplexity`). In the following we report the PPL of GGUF models of different sizes and different quantization levels. |Size | fp16 | q8_0 | q6_k | q5_k_m | q5_0 | q4_k_m | q4_0 | q3_k_m | q2_k | iq1_m | |--------|---------|---------|---------|---------|---------|---------|---------|---------|---------|---------| |0.5B | 15.11 | 15.13 | 15.14 | 15.24 | 15.40 | 15.36 | 16.28 | 15.70 | 16.74 | - | |1.5B | 10.43 | 10.43 | 10.45 | 10.50 | 10.56 | 10.61 | 10.79 | 11.08 | 13.04 | - | |7B | 7.93 | 7.94 | 7.96 | 7.97 | 7.98 | 8.02 | 8.19 | 8.20 | 10.58 | - | |57B-A14B| 6.81 | 6.81 | 6.83 | 6.84 | 6.89 | 6.99 | 7.02 | 7.43 | - | - | |72B | 5.58 | 5.58 | 5.59 | 5.59 | 5.60 | 5.61 | 5.66 | 5.68 | 5.91 | 6.75 | ## Citation If you find our work helpful, feel free to give us a cite. ``` @article{qwen2, title={Qwen2 Technical Report}, year={2024} } ```
lllyasviel/control_v11p_sd15_inpaint
lllyasviel
"2023-05-26T09:58:45Z"
51,878
89
diffusers
[ "diffusers", "safetensors", "art", "controlnet", "stable-diffusion", "controlnet-v1-1", "image-to-image", "arxiv:2302.05543", "base_model:runwayml/stable-diffusion-v1-5", "license:openrail", "region:us" ]
image-to-image
"2023-04-14T19:25:42Z"
--- license: openrail base_model: runwayml/stable-diffusion-v1-5 tags: - art - controlnet - stable-diffusion - controlnet-v1-1 - image-to-image duplicated_from: ControlNet-1-1-preview/control_v11p_sd15_inpaint --- # Controlnet - v1.1 - *InPaint Version* **Controlnet v1.1** was released in [lllyasviel/ControlNet-v1-1](https://huggingface.co/lllyasviel/ControlNet-v1-1) by [Lvmin Zhang](https://huggingface.co/lllyasviel). This checkpoint is a conversion of [the original checkpoint](https://huggingface.co/lllyasviel/ControlNet-v1-1/blob/main/control_v11p_sd15_inpaint.pth) into `diffusers` format. It can be used in combination with **Stable Diffusion**, such as [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5). For more details, please also have a look at the [🧨 Diffusers docs](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/controlnet). ControlNet is a neural network structure to control diffusion models by adding extra conditions. ![img](./sd.png) This checkpoint corresponds to the ControlNet conditioned on **inpaint images**. ## Model Details - **Developed by:** Lvmin Zhang, Maneesh Agrawala - **Model type:** Diffusion-based text-to-image generation model - **Language(s):** English - **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based. - **Resources for more information:** [GitHub Repository](https://github.com/lllyasviel/ControlNet), [Paper](https://arxiv.org/abs/2302.05543). - **Cite as:** @misc{zhang2023adding, title={Adding Conditional Control to Text-to-Image Diffusion Models}, author={Lvmin Zhang and Maneesh Agrawala}, year={2023}, eprint={2302.05543}, archivePrefix={arXiv}, primaryClass={cs.CV} } ## Introduction Controlnet was proposed in [*Adding Conditional Control to Text-to-Image Diffusion Models*](https://arxiv.org/abs/2302.05543) by Lvmin Zhang, Maneesh Agrawala. The abstract reads as follows: *We present a neural network structure, ControlNet, to control pretrained large diffusion models to support additional input conditions. The ControlNet learns task-specific conditions in an end-to-end way, and the learning is robust even when the training dataset is small (< 50k). Moreover, training a ControlNet is as fast as fine-tuning a diffusion model, and the model can be trained on a personal devices. Alternatively, if powerful computation clusters are available, the model can scale to large amounts (millions to billions) of data. We report that large diffusion models like Stable Diffusion can be augmented with ControlNets to enable conditional inputs like edge maps, segmentation maps, keypoints, etc. This may enrich the methods to control large diffusion models and further facilitate related applications.* ## Example It is recommended to use the checkpoint with [Stable Diffusion v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) as the checkpoint has been trained on it. Experimentally, the checkpoint can be used with other diffusion models such as dreamboothed stable diffusion. 1. 
Let's install `diffusers` and related packages: ``` $ pip install diffusers transformers accelerate ``` 2. Run code: ```python # !pip install transformers accelerate from diffusers import StableDiffusionControlNetInpaintPipeline, ControlNetModel, DDIMScheduler from diffusers.utils import load_image import numpy as np import torch init_image = load_image( "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_inpaint/boy.png" ) init_image = init_image.resize((512, 512)) generator = torch.Generator(device="cpu").manual_seed(1) mask_image = load_image( "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_inpaint/boy_mask.png" ) mask_image = mask_image.resize((512, 512)) def make_inpaint_condition(image, image_mask): image = np.array(image.convert("RGB")).astype(np.float32) / 255.0 image_mask = np.array(image_mask.convert("L")).astype(np.float32) / 255.0 assert image.shape[0:1] == image_mask.shape[0:1], "image and image_mask must have the same image size" image[image_mask > 0.5] = -1.0 # set as masked pixel image = np.expand_dims(image, 0).transpose(0, 3, 1, 2) image = torch.from_numpy(image) return image control_image = make_inpaint_condition(init_image, mask_image) controlnet = ControlNetModel.from_pretrained( "lllyasviel/control_v11p_sd15_inpaint", torch_dtype=torch.float16 ) pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained( "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16 ) pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config) pipe.enable_model_cpu_offload() # generate image image = pipe( "a handsome man with ray-ban sunglasses", num_inference_steps=20, generator=generator, eta=1.0, image=init_image, mask_image=mask_image, control_image=control_image, ).images[0] ``` ![original](https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_inpaint/boy.png) ![img](https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_inpaint/boy_with_mask.png) ![img](https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/boy_ray_ban.png) ## Other released checkpoints v1-1 The authors released 14 different checkpoints, each trained with [Stable Diffusion v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) on a different type of conditioning: | Model Name | Control Image Overview| Condition Image | Control Image Example | Generated Image Example | |---|---|---|---|---| |[lllyasviel/control_v11p_sd15_canny](https://huggingface.co/lllyasviel/control_v11p_sd15_canny)<br/> | *Trained with canny edge detection* | A monochrome image with white edges on a black background.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_canny/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_canny/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_canny/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_canny/resolve/main/images/image_out.png"/></a>| |[lllyasviel/control_v11e_sd15_ip2p](https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p)<br/> | *Trained with pixel to pixel instruction* | No condition.|<a href="https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;"
src="https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p/resolve/main/images/image_out.png"/></a>| |[lllyasviel/control_v11p_sd15_inpaint](https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint)<br/> | Trained with image inpainting | No condition.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint/resolve/main/images/output.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint/resolve/main/images/output.png"/></a>| |[lllyasviel/control_v11p_sd15_mlsd](https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd)<br/> | Trained with multi-level line segment detection | An image with annotated line segments.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd/resolve/main/images/image_out.png"/></a>| |[lllyasviel/control_v11f1p_sd15_depth](https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth)<br/> | Trained with depth estimation | An image with depth information, usually represented as a grayscale image.|<a href="https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/image_out.png"/></a>| |[lllyasviel/control_v11p_sd15_normalbae](https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae)<br/> | Trained with surface normal estimation | An image with surface normal information, usually represented as a color-coded image.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae/resolve/main/images/image_out.png"/></a>| |[lllyasviel/control_v11p_sd15_seg](https://huggingface.co/lllyasviel/control_v11p_sd15_seg)<br/> | Trained with image segmentation | An image with segmented regions, usually represented as a color-coded image.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_seg/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_seg/resolve/main/images/control.png"/></a>|<a 
href="https://huggingface.co/lllyasviel/control_v11p_sd15_seg/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_seg/resolve/main/images/image_out.png"/></a>| |[lllyasviel/control_v11p_sd15_lineart](https://huggingface.co/lllyasviel/control_v11p_sd15_lineart)<br/> | Trained with line art generation | An image with line art, usually black lines on a white background.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_lineart/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_lineart/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_lineart/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_lineart/resolve/main/images/image_out.png"/></a>| |[lllyasviel/control_v11p_sd15s2_lineart_anime](https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime)<br/> | Trained with anime line art generation | An image with anime-style line art.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime/resolve/main/images/image_out.png"/></a>| |[lllyasviel/control_v11p_sd15_openpose](https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime)<br/> | Trained with human pose estimation | An image with human poses, usually represented as a set of keypoints or skeletons.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/image_out.png"/></a>| |[lllyasviel/control_v11p_sd15_scribble](https://huggingface.co/lllyasviel/control_v11p_sd15_scribble)<br/> | Trained with scribble-based image generation | An image with scribbles, usually random or user-drawn strokes.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_scribble/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_scribble/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_scribble/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_scribble/resolve/main/images/image_out.png"/></a>| |[lllyasviel/control_v11p_sd15_softedge](https://huggingface.co/lllyasviel/control_v11p_sd15_softedge)<br/> | Trained with soft edge image generation | An image with soft edges, usually to create a more painterly or artistic effect.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_softedge/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_softedge/resolve/main/images/control.png"/></a>|<a 
href="https://huggingface.co/lllyasviel/control_v11p_sd15_softedge/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_softedge/resolve/main/images/image_out.png"/></a>| |[lllyasviel/control_v11e_sd15_shuffle](https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle)<br/> | Trained with image shuffling | An image with shuffled patches or regions.|<a href="https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle/resolve/main/images/image_out.png"/></a>| |[lllyasviel/control_v11f1e_sd15_tile](https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile)<br/> | Trained with image tiling | A blurry image or part of an image .|<a href="https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile/resolve/main/images/original.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile/resolve/main/images/original.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile/resolve/main/images/output.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile/resolve/main/images/output.png"/></a>| ## More information For more information, please also have a look at the [Diffusers ControlNet Blog Post](https://huggingface.co/blog/controlnet) and have a look at the [official docs](https://github.com/lllyasviel/ControlNet-v1-1-nightly).
ai4bharat/IndicNER
ai4bharat
"2022-12-21T02:45:48Z"
51,827
15
transformers
[ "transformers", "pytorch", "bert", "token-classification", "ner", "Pytorch", "transformer", "multilingual", "nlp", "indicnlp", "as", "bn", "gu", "hi", "kn", "ml", "mr", "or", "pa", "ta", "te", "dataset:Samanantar", "arxiv:2212.10168", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2022-05-23T11:12:43Z"
--- language: - as - bn - gu - hi - kn - ml - mr - or - pa - ta - te license: mit datasets: - Samanantar tags: - ner - Pytorch - transformer - multilingual - nlp - indicnlp --- # IndicNER IndicNER is a model trained to complete the task of identifying named entities from sentences in Indian languages. Our model is specifically fine-tuned to the 11 Indian languages mentioned above over millions of sentences. The model is then benchmarked over a human annotated testset and multiple other publicly available Indian NER datasets. The 11 languages covered by IndicNER are: Assamese, Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Oriya, Punjabi, Tamil, Telugu. ## Training Corpus Our model was trained on a [dataset](https://huggingface.co/datasets/ai4bharat/naamapadam) which we mined from the existing [Samanantar Corpus](https://huggingface.co/datasets/ai4bharat/samanantar). We used a bert-base-multilingual-uncased model as the starting point and then fine-tuned it to the NER dataset mentioned previously. ## Downloads Download from this same Huggingface repo. Update 20 Dec 2022: We released a new paper documenting IndicNER and Naamapadam. We have a different model reported in the paper. We will update the repo here soon with this model. ## Usage You can use [this Colab notebook](https://colab.research.google.com/drive/1sYa-PDdZQ_c9SzUgnhyb3Fl7j96QBCS8?usp=sharing) for samples on using IndicNER or for finetuning a pre-trained model on Naampadam dataset to build your own NER models. <!-- citing information --> ## Citing If you are using IndicNER, please cite the following article: ``` @misc{mhaske2022naamapadam, doi = {10.48550/ARXIV.2212.10168}, url = {https://arxiv.org/abs/2212.10168}, author = {Mhaske, Arnav and Kedia, Harshit and Doddapaneni, Sumanth and Khapra, Mitesh M. and Kumar, Pratyush and Murthy, Rudra and Kunchukuttan, Anoop}, title = {Naamapadam: A Large-Scale Named Entity Annotated Data for Indic Languages} publisher = {arXiv}, year = {2022}, copyright = {arXiv.org perpetual, non-exclusive license} } ``` We would like to hear from you if: - You are using our resources. Please let us know how you are putting these resources to use. - You have any feedback on these resources. <!-- License --> ## License The IndicNER code (and models) are released under the MIT License. <!-- Contributors --> ## Contributors - Arnav Mhaske <sub> ([AI4Bharat](https://ai4bharat.org), [IITM](https://www.iitm.ac.in)) </sub> - Harshit Kedia <sub> ([AI4Bharat](https://ai4bharat.org), [IITM](https://www.iitm.ac.in)) </sub> - Sumanth Doddapaneni <sub> ([AI4Bharat](https://ai4bharat.org), [IITM](https://www.iitm.ac.in)) </sub> - Mitesh M. Khapra <sub> ([AI4Bharat](https://ai4bharat.org), [IITM](https://www.iitm.ac.in)) </sub> - Pratyush Kumar <sub> ([AI4Bharat](https://ai4bharat.org), [Microsoft](https://www.microsoft.com/en-in/), [IITM](https://www.iitm.ac.in)) </sub> - Rudra Murthy <sub> ([AI4Bharat](https://ai4bharat.org), [IBM](https://www.ibm.com))</sub> - Anoop Kunchukuttan <sub> ([AI4Bharat](https://ai4bharat.org), [Microsoft](https://www.microsoft.com/en-in/), [IITM](https://www.iitm.ac.in)) </sub> This work is the outcome of a volunteer effort as part of the [AI4Bharat initiative](https://ai4bharat.iitm.ac.in). <!-- Contact --> ## Contact - Anoop Kunchukuttan ([anoop.kunchukuttan@gmail.com](mailto:anoop.kunchukuttan@gmail.com)) - Rudra Murthy V ([rmurthyv@in.ibm.com](mailto:rmurthyv@in.ibm.com))
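The card points to a Colab notebook for usage; as a quick inline alternative, here is a hedged sketch with the standard token-classification pipeline. The Hindi sentence is an illustrative example, and `aggregation_strategy="simple"` is an assumption about how you want sub-word predictions merged into entity spans.

```python
from transformers import pipeline

ner = pipeline(
    "ner",
    model="ai4bharat/IndicNER",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)

# illustrative Hindi sentence: "Sachin Tendulkar was born in Mumbai."
print(ner("सचिन तेंदुलकर का जन्म मुंबई में हुआ था।"))
```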
mradermacher/ChatAllInOne-Yi-34B-200K-V1-GGUF
mradermacher
"2024-06-27T11:48:51Z"
51,826
0
transformers
[ "transformers", "gguf", "en", "dataset:DrNicefellow/CHAT-ALL-IN-ONE-v1", "base_model:DrNicefellow/ChatAllInOne-Yi-34B-200K-V1", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-27T02:53:05Z"
--- base_model: DrNicefellow/ChatAllInOne-Yi-34B-200K-V1 datasets: - DrNicefellow/CHAT-ALL-IN-ONE-v1 language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/DrNicefellow/ChatAllInOne-Yi-34B-200K-V1 <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/ChatAllInOne-Yi-34B-200K-V1-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/ChatAllInOne-Yi-34B-200K-V1-GGUF/resolve/main/ChatAllInOne-Yi-34B-200K-V1.Q2_K.gguf) | Q2_K | 12.9 | | | [GGUF](https://huggingface.co/mradermacher/ChatAllInOne-Yi-34B-200K-V1-GGUF/resolve/main/ChatAllInOne-Yi-34B-200K-V1.IQ3_XS.gguf) | IQ3_XS | 14.3 | | | [GGUF](https://huggingface.co/mradermacher/ChatAllInOne-Yi-34B-200K-V1-GGUF/resolve/main/ChatAllInOne-Yi-34B-200K-V1.Q3_K_S.gguf) | Q3_K_S | 15.1 | | | [GGUF](https://huggingface.co/mradermacher/ChatAllInOne-Yi-34B-200K-V1-GGUF/resolve/main/ChatAllInOne-Yi-34B-200K-V1.IQ3_S.gguf) | IQ3_S | 15.1 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/ChatAllInOne-Yi-34B-200K-V1-GGUF/resolve/main/ChatAllInOne-Yi-34B-200K-V1.IQ3_M.gguf) | IQ3_M | 15.7 | | | [GGUF](https://huggingface.co/mradermacher/ChatAllInOne-Yi-34B-200K-V1-GGUF/resolve/main/ChatAllInOne-Yi-34B-200K-V1.Q3_K_M.gguf) | Q3_K_M | 16.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/ChatAllInOne-Yi-34B-200K-V1-GGUF/resolve/main/ChatAllInOne-Yi-34B-200K-V1.Q3_K_L.gguf) | Q3_K_L | 18.2 | | | [GGUF](https://huggingface.co/mradermacher/ChatAllInOne-Yi-34B-200K-V1-GGUF/resolve/main/ChatAllInOne-Yi-34B-200K-V1.IQ4_XS.gguf) | IQ4_XS | 18.7 | | | [GGUF](https://huggingface.co/mradermacher/ChatAllInOne-Yi-34B-200K-V1-GGUF/resolve/main/ChatAllInOne-Yi-34B-200K-V1.Q4_K_S.gguf) | Q4_K_S | 19.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/ChatAllInOne-Yi-34B-200K-V1-GGUF/resolve/main/ChatAllInOne-Yi-34B-200K-V1.Q4_K_M.gguf) | Q4_K_M | 20.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/ChatAllInOne-Yi-34B-200K-V1-GGUF/resolve/main/ChatAllInOne-Yi-34B-200K-V1.Q5_K_S.gguf) | Q5_K_S | 23.8 | | | [GGUF](https://huggingface.co/mradermacher/ChatAllInOne-Yi-34B-200K-V1-GGUF/resolve/main/ChatAllInOne-Yi-34B-200K-V1.Q5_K_M.gguf) | Q5_K_M | 24.4 | | | [GGUF](https://huggingface.co/mradermacher/ChatAllInOne-Yi-34B-200K-V1-GGUF/resolve/main/ChatAllInOne-Yi-34B-200K-V1.Q6_K.gguf) | Q6_K | 28.3 | very good quality | | [GGUF](https://huggingface.co/mradermacher/ChatAllInOne-Yi-34B-200K-V1-GGUF/resolve/main/ChatAllInOne-Yi-34B-200K-V1.Q8_0.gguf) | Q8_0 | 36.6 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests 
for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
unsloth/mistral-7b-v0.3-bnb-4bit
unsloth
"2024-05-22T18:24:27Z"
51,770
10
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "unsloth", "mistral-7b", "mistral-instruct", "instruct", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
text-generation
"2024-05-22T18:13:59Z"
--- language: - en license: apache-2.0 library_name: transformers tags: - unsloth - transformers - mistral - mistral-7b - mistral-instruct - instruct --- # Finetune Mistral, Gemma, Llama 2-5x faster with 70% less memory via Unsloth! We have a Google Colab Tesla T4 notebook for Mistral v3 7b here: https://colab.research.google.com/drive/1_yNCks4BTD5zOnjozppphh5GzMFaMKq_?usp=sharing For conversational ShareGPT style and using Mistral v3 Instruct: https://colab.research.google.com/drive/15F1xyn8497_dUbxZP4zWmPZ3PJx1Oymv?usp=sharing [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/u54VK8m8tk) [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/buy%20me%20a%20coffee%20button.png" width="200"/>](https://ko-fi.com/unsloth) [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) ## ✨ Finetune for Free All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face. | Unsloth supports | Free Notebooks | Performance | Memory use | |-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------| | **Gemma 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/10NbwlsRChbma1v55m8LAPYG15uQv6HLo?usp=sharing) | 2.4x faster | 58% less | | **Mistral 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less | | **Llama-2 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lBzz5KeZJKXjvivbYvmGarix9Ao6Wxe5?usp=sharing) | 2.2x faster | 43% less | | **TinyLlama** | [▶️ Start on Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing) | 3.9x faster | 74% less | | **CodeLlama 34b** A100 | [▶️ Start on Colab](https://colab.research.google.com/drive/1y7A0AxE3y8gdj4AVkl2aZX47Xu3P1wJT?usp=sharing) | 1.9x faster | 27% less | | **Mistral 7b** 1xT4 | [▶️ Start on Kaggle](https://www.kaggle.com/code/danielhanchen/kaggle-mistral-7b-unsloth-notebook) | 5x faster\* | 62% less | | **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less | - This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates. - This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr. - \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
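Below is a minimal sketch of loading this pre-quantized checkpoint with Unsloth's `FastLanguageModel`, the entry point used in the notebooks linked above; the sequence length, LoRA rank, and target modules are illustrative values, not recommendations from this card.

```python
from unsloth import FastLanguageModel

# load the 4-bit bitsandbytes checkpoint shipped in this repo
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/mistral-7b-v0.3-bnb-4bit",
    max_seq_length=2048,   # illustrative
    dtype=None,            # auto-detect float16 / bfloat16 depending on the GPU
    load_in_4bit=True,
)

# attach LoRA adapters before finetuning, as done in the linked notebooks
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
```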
facebook/wav2vec2-large-960h-lv60-self
facebook
"2022-05-23T16:13:42Z"
51,737
120
transformers
[ "transformers", "pytorch", "tf", "jax", "wav2vec2", "automatic-speech-recognition", "speech", "audio", "hf-asr-leaderboard", "en", "dataset:librispeech_asr", "arxiv:2010.11430", "arxiv:2006.11477", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2022-03-02T23:29:05Z"
--- language: en datasets: - librispeech_asr tags: - speech - audio - automatic-speech-recognition - hf-asr-leaderboard license: apache-2.0 model-index: - name: wav2vec2-large-960h-lv60 results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: LibriSpeech (clean) type: librispeech_asr config: clean split: test args: language: en metrics: - name: Test WER type: wer value: 1.9 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: LibriSpeech (other) type: librispeech_asr config: other split: test args: language: en metrics: - name: Test WER type: wer value: 3.9 --- # Wav2Vec2-Large-960h-Lv60 + Self-Training [Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) The large model pretrained and fine-tuned on 960 hours of Libri-Light and Librispeech on 16kHz sampled speech audio. Model was trained with [Self-Training objective](https://arxiv.org/abs/2010.11430). When using the model make sure that your speech input is also sampled at 16Khz. [Paper](https://arxiv.org/abs/2006.11477) Authors: Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli **Abstract** We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data. The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20. # Usage To transcribe audio files the model can be used as a standalone acoustic model as follows: ```python from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC from datasets import load_dataset import torch # load model and processor processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-large-960h-lv60-self") model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large-960h-lv60-self") # load dummy dataset and read soundfiles ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation") # tokenize input_values = processor(ds[0]["audio"]["array"], return_tensors="pt", padding="longest").input_values # retrieve logits logits = model(input_values).logits # take argmax and decode predicted_ids = torch.argmax(logits, dim=-1) transcription = processor.batch_decode(predicted_ids) ``` ## Evaluation This code snippet shows how to evaluate **facebook/wav2vec2-large-960h-lv60-self** on LibriSpeech's "clean" and "other" test data. 
```python from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import torch from jiwer import wer librispeech_eval = load_dataset("librispeech_asr", "clean", split="test") model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large-960h-lv60-self").to("cuda") processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-large-960h-lv60-self") def map_to_pred(batch): inputs = processor(batch["audio"]["array"], return_tensors="pt", padding="longest") input_values = inputs.input_values.to("cuda") attention_mask = inputs.attention_mask.to("cuda") with torch.no_grad(): logits = model(input_values, attention_mask=attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) transcription = processor.batch_decode(predicted_ids) batch["transcription"] = transcription return batch result = librispeech_eval.map(map_to_pred, remove_columns=["audio"]) print("WER:", wer(result["text"], result["transcription"])) ``` *Result (WER)*: | "clean" | "other" | |---|---| | 1.9 | 3.9 |
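For quick transcription without writing the decoding loop by hand, the `transformers` pipeline API can also be pointed at this checkpoint. The snippet below is a minimal sketch (the audio path is a placeholder); it assumes a 16 kHz mono recording, matching the sampling rate the model was trained on.

```python
from transformers import pipeline

# Minimal ASR sketch; "sample.wav" is a placeholder for your own 16 kHz mono file.
asr = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-large-960h-lv60-self")
print(asr("sample.wav")["text"])
```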
legraphista/gemma-2-9b-it-IMat-GGUF
legraphista
"2024-06-30T10:57:26Z"
51,725
2
gguf
[ "gguf", "conversational", "quantized", "GGUF", "quantization", "imat", "imatrix", "static", "16bit", "8bit", "6bit", "5bit", "4bit", "3bit", "2bit", "1bit", "text-generation", "base_model:google/gemma-2-9b-it", "license:gemma", "region:us" ]
text-generation
"2024-06-27T17:19:15Z"
--- base_model: google/gemma-2-9b-it extra_gated_button_content: Acknowledge license extra_gated_heading: Access Gemma on Hugging Face extra_gated_prompt: "To access Gemma on Hugging Face, you\u2019re required to review\ \ and agree to Google\u2019s usage license. To do this, please ensure you\u2019\ re logged in to Hugging Face and click below. Requests are processed immediately." inference: false library_name: gguf license: gemma pipeline_tag: text-generation quantized_by: legraphista tags: - conversational - quantized - GGUF - quantization - imat - imatrix - static - 16bit - 8bit - 6bit - 5bit - 4bit - 3bit - 2bit - 1bit --- # gemma-2-9b-it-IMat-GGUF _Llama.cpp imatrix quantization of google/gemma-2-9b-it_ Original Model: [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it) Original dtype: `BF16` (`bfloat16`) Quantized by: llama.cpp [b3266](https://github.com/ggerganov/llama.cpp/releases/tag/b3266) IMatrix dataset: [here](https://gist.githubusercontent.com/bartowski1182/eb213dccb3571f863da82e99418f81e8/raw/b2869d80f5c16fd7082594248e80144677736635/calibration_datav3.txt) - [Files](#files) - [IMatrix](#imatrix) - [Common Quants](#common-quants) - [All Quants](#all-quants) - [Downloading using huggingface-cli](#downloading-using-huggingface-cli) - [Inference](#inference) - [Simple chat template](#simple-chat-template) - [Llama.cpp](#llama-cpp) - [FAQ](#faq) - [Why is the IMatrix not applied everywhere?](#why-is-the-imatrix-not-applied-everywhere) - [How do I merge a split GGUF?](#how-do-i-merge-a-split-gguf) --- ## Files ### IMatrix Status: ✅ Available Link: [here](https://huggingface.co/legraphista/gemma-2-9b-it-IMat-GGUF/blob/main/imatrix.dat) ### Common Quants | Filename | Quant type | File Size | Status | Uses IMatrix | Is Split | | -------- | ---------- | --------- | ------ | ------------ | -------- | | [gemma-2-9b-it.Q8_0.gguf](https://huggingface.co/legraphista/gemma-2-9b-it-IMat-GGUF/blob/main/gemma-2-9b-it.Q8_0.gguf) | Q8_0 | 9.83GB | ✅ Available | ⚪ Static | 📦 No | [gemma-2-9b-it.Q6_K.gguf](https://huggingface.co/legraphista/gemma-2-9b-it-IMat-GGUF/blob/main/gemma-2-9b-it.Q6_K.gguf) | Q6_K | 7.59GB | ✅ Available | ⚪ Static | 📦 No | [gemma-2-9b-it.Q4_K.gguf](https://huggingface.co/legraphista/gemma-2-9b-it-IMat-GGUF/blob/main/gemma-2-9b-it.Q4_K.gguf) | Q4_K | 5.76GB | ✅ Available | 🟢 IMatrix | 📦 No | [gemma-2-9b-it.Q3_K.gguf](https://huggingface.co/legraphista/gemma-2-9b-it-IMat-GGUF/blob/main/gemma-2-9b-it.Q3_K.gguf) | Q3_K | 4.76GB | ✅ Available | 🟢 IMatrix | 📦 No | [gemma-2-9b-it.Q2_K.gguf](https://huggingface.co/legraphista/gemma-2-9b-it-IMat-GGUF/blob/main/gemma-2-9b-it.Q2_K.gguf) | Q2_K | 3.81GB | ✅ Available | 🟢 IMatrix | 📦 No ### All Quants | Filename | Quant type | File Size | Status | Uses IMatrix | Is Split | | -------- | ---------- | --------- | ------ | ------------ | -------- | | [gemma-2-9b-it.BF16.gguf](https://huggingface.co/legraphista/gemma-2-9b-it-IMat-GGUF/blob/main/gemma-2-9b-it.BF16.gguf) | BF16 | 18.49GB | ✅ Available | ⚪ Static | 📦 No | [gemma-2-9b-it.FP16.gguf](https://huggingface.co/legraphista/gemma-2-9b-it-IMat-GGUF/blob/main/gemma-2-9b-it.FP16.gguf) | F16 | 18.49GB | ✅ Available | ⚪ Static | 📦 No | [gemma-2-9b-it.Q8_0.gguf](https://huggingface.co/legraphista/gemma-2-9b-it-IMat-GGUF/blob/main/gemma-2-9b-it.Q8_0.gguf) | Q8_0 | 9.83GB | ✅ Available | ⚪ Static | 📦 No | [gemma-2-9b-it.Q6_K.gguf](https://huggingface.co/legraphista/gemma-2-9b-it-IMat-GGUF/blob/main/gemma-2-9b-it.Q6_K.gguf) | Q6_K | 7.59GB | ✅ Available | ⚪ Static | 📦 No | 
[gemma-2-9b-it.Q5_K.gguf](https://huggingface.co/legraphista/gemma-2-9b-it-IMat-GGUF/blob/main/gemma-2-9b-it.Q5_K.gguf) | Q5_K | 6.65GB | ✅ Available | ⚪ Static | 📦 No | [gemma-2-9b-it.Q5_K_S.gguf](https://huggingface.co/legraphista/gemma-2-9b-it-IMat-GGUF/blob/main/gemma-2-9b-it.Q5_K_S.gguf) | Q5_K_S | 6.48GB | ✅ Available | ⚪ Static | 📦 No | [gemma-2-9b-it.Q4_K.gguf](https://huggingface.co/legraphista/gemma-2-9b-it-IMat-GGUF/blob/main/gemma-2-9b-it.Q4_K.gguf) | Q4_K | 5.76GB | ✅ Available | 🟢 IMatrix | 📦 No | [gemma-2-9b-it.Q4_K_S.gguf](https://huggingface.co/legraphista/gemma-2-9b-it-IMat-GGUF/blob/main/gemma-2-9b-it.Q4_K_S.gguf) | Q4_K_S | 5.48GB | ✅ Available | 🟢 IMatrix | 📦 No | [gemma-2-9b-it.IQ4_NL.gguf](https://huggingface.co/legraphista/gemma-2-9b-it-IMat-GGUF/blob/main/gemma-2-9b-it.IQ4_NL.gguf) | IQ4_NL | 5.44GB | ✅ Available | 🟢 IMatrix | 📦 No | [gemma-2-9b-it.IQ4_XS.gguf](https://huggingface.co/legraphista/gemma-2-9b-it-IMat-GGUF/blob/main/gemma-2-9b-it.IQ4_XS.gguf) | IQ4_XS | 5.18GB | ✅ Available | 🟢 IMatrix | 📦 No | [gemma-2-9b-it.Q3_K.gguf](https://huggingface.co/legraphista/gemma-2-9b-it-IMat-GGUF/blob/main/gemma-2-9b-it.Q3_K.gguf) | Q3_K | 4.76GB | ✅ Available | 🟢 IMatrix | 📦 No | [gemma-2-9b-it.Q3_K_L.gguf](https://huggingface.co/legraphista/gemma-2-9b-it-IMat-GGUF/blob/main/gemma-2-9b-it.Q3_K_L.gguf) | Q3_K_L | 5.13GB | ✅ Available | 🟢 IMatrix | 📦 No | [gemma-2-9b-it.Q3_K_S.gguf](https://huggingface.co/legraphista/gemma-2-9b-it-IMat-GGUF/blob/main/gemma-2-9b-it.Q3_K_S.gguf) | Q3_K_S | 4.34GB | ✅ Available | 🟢 IMatrix | 📦 No | [gemma-2-9b-it.IQ3_M.gguf](https://huggingface.co/legraphista/gemma-2-9b-it-IMat-GGUF/blob/main/gemma-2-9b-it.IQ3_M.gguf) | IQ3_M | 4.49GB | ✅ Available | 🟢 IMatrix | 📦 No | [gemma-2-9b-it.IQ3_S.gguf](https://huggingface.co/legraphista/gemma-2-9b-it-IMat-GGUF/blob/main/gemma-2-9b-it.IQ3_S.gguf) | IQ3_S | 4.34GB | ✅ Available | 🟢 IMatrix | 📦 No | [gemma-2-9b-it.IQ3_XS.gguf](https://huggingface.co/legraphista/gemma-2-9b-it-IMat-GGUF/blob/main/gemma-2-9b-it.IQ3_XS.gguf) | IQ3_XS | 4.14GB | ✅ Available | 🟢 IMatrix | 📦 No | [gemma-2-9b-it.IQ3_XXS.gguf](https://huggingface.co/legraphista/gemma-2-9b-it-IMat-GGUF/blob/main/gemma-2-9b-it.IQ3_XXS.gguf) | IQ3_XXS | 3.80GB | ✅ Available | 🟢 IMatrix | 📦 No | [gemma-2-9b-it.Q2_K.gguf](https://huggingface.co/legraphista/gemma-2-9b-it-IMat-GGUF/blob/main/gemma-2-9b-it.Q2_K.gguf) | Q2_K | 3.81GB | ✅ Available | 🟢 IMatrix | 📦 No | [gemma-2-9b-it.Q2_K_S.gguf](https://huggingface.co/legraphista/gemma-2-9b-it-IMat-GGUF/blob/main/gemma-2-9b-it.Q2_K_S.gguf) | Q2_K_S | 3.55GB | ✅ Available | 🟢 IMatrix | 📦 No | [gemma-2-9b-it.IQ2_M.gguf](https://huggingface.co/legraphista/gemma-2-9b-it-IMat-GGUF/blob/main/gemma-2-9b-it.IQ2_M.gguf) | IQ2_M | 3.43GB | ✅ Available | 🟢 IMatrix | 📦 No | [gemma-2-9b-it.IQ2_S.gguf](https://huggingface.co/legraphista/gemma-2-9b-it-IMat-GGUF/blob/main/gemma-2-9b-it.IQ2_S.gguf) | IQ2_S | 3.21GB | ✅ Available | 🟢 IMatrix | 📦 No | [gemma-2-9b-it.IQ2_XS.gguf](https://huggingface.co/legraphista/gemma-2-9b-it-IMat-GGUF/blob/main/gemma-2-9b-it.IQ2_XS.gguf) | IQ2_XS | 3.07GB | ✅ Available | 🟢 IMatrix | 📦 No | [gemma-2-9b-it.IQ2_XXS.gguf](https://huggingface.co/legraphista/gemma-2-9b-it-IMat-GGUF/blob/main/gemma-2-9b-it.IQ2_XXS.gguf) | IQ2_XXS | 2.82GB | ✅ Available | 🟢 IMatrix | 📦 No | [gemma-2-9b-it.IQ1_M.gguf](https://huggingface.co/legraphista/gemma-2-9b-it-IMat-GGUF/blob/main/gemma-2-9b-it.IQ1_M.gguf) | IQ1_M | 2.55GB | ✅ Available | 🟢 IMatrix | 📦 No | 
[gemma-2-9b-it.IQ1_S.gguf](https://huggingface.co/legraphista/gemma-2-9b-it-IMat-GGUF/blob/main/gemma-2-9b-it.IQ1_S.gguf) | IQ1_S | 2.38GB | ✅ Available | 🟢 IMatrix | 📦 No

## Downloading using huggingface-cli

If you do not have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Download the specific file you want (a Python alternative using `huggingface_hub` is sketched at the end of this card):
```
huggingface-cli download legraphista/gemma-2-9b-it-IMat-GGUF --include "gemma-2-9b-it.Q8_0.gguf" --local-dir ./
```
If the model file is big, it has been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download legraphista/gemma-2-9b-it-IMat-GGUF --include "gemma-2-9b-it.Q8_0/*" --local-dir ./
# see FAQ for merging GGUFs
```

---

## Inference

### Simple chat template

```
<bos><start_of_turn>user
{user_prompt}<end_of_turn>
<start_of_turn>model
{assistant_response}<end_of_turn>
<start_of_turn>user
{next_user_prompt}<end_of_turn>
```

### Llama.cpp

```
llama.cpp/main -m gemma-2-9b-it.Q8_0.gguf --color -i -p "prompt here (according to the chat template)"
```

---

## FAQ

### Why is the IMatrix not applied everywhere?

According to [this investigation](https://www.reddit.com/r/LocalLLaMA/comments/1993iro/ggufs_quants_can_punch_above_their_weights_now/), it appears that lower quantizations are the only ones that benefit from the imatrix input (as per hellaswag results).

### How do I merge a split GGUF?

1. Make sure you have `gguf-split` available
   - To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases
   - Download the appropriate zip for your system from the latest release
   - Unzip the archive and you should be able to find `gguf-split`
2. Locate your GGUF chunks folder (ex: `gemma-2-9b-it.Q8_0`)
3. Run `gguf-split --merge gemma-2-9b-it.Q8_0/gemma-2-9b-it.Q8_0-00001-of-XXXXX.gguf gemma-2-9b-it.Q8_0.gguf`
   - Make sure to point `gguf-split` to the first chunk of the split.

---

Got a suggestion? Ping me [@legraphista](https://x.com/legraphista)!
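As an addendum to the download instructions above, the same single-file download can be done from Python with `huggingface_hub`. This is a minimal sketch; the Q4_K filename is just one of the quants listed in the tables, and any other listed filename works the same way.

```python
from huggingface_hub import hf_hub_download

# Download one quant file from this repo into the current directory.
path = hf_hub_download(
    repo_id="legraphista/gemma-2-9b-it-IMat-GGUF",
    filename="gemma-2-9b-it.Q4_K.gguf",
    local_dir=".",
)
print(path)
```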
SimianLuo/LCM_Dreamshaper_v7
SimianLuo
"2024-03-05T08:32:22Z"
51,427
364
diffusers
[ "diffusers", "onnx", "safetensors", "text-to-image", "en", "arxiv:2310.04378", "license:mit", "diffusers:LatentConsistencyModelPipeline", "region:us" ]
text-to-image
"2023-10-14T08:26:52Z"
---
license: mit
language:
- en
pipeline_tag: text-to-image
tags:
- text-to-image
---

# Latent Consistency Models

Official Repository of the paper: *[Latent Consistency Models](https://arxiv.org/abs/2310.04378)*.

Project Page: https://latent-consistency-models.github.io

## Try our Hugging Face demos:
[![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/SimianLuo/Latent_Consistency_Model)

## Model Descriptions:
Distilled from the [Dreamshaper v7](https://huggingface.co/Lykon/dreamshaper-7) fine-tune of [Stable-Diffusion v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) with only 4,000 training iterations (~32 A100 GPU hours).

## Generation Results:

<p align="center">
    <img src="teaser.png">
</p>

By distilling classifier-free guidance into the model's input, LCM can generate high-quality images in a very short inference time. We compare inference time at 768 x 768 resolution, CFG scale w=8, batch size 4, on an A800 GPU.

<p align="center">
    <img src="speed_fid.png">
</p>

## Usage

You can try out Latent Consistency Models directly on:
[![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/SimianLuo/Latent_Consistency_Model)

To run the model yourself, you can leverage the 🧨 Diffusers library:
1. Install the library:
```
pip install --upgrade diffusers  # make sure to use at least diffusers >= 0.22
pip install transformers accelerate
```

2. Run the model:
```py
from diffusers import DiffusionPipeline
import torch

pipe = DiffusionPipeline.from_pretrained("SimianLuo/LCM_Dreamshaper_v7")

# To save GPU memory, torch.float16 can be used, but it may compromise image quality.
pipe.to(torch_device="cuda", torch_dtype=torch.float32)

prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k"

# Can be set to 1~50 steps. LCM supports fast inference even with <= 4 steps. Recommended: 1~8 steps.
num_inference_steps = 4

images = pipe(prompt=prompt, num_inference_steps=num_inference_steps, guidance_scale=8.0, lcm_origin_steps=50, output_type="pil").images
```

For more information, please have a look at the official docs:
👉 https://huggingface.co/docs/diffusers/api/pipelines/latent_consistency_models#latent-consistency-models

## Usage (Deprecated)

1. Install the library:
```
pip install diffusers transformers accelerate
```

2. Run the model:
```py
from diffusers import DiffusionPipeline
import torch

pipe = DiffusionPipeline.from_pretrained("SimianLuo/LCM_Dreamshaper_v7", custom_pipeline="latent_consistency_txt2img", custom_revision="main", revision="fb9c5d")

# To save GPU memory, torch.float16 can be used, but it may compromise image quality.
pipe.to(torch_device="cuda", torch_dtype=torch.float32)

prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k"

# Can be set to 1~50 steps. LCM supports fast inference even with <= 4 steps. Recommended: 1~8 steps.
num_inference_steps = 4

images = pipe(prompt=prompt, num_inference_steps=num_inference_steps, guidance_scale=8.0, output_type="pil").images
```

## BibTeX

```bibtex
@misc{luo2023latent,
      title={Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference},
      author={Simian Luo and Yiqin Tan and Longbo Huang and Jian Li and Hang Zhao},
      year={2023},
      eprint={2310.04378},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```
mradermacher/Fook-Yi-34B-32K-v1a-GGUF
mradermacher
"2024-06-28T05:38:19Z"
51,403
0
transformers
[ "transformers", "gguf", "en", "base_model:BeaverAI/Fook-Yi-34B-32K-v1a", "endpoints_compatible", "region:us" ]
null
"2024-06-28T03:28:13Z"
--- base_model: BeaverAI/Fook-Yi-34B-32K-v1a language: - en library_name: transformers quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/BeaverAI/Fook-Yi-34B-32K-v1a <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Fook-Yi-34B-32K-v1a-GGUF/resolve/main/Fook-Yi-34B-32K-v1a.Q2_K.gguf) | Q2_K | 12.9 | | | [GGUF](https://huggingface.co/mradermacher/Fook-Yi-34B-32K-v1a-GGUF/resolve/main/Fook-Yi-34B-32K-v1a.IQ3_XS.gguf) | IQ3_XS | 14.3 | | | [GGUF](https://huggingface.co/mradermacher/Fook-Yi-34B-32K-v1a-GGUF/resolve/main/Fook-Yi-34B-32K-v1a.Q3_K_S.gguf) | Q3_K_S | 15.1 | | | [GGUF](https://huggingface.co/mradermacher/Fook-Yi-34B-32K-v1a-GGUF/resolve/main/Fook-Yi-34B-32K-v1a.IQ3_S.gguf) | IQ3_S | 15.1 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Fook-Yi-34B-32K-v1a-GGUF/resolve/main/Fook-Yi-34B-32K-v1a.IQ3_M.gguf) | IQ3_M | 15.7 | | | [GGUF](https://huggingface.co/mradermacher/Fook-Yi-34B-32K-v1a-GGUF/resolve/main/Fook-Yi-34B-32K-v1a.Q3_K_M.gguf) | Q3_K_M | 16.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Fook-Yi-34B-32K-v1a-GGUF/resolve/main/Fook-Yi-34B-32K-v1a.Q3_K_L.gguf) | Q3_K_L | 18.2 | | | [GGUF](https://huggingface.co/mradermacher/Fook-Yi-34B-32K-v1a-GGUF/resolve/main/Fook-Yi-34B-32K-v1a.IQ4_XS.gguf) | IQ4_XS | 18.7 | | | [GGUF](https://huggingface.co/mradermacher/Fook-Yi-34B-32K-v1a-GGUF/resolve/main/Fook-Yi-34B-32K-v1a.Q4_K_S.gguf) | Q4_K_S | 19.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Fook-Yi-34B-32K-v1a-GGUF/resolve/main/Fook-Yi-34B-32K-v1a.Q4_K_M.gguf) | Q4_K_M | 20.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Fook-Yi-34B-32K-v1a-GGUF/resolve/main/Fook-Yi-34B-32K-v1a.Q5_K_S.gguf) | Q5_K_S | 23.8 | | | [GGUF](https://huggingface.co/mradermacher/Fook-Yi-34B-32K-v1a-GGUF/resolve/main/Fook-Yi-34B-32K-v1a.Q5_K_M.gguf) | Q5_K_M | 24.4 | | | [GGUF](https://huggingface.co/mradermacher/Fook-Yi-34B-32K-v1a-GGUF/resolve/main/Fook-Yi-34B-32K-v1a.Q6_K.gguf) | Q6_K | 28.3 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Fook-Yi-34B-32K-v1a-GGUF/resolve/main/Fook-Yi-34B-32K-v1a.Q8_0.gguf) | Q8_0 | 36.6 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
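As a practical addendum to the Usage section above, here is a rough sketch using the `llama-cpp-python` bindings, which are not covered by this card; the quant filename and context size below are illustrative assumptions, not recommendations.

```python
from llama_cpp import Llama

# Load one of the quants listed above (filename is an example; pick the one you downloaded).
llm = Llama(model_path="Fook-Yi-34B-32K-v1a.Q4_K_M.gguf", n_ctx=8192)

out = llm("Write a short poem about quantization.", max_tokens=128)
print(out["choices"][0]["text"])
```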
mradermacher/Yi-1.5-34B-Chat-16K-GGUF
mradermacher
"2024-06-26T21:33:33Z"
51,306
0
transformers
[ "transformers", "gguf", "en", "base_model:01-ai/Yi-1.5-34B-Chat-16K", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-26T16:04:21Z"
--- base_model: 01-ai/Yi-1.5-34B-Chat-16K language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/01-ai/Yi-1.5-34B-Chat-16K <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Yi-1.5-34B-Chat-16K-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Yi-1.5-34B-Chat-16K-GGUF/resolve/main/Yi-1.5-34B-Chat-16K.Q2_K.gguf) | Q2_K | 12.9 | | | [GGUF](https://huggingface.co/mradermacher/Yi-1.5-34B-Chat-16K-GGUF/resolve/main/Yi-1.5-34B-Chat-16K.IQ3_XS.gguf) | IQ3_XS | 14.3 | | | [GGUF](https://huggingface.co/mradermacher/Yi-1.5-34B-Chat-16K-GGUF/resolve/main/Yi-1.5-34B-Chat-16K.Q3_K_S.gguf) | Q3_K_S | 15.1 | | | [GGUF](https://huggingface.co/mradermacher/Yi-1.5-34B-Chat-16K-GGUF/resolve/main/Yi-1.5-34B-Chat-16K.IQ3_S.gguf) | IQ3_S | 15.1 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Yi-1.5-34B-Chat-16K-GGUF/resolve/main/Yi-1.5-34B-Chat-16K.IQ3_M.gguf) | IQ3_M | 15.7 | | | [GGUF](https://huggingface.co/mradermacher/Yi-1.5-34B-Chat-16K-GGUF/resolve/main/Yi-1.5-34B-Chat-16K.Q3_K_M.gguf) | Q3_K_M | 16.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Yi-1.5-34B-Chat-16K-GGUF/resolve/main/Yi-1.5-34B-Chat-16K.Q3_K_L.gguf) | Q3_K_L | 18.2 | | | [GGUF](https://huggingface.co/mradermacher/Yi-1.5-34B-Chat-16K-GGUF/resolve/main/Yi-1.5-34B-Chat-16K.IQ4_XS.gguf) | IQ4_XS | 18.7 | | | [GGUF](https://huggingface.co/mradermacher/Yi-1.5-34B-Chat-16K-GGUF/resolve/main/Yi-1.5-34B-Chat-16K.Q4_K_S.gguf) | Q4_K_S | 19.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Yi-1.5-34B-Chat-16K-GGUF/resolve/main/Yi-1.5-34B-Chat-16K.Q4_K_M.gguf) | Q4_K_M | 20.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Yi-1.5-34B-Chat-16K-GGUF/resolve/main/Yi-1.5-34B-Chat-16K.Q5_K_S.gguf) | Q5_K_S | 23.8 | | | [GGUF](https://huggingface.co/mradermacher/Yi-1.5-34B-Chat-16K-GGUF/resolve/main/Yi-1.5-34B-Chat-16K.Q5_K_M.gguf) | Q5_K_M | 24.4 | | | [GGUF](https://huggingface.co/mradermacher/Yi-1.5-34B-Chat-16K-GGUF/resolve/main/Yi-1.5-34B-Chat-16K.Q6_K.gguf) | Q6_K | 28.3 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Yi-1.5-34B-Chat-16K-GGUF/resolve/main/Yi-1.5-34B-Chat-16K.Q8_0.gguf) | Q8_0 | 36.6 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
unsloth/mistral-7b-instruct-v0.2-bnb-4bit
unsloth
"2024-03-22T15:18:00Z"
51,285
26
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "unsloth", "mistral-7b", "mistral-instruct", "instruct", "bnb", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "bitsandbytes", "region:us" ]
text-generation
"2024-01-21T15:24:24Z"
--- language: - en license: apache-2.0 library_name: transformers tags: - unsloth - transformers - mistral - mistral-7b - mistral-instruct - instruct - bnb --- # Finetune Mistral, Gemma, Llama 2-5x faster with 70% less memory via Unsloth! We have a Google Colab Tesla T4 notebook for Mistral 7b here: https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/u54VK8m8tk) [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/buy%20me%20a%20coffee%20button.png" width="200"/>](https://ko-fi.com/unsloth) [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) ## ✨ Finetune for Free All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face. | Unsloth supports | Free Notebooks | Performance | Memory use | |-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------| | **Gemma 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/10NbwlsRChbma1v55m8LAPYG15uQv6HLo?usp=sharing) | 2.4x faster | 58% less | | **Mistral 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less | | **Llama-2 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lBzz5KeZJKXjvivbYvmGarix9Ao6Wxe5?usp=sharing) | 2.2x faster | 43% less | | **TinyLlama** | [▶️ Start on Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing) | 3.9x faster | 74% less | | **CodeLlama 34b** A100 | [▶️ Start on Colab](https://colab.research.google.com/drive/1y7A0AxE3y8gdj4AVkl2aZX47Xu3P1wJT?usp=sharing) | 1.9x faster | 27% less | | **Mistral 7b** 1xT4 | [▶️ Start on Kaggle](https://www.kaggle.com/code/danielhanchen/kaggle-mistral-7b-unsloth-notebook) | 5x faster\* | 62% less | | **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less | - This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates. - This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr. - \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
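The Colab notebooks above are the authoritative reference; purely as a quick orientation, a typical loading step for this pre-quantized 4-bit checkpoint looks roughly like the sketch below (the hyperparameters are illustrative placeholders, not tuned values).

```python
from unsloth import FastLanguageModel

# Load the pre-quantized 4-bit checkpoint (saves download time and avoids re-quantizing).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so finetuning only updates a small set of extra weights.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
    lora_dropout=0,
)
```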
basel/ATTACK-BERT
basel
"2023-08-10T18:10:31Z"
51,249
4
transformers
[ "transformers", "pytorch", "mpnet", "feature-extraction", "cybersecurity", "sentence-embedding", "sentence-similarity", "endpoints_compatible", "region:us" ]
sentence-similarity
"2023-04-14T21:07:49Z"
---
pipeline_tag: sentence-similarity
tags:
- cybersecurity
- sentence-embedding
- sentence-similarity
---

# ATT&CK BERT: a Cybersecurity Language Model

ATT&CK BERT is a cybersecurity domain-specific language model based on [sentence-transformers](https://www.SBERT.net). ATT&CK BERT maps sentences representing attack actions to a semantically meaningful embedding vector. Embedding vectors of sentences with similar meanings have a high cosine similarity.

<!--- Describe your model here -->

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

sentences = ["Attacker takes a screenshot", "Attacker captures the screen"]

model = SentenceTransformer('basel/ATTACK-BERT')
embeddings = model.encode(sentences)

from sklearn.metrics.pairwise import cosine_similarity
print(cosine_similarity([embeddings[0]], [embeddings[1]]))
```

To use ATT&CK BERT to map text to ATT&CK techniques, check out our tool SMET: https://github.com/basel-a/SMET
kwoncho/losscut_news_pre2018_2
kwoncho
"2024-06-05T04:28:30Z"
51,229
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-06-05T04:27:27Z"
Entry not found
katuni4ka/tiny-random-snowflake
katuni4ka
"2024-05-28T06:49:46Z"
51,216
0
transformers
[ "transformers", "safetensors", "arctic", "text-generation", "custom_code", "autotrain_compatible", "region:us" ]
text-generation
"2024-05-28T06:29:02Z"
Entry not found
akhooli/xlm-r-large-arabic-toxic
akhooli
"2020-12-11T21:32:20Z"
51,178
3
transformers
[ "transformers", "pytorch", "xlm-roberta", "text-classification", "ar", "en", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2022-03-02T23:29:05Z"
---
language:
- ar
- en
license: mit
---

### xlm-r-large-arabic-toxic (toxic/hate speech classifier)

Toxic (hate speech) classification (Label_0: non-toxic, Label_1: toxic) of Arabic comments, obtained by fine-tuning XLM-Roberta-Large. The model also supports zero-shot classification of other languages and works on mixed-language input (e.g. Arabic & English).

Usage and further info: see the last section in this [Colab notebook](https://lnkd.in/d3bCFyZ)
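Since the card itself does not include an inline snippet, here is a minimal, hedged sketch using the standard `transformers` pipeline (the example comment is a placeholder):

```python
from transformers import pipeline

# LABEL_0 = non-toxic, LABEL_1 = toxic (per the description above).
classifier = pipeline("text-classification", model="akhooli/xlm-r-large-arabic-toxic")
print(classifier("This is a placeholder comment to classify."))
```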
second-state/gemma-2-27b-it-GGUF
second-state
"2024-07-01T13:20:56Z"
50,974
1
transformers
[ "transformers", "gguf", "gemma2", "text-generation", "conversational", "base_model:google/gemma-2-27b-it", "license:gemma", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-28T02:54:46Z"
--- base_model: google/gemma-2-27b-it inference: false license: gemma library_name: transformers pipeline_tag: text-generation model_creator: Google model_name: gemma-2-27b-it quantized_by: Second State Inc. tags: - conversational --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://github.com/LlamaEdge/LlamaEdge/raw/dev/assets/logo.svg" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Gemma-2-27b-it-GGUF ## Original Model [google/gemma-2-27b-it](https://huggingface.co/google/gemma-2-27b-it) ## Run with LlamaEdge - LlamaEdge version: [v0.12.1](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.12.1) and above - Prompt template - Prompt type: `gemma-instruct` - Prompt string ```text <bos><start_of_turn>user {user_message}<end_of_turn> <start_of_turn>model {model_message}<end_of_turn>model ``` - Context size: `8192` - Run as LlamaEdge service ```bash wasmedge --dir .:. --nn-preload default:GGML:AUTO:gemma-2-27b-it-Q5_K_M.gguf \ llama-api-server.wasm \ --prompt-template gemma-instruct \ --ctx-size 8192 \ --model-name gemma-2-27b ``` - Run as LlamaEdge command app ```bash wasmedge --dir .:. \ --nn-preload default:GGML:AUTO:gemma-2-27b-it-Q5_K_M.gguf \ llama-chat.wasm \ --prompt-template gemma-instruct \ --ctx-size 8192 ``` ## Quantized GGUF Models | Name | Quant method | Bits | Size | Use case | | ---- | ---- | ---- | ---- | ----- | | [gemma-2-27b-it-Q2_K.gguf](https://huggingface.co/second-state/gemma-2-27b-it-GGUF/blob/main/gemma-2-27b-it-Q2_K.gguf) | Q2_K | 2 | 10.4 GB| smallest, significant quality loss - not recommended for most purposes | | [gemma-2-27b-it-Q3_K_L.gguf](https://huggingface.co/second-state/gemma-2-27b-it-GGUF/blob/main/gemma-2-27b-it-Q3_K_L.gguf) | Q3_K_L | 3 | 14.5 GB| small, substantial quality loss | | [gemma-2-27b-it-Q3_K_M.gguf](https://huggingface.co/second-state/gemma-2-27b-it-GGUF/blob/main/gemma-2-27b-it-Q3_K_M.gguf) | Q3_K_M | 3 | 13.4 GB| very small, high quality loss | | [gemma-2-27b-it-Q3_K_S.gguf](https://huggingface.co/second-state/gemma-2-27b-it-GGUF/blob/main/gemma-2-27b-it-Q3_K_S.gguf) | Q3_K_S | 3 | 12.2 GB| very small, high quality loss | | [gemma-2-27b-it-Q4_0.gguf](https://huggingface.co/second-state/gemma-2-27b-it-GGUF/blob/main/gemma-2-27b-it-Q4_0.gguf) | Q4_0 | 4 | 15.6 GB| legacy; small, very high quality loss - prefer using Q3_K_M | | [gemma-2-27b-it-Q4_K_M.gguf](https://huggingface.co/second-state/gemma-2-27b-it-GGUF/blob/main/gemma-2-27b-it-Q4_K_M.gguf) | Q4_K_M | 4 | 16.6 GB| medium, balanced quality - recommended | | [gemma-2-27b-it-Q4_K_S.gguf](https://huggingface.co/second-state/gemma-2-27b-it-GGUF/blob/main/gemma-2-27b-it-Q4_K_S.gguf) | Q4_K_S | 4 | 15.7 GB| small, greater quality loss | | [gemma-2-27b-it-Q5_0.gguf](https://huggingface.co/second-state/gemma-2-27b-it-GGUF/blob/main/gemma-2-27b-it-Q5_0.gguf) | Q5_0 | 5 | 18.9 GB| legacy; medium, balanced quality - prefer using Q4_K_M | | [gemma-2-27b-it-Q5_K_M.gguf](https://huggingface.co/second-state/gemma-2-27b-it-GGUF/blob/main/gemma-2-27b-it-Q5_K_M.gguf) | Q5_K_M | 5 | 19.4 GB| large, very low quality loss - recommended | | [gemma-2-27b-it-Q5_K_S.gguf](https://huggingface.co/second-state/gemma-2-27b-it-GGUF/blob/main/gemma-2-27b-it-Q5_K_S.gguf) | Q5_K_S | 5 | 18.9 GB| large, low quality loss - recommended | | 
[gemma-2-27b-it-Q6_K.gguf](https://huggingface.co/second-state/gemma-2-27b-it-GGUF/blob/main/gemma-2-27b-it-Q6_K.gguf) | Q6_K | 6 | 22.3 GB| very large, extremely low quality loss | | [gemma-2-27b-it-Q8_0.gguf](https://huggingface.co/second-state/gemma-2-27b-it-GGUF/blob/main/gemma-2-27b-it-Q8_0.gguf) | Q8_0 | 8 | 28.9 GB| very large, extremely low quality loss - not recommended | | [gemma-2-27b-it-f16-00001-of-00002.gguf](https://huggingface.co/second-state/gemma-2-27b-it-GGUF/blob/main/gemma-2-27b-it-f16-00001-of-00002.gguf) | f16 | 16 | 29.9 GB| | | [gemma-2-27b-it-f16-00002-of-00002.gguf](https://huggingface.co/second-state/gemma-2-27b-it-GGUF/blob/main/gemma-2-27b-it-f16-00002-of-00002.gguf) | f16 | 16 | 24.6 GB| | *Quantized with llama.cpp b3259*
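Once the LlamaEdge API server shown above is running, it exposes an OpenAI-compatible HTTP endpoint. The sketch below is a rough client-side example; the base URL and port assume the server's defaults, and the model name matches the `--model-name` flag used in the command above.

```python
from openai import OpenAI

# Assumes llama-api-server.wasm is listening on its default local address.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="gemma-2-27b",
    messages=[{"role": "user", "content": "Give me one fun fact about llamas."}],
)
print(resp.choices[0].message.content)
```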
upskyy/kf-deberta-multitask
upskyy
"2024-01-15T01:32:03Z"
50,901
13
sentence-transformers
[ "sentence-transformers", "pytorch", "safetensors", "deberta-v2", "feature-extraction", "sentence-similarity", "transformers", "ko", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
"2024-01-14T14:46:23Z"
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
language: ko
---

# kf-deberta-multitask

This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.

You can check the training recipes on [GitHub](https://github.com/upskyy/kf-deberta-multitask).

<!--- Describe your model here -->

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

sentences = ["안녕하세요?", "한국어 문장 임베딩을 위한 버트 모델입니다."]

model = SentenceTransformer("upskyy/kf-deberta-multitask")
embeddings = model.encode(sentences)
print(embeddings)
```

## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch


# Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)


# Sentences we want sentence embeddings for
sentences = ["안녕하세요?", "한국어 문장 임베딩을 위한 버트 모델입니다."]

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained("upskyy/kf-deberta-multitask")
model = AutoModel.from_pretrained("upskyy/kf-deberta-multitask")

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```

## Evaluation Results

<!--- Describe how your model was evaluated -->

The results below were obtained by multi-task training on the KorSTS and KorNLI training datasets and then evaluating on the KorSTS evaluation dataset.
- Cosine Pearson: 85.75 - Cosine Spearman: 86.25 - Manhattan Pearson: 84.80 - Manhattan Spearman: 85.27 - Euclidean Pearson: 84.79 - Euclidean Spearman: 85.25 - Dot Pearson: 82.93 - Dot Spearman: 82.86 <br> |model|cosine_pearson|cosine_spearman|euclidean_pearson|euclidean_spearman|manhattan_pearson|manhattan_spearman|dot_pearson|dot_spearman| |:-------------------------|-----------------:|------------------:|--------------------:|---------------------:|--------------------:|---------------------:|--------------:|---------------:| |[kf-deberta-multitask](https://huggingface.co/upskyy/kf-deberta-multitask)|**85.75**|**86.25**|**84.79**|**85.25**|**84.80**|**85.27**|**82.93**|**82.86**| |[ko-sroberta-multitask](https://huggingface.co/jhgan/ko-sroberta-multitask)|84.77|85.6|83.71|84.40|83.70|84.38|82.42|82.33| |[ko-sbert-multitask](https://huggingface.co/jhgan/ko-sbert-multitask)|84.13|84.71|82.42|82.66|82.41|82.69|80.05|79.69| |[ko-sroberta-base-nli](https://huggingface.co/jhgan/ko-sroberta-nli)|82.83|83.85|82.87|83.29|82.88|83.28|80.34|79.69| |[ko-sbert-nli](https://huggingface.co/jhgan/ko-sbert-multitask)|82.24|83.16|82.19|82.31|82.18|82.3|79.3|78.78| |[ko-sroberta-sts](https://huggingface.co/jhgan/ko-sroberta-sts)|81.84|81.82|81.15|81.25|81.14|81.25|79.09|78.54| |[ko-sbert-sts](https://huggingface.co/jhgan/ko-sbert-sts)|81.55|81.23|79.94|79.79|79.9|79.75|76.02|75.31| <br> ## Training The model was trained with the parameters: **DataLoader**: `sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 4442 with parameters: ``` {'batch_size': 128} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 719 with parameters: ``` {'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 10, "evaluation_steps": 1000, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 719, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DebertaV2Model (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information --> ```bibtex @proceedings{jeon-etal-2023-kfdeberta, title = {KF-DeBERTa: Financial Domain-specific Pre-trained Language Model}, author = {Eunkwang Jeon, Jungdae Kim, Minsang Song, and Joohyun Ryu}, booktitle = {Proceedings of the 35th Annual Conference on Human and Cognitive Language Technology}, moth = {oct}, year = {2023}, publisher = {Korean Institute of Information Scientists and Engineers}, url = {http://www.hclt.kr/symp/?lnb=conference}, pages = {143--148}, } ``` ```bibtex @article{ham2020kornli, title={KorNLI and KorSTS: New Benchmark Datasets for 
Korean Natural Language Understanding}, author={Ham, Jiyeon and Choe, Yo Joong and Park, Kyubyong and Choi, Ilji and Soh, Hyungjoon}, journal={arXiv preprint arXiv:2004.03289}, year={2020} } ```
vinai/phobert-base
vinai
"2022-10-22T08:56:25Z"
50,890
40
transformers
[ "transformers", "pytorch", "tf", "jax", "roberta", "fill-mask", "arxiv:2003.00744", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2022-03-02T23:29:05Z"
# <a name="introduction"></a> PhoBERT: Pre-trained language models for Vietnamese Pre-trained PhoBERT models are the state-of-the-art language models for Vietnamese ([Pho](https://en.wikipedia.org/wiki/Pho), i.e. "Phở", is a popular food in Vietnam): - Two PhoBERT versions of "base" and "large" are the first public large-scale monolingual language models pre-trained for Vietnamese. PhoBERT pre-training approach is based on [RoBERTa](https://github.com/pytorch/fairseq/blob/master/examples/roberta/README.md) which optimizes the [BERT](https://github.com/google-research/bert) pre-training procedure for more robust performance. - PhoBERT outperforms previous monolingual and multilingual approaches, obtaining new state-of-the-art performances on four downstream Vietnamese NLP tasks of Part-of-speech tagging, Dependency parsing, Named-entity recognition and Natural language inference. The general architecture and experimental results of PhoBERT can be found in our EMNLP-2020 Findings [paper](https://arxiv.org/abs/2003.00744): @article{phobert, title = {{PhoBERT: Pre-trained language models for Vietnamese}}, author = {Dat Quoc Nguyen and Anh Tuan Nguyen}, journal = {Findings of EMNLP}, year = {2020} } **Please CITE** our paper when PhoBERT is used to help produce published results or is incorporated into other software. For further information or requests, please go to [PhoBERT's homepage](https://github.com/VinAIResearch/PhoBERT)!
aari1995/German_Semantic_STS_V2
aari1995
"2024-06-28T09:25:07Z"
50,876
39
sentence-transformers
[ "sentence-transformers", "pytorch", "safetensors", "bert", "feature-extraction", "gBERT-large", "sentence-similarity", "transformers", "RAG", "retrieval augmented generation", "STS", "MTEB", "mteb", "de", "dataset:stsb_multi_mt", "model-index", "autotrain_compatible", "endpoints_compatible", "text-embeddings-inference", "region:us" ]
sentence-similarity
"2022-11-17T09:57:45Z"
--- pipeline_tag: sentence-similarity language: - de datasets: - stsb_multi_mt tags: - gBERT-large - sentence-transformers - feature-extraction - sentence-similarity - transformers - RAG - retrieval augmented generation - STS - MTEB - mteb model-index: - name: German_Semantic_STS_V2 results: - dataset: config: de name: MTEB AmazonCounterfactualClassification revision: e8379541af4e31359cca9fbcf4b00f2671dba205 split: test type: mteb/amazon_counterfactual metrics: - type: accuracy value: 67.00214132762312 task: type: Classification - dataset: config: de name: MTEB AmazonCounterfactualClassification revision: e8379541af4e31359cca9fbcf4b00f2671dba205 split: validation type: mteb/amazon_counterfactual metrics: - type: accuracy value: 68.43347639484978 task: type: Classification - dataset: config: de name: MTEB AmazonReviewsClassification revision: 1399c76144fd37290681b995c656ef9b2e06e26d split: test type: mteb/amazon_reviews_multi metrics: - type: accuracy value: 39.092 task: type: Classification - dataset: config: de name: MTEB AmazonReviewsClassification revision: 1399c76144fd37290681b995c656ef9b2e06e26d split: validation type: mteb/amazon_reviews_multi metrics: - type: accuracy value: 39.146000000000003 task: type: Classification - dataset: config: default name: MTEB BlurbsClusteringP2P revision: a2dd5b02a77de3466a3eaa98ae586b5610314496 split: test type: slvnwhrl/blurbs-clustering-p2p metrics: - type: v_measure value: 38.680981669842135 task: type: Clustering - dataset: config: default name: MTEB BlurbsClusteringS2S revision: 22793b6a6465bf00120ad525e38c51210858132c split: test type: slvnwhrl/blurbs-clustering-s2s metrics: - type: v_measure value: 17.624489937027504 task: type: Clustering - dataset: config: default name: MTEB GermanDPR revision: 5129d02422a66be600ac89cd3e8531b4f97d347d split: test type: deepset/germandpr metrics: - type: ndcg_at_10 value: 72.921 task: type: Retrieval - dataset: config: default name: MTEB GermanQuAD-Retrieval revision: f5c87ae5a2e7a5106606314eef45255f03151bb3 split: test type: mteb/germanquad-retrieval metrics: - type: mrr_at_5 value: 85.316 task: type: Retrieval - dataset: config: default name: MTEB GermanSTSBenchmark revision: e36907544d44c3a247898ed81540310442329e20 split: test type: jinaai/german-STSbenchmark metrics: - type: cos_sim_spearman value: 84.67696933608695 task: type: STS - dataset: config: default name: MTEB GermanSTSBenchmark revision: e36907544d44c3a247898ed81540310442329e20 split: validation type: jinaai/german-STSbenchmark metrics: - type: cos_sim_spearman value: 88.048957974805 task: type: STS - dataset: config: de name: MTEB MassiveIntentClassification revision: 4672e20407010da34463acc759c162ca9734bca6 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 66.25084061869536 task: type: Classification - dataset: config: de name: MTEB MassiveIntentClassification revision: 4672e20407010da34463acc759c162ca9734bca6 split: validation type: mteb/amazon_massive_intent metrics: - type: accuracy value: 66.44859813084113 task: type: Classification - dataset: config: de name: MTEB MassiveScenarioClassification revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 split: test type: mteb/amazon_massive_scenario metrics: - type: accuracy value: 72.51176866173503 task: type: Classification - dataset: config: de name: MTEB MassiveScenarioClassification revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 split: validation type: mteb/amazon_massive_scenario metrics: - type: accuracy value: 72.02164289227742 task: type: Classification - 
dataset: config: de name: MTEB MTOPDomainClassification revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf split: test type: mteb/mtop_domain metrics: - type: accuracy value: 89.00253592561285 task: type: Classification - dataset: config: de name: MTEB MTOPDomainClassification revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf split: validation type: mteb/mtop_domain metrics: - type: accuracy value: 87.70798898071626 task: type: Classification - dataset: config: de name: MTEB MTOPIntentClassification revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba split: test type: mteb/mtop_intent metrics: - type: accuracy value: 70.06198929275853 task: type: Classification - dataset: config: de name: MTEB MTOPIntentClassification revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba split: validation type: mteb/mtop_intent metrics: - type: accuracy value: 68.6060606060606 task: type: Classification - dataset: config: de name: MTEB PawsX revision: 8a04d940a42cd40658986fdd8e3da561533a3646 split: test type: google-research-datasets/paws-x metrics: - type: ap value: 57.47670853851811 task: type: PairClassification - dataset: config: de name: MTEB PawsX revision: 8a04d940a42cd40658986fdd8e3da561533a3646 split: validation type: google-research-datasets/paws-x metrics: - type: ap value: 52.85587710877178 task: type: PairClassification - dataset: config: de name: MTEB STS22 revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 split: test type: mteb/sts22-crosslingual-sts metrics: - type: cos_sim_spearman value: 50.63839763951755 task: type: STS - dataset: config: default name: MTEB TenKGnadClusteringP2P revision: 5c59e41555244b7e45c9a6be2d720ab4bafae558 split: test type: slvnwhrl/tenkgnad-clustering-p2p metrics: - type: v_measure value: 37.996685796529817 task: type: Clustering - dataset: config: default name: MTEB TenKGnadClusteringS2S revision: 6cddbe003f12b9b140aec477b583ac4191f01786 split: test type: slvnwhrl/tenkgnad-clustering-s2s metrics: - type: v_measure value: 23.71145428041516 task: type: Clustering - dataset: config: default name: MTEB FalseFriendsGermanEnglish revision: 15d6c030d3336cbb09de97b2cefc46db93262d40 split: test type: aari1995/false_friends_de_en_mteb metrics: - type: ap value: 71.22096746794873 task: type: PairClassification - dataset: config: default name: MTEB GermanSTSBenchmark revision: e36907544d44c3a247898ed81540310442329e20 split: test type: jinaai/german-STSbenchmark metrics: - type: cos_sim_spearman value: 84.67698604065061 task: type: STS - dataset: config: default name: MTEB GermanSTSBenchmark revision: e36907544d44c3a247898ed81540310442329e20 split: validation type: jinaai/german-STSbenchmark metrics: - type: cos_sim_spearman value: 88.048957974805 task: type: STS --- # German_Semantic_STS_V2 **Note:** Check out my new, updated models: [German_Semantic_V3](https://huggingface.co/aari1995/German_Semantic_V3) and [V3b](https://huggingface.co/aari1995/German_Semantic_V3b)! This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search. Special thanks to [deepset](https://huggingface.co/deepset/) for providing the model gBERT-large and also to [Philip May](https://huggingface.co/philipMay) for the Translation of the dataset and chats about the topic. 
Model score after fine-tuning scores best, compared to these models: | Model Name | Spearman | |---------------------------------------------------------------|-------------------| | xlm-r-distilroberta-base-paraphrase-v1 | 0.8079 | | [xlm-r-100langs-bert-base-nli-stsb-mean-tokens](https://huggingface.co/sentence-transformers/xlm-r-100langs-bert-base-nli-stsb-mean-tokens) | 0.7877 | | xlm-r-bert-base-nli-stsb-mean-tokens | 0.7877 | | [roberta-large-nli-stsb-mean-tokens](https://huggingface.co/sentence-transformers/roberta-large-nli-stsb-mean-tokens) | 0.6371 | | [T-Systems-onsite/<br/>german-roberta-sentence-transformer-v2](https://huggingface.co/T-Systems-onsite/german-roberta-sentence-transformer-v2) | 0.8529 | | [paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2) | 0.8355 | | [T-Systems-onsite/<br/>cross-en-de-roberta-sentence-transformer](https://huggingface.co/T-Systems-onsite/<br/>cross-en-de-roberta-sentence-transformer) | 0.8550 | | **aari1995/German_Semantic_STS_V2** | **0.8626** | <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('aari1995/German_Semantic_STS_V2') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('aari1995/German_Semantic_STS_V2') model = AutoModel.from_pretrained('aari1995/German_Semantic_STS_V2') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 1438 with parameters: ``` {'batch_size': 4, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.ContrastiveLoss.ContrastiveLoss` with parameters: ``` {'distance_metric': 'SiameseDistanceMetric.COSINE_DISTANCE', 'margin': 0.5, 'size_average': True} ``` Parameters of the fit()-Method: ``` { "epochs": 4, "evaluation_steps": 500, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 5e-06 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 576, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information --> The base model is trained by deepset. The dataset was published / translated by Philip May. The model was fine-tuned by Aaron Chibb.
cross-encoder/stsb-roberta-base
cross-encoder
"2021-08-05T08:41:58Z"
50,798
3
transformers
[ "transformers", "pytorch", "jax", "roberta", "text-classification", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2022-03-02T23:29:05Z"
---
license: apache-2.0
---

# Cross-Encoder for Semantic Textual Similarity

This model was trained using the [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class.

## Training Data

This model was trained on the [STS benchmark dataset](http://ixa2.si.ehu.eus/stswiki/index.php/STSbenchmark). The model predicts a score between 0 and 1 for the semantic similarity of two sentences.

## Usage and Performance

Pre-trained models can be used like this:

```
from sentence_transformers import CrossEncoder
model = CrossEncoder('model_name')
scores = model.predict([('Sentence 1', 'Sentence 2'), ('Sentence 3', 'Sentence 4')])
```

The model will predict scores for the pairs `('Sentence 1', 'Sentence 2')` and `('Sentence 3', 'Sentence 4')`.

You can also use this model without sentence-transformers, using only the Transformers ``AutoModel`` class, as shown in the sketch below.
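A minimal sketch of that plain-Transformers route (the sentence pairs here are illustrative):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("cross-encoder/stsb-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("cross-encoder/stsb-roberta-base")
model.eval()

# Each pair of sentences is scored jointly by the cross-encoder.
features = tokenizer(
    ["A man is eating food.", "A child is playing piano."],
    ["A man is eating a meal.", "Someone is playing an instrument."],
    padding=True, truncation=True, return_tensors="pt",
)
with torch.no_grad():
    scores = model(**features).logits.squeeze(-1)  # similarity scores, roughly in [0, 1]
print(scores)
```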
RichardErkhov/meta-llama_-_Meta-Llama-3-70B-Instruct-gguf
RichardErkhov
"2024-06-23T11:46:39Z"
50,787
0
null
[ "gguf", "region:us" ]
null
"2024-06-22T06:04:55Z"
Entry not found
CreativeLang/metaphor_detection_roberta_seq
CreativeLang
"2023-09-25T22:08:17Z"
50,699
2
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "en", "dataset:CreativeLang/vua20_metaphor", "license:cc-by-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2023-07-08T16:37:53Z"
---
license: cc-by-2.0
datasets:
- CreativeLang/vua20_metaphor
language:
- en
---

# Metaphor_Detection_Roberta_Seq

## Description

- **Paper:** [FrameBERT: Conceptual Metaphor Detection with Frame Embedding Learning](https://aclanthology.org/2023.eacl-main.114.pdf)

## Model Summary

Creative Language Toolkit (CLTK) Metadata
- CL Type: Metaphor
- Task Type: detection
- Size: roberta-base (500MB)
- Created time: 2022

This model is an easy-to-use metaphor detection baseline realised with `roberta-base` fine-tuned on the [CreativeLang/vua20_metaphor](https://huggingface.co/datasets/CreativeLang/vua20_metaphor) dataset.

To use this model, please use the `inference.py` in the [FrameBERT repo](https://github.com/liyucheng09/MetaphorFrame). Just run:

```
python inference.py CreativeLang/metaphor_detection_roberta_seq
```

Check out `inference.py` to learn how to apply the model to your own data (a rough pipeline-based alternative is sketched at the end of this card).

For the details of this model and the dataset used, we refer you to the release [paper](https://aclanthology.org/2023.eacl-main.114.pdf).

## Metrics

| Metric                  | Value    |
|-------------------------|----------|
| eval_loss               | 0.2656   |
| eval_accuracy_score     | 0.9142   |
| eval_precision          | 0.9142   |
| eval_recall             | 0.9142   |
| eval_f1                 | 0.9142   |
| eval_f1_macro           | 0.7315   |
| eval_runtime            | 8.9802   |
| eval_samples_per_second | 411.7960 |
| eval_steps_per_second   | 51.5580  |
| epoch                   | 3.0000   |

### Citation Information

If you find this dataset helpful, please cite:

```
@article{Li2023FrameBERTCM,
  title={FrameBERT: Conceptual Metaphor Detection with Frame Embedding Learning},
  author={Yucheng Li and Shunyu Wang and Chenghua Lin and Frank Guerin and Lo{\"i}c Barrault},
  journal={ArXiv},
  year={2023},
  volume={abs/2302.04834}
}
```

### Contributions

If you have any queries, please open an issue or direct your queries to [mail](mailto:yucheng.li@surrey.ac.uk).
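As mentioned above, the dedicated `inference.py` remains the recommended entry point; purely as an illustration, the checkpoint can also be loaded as a standard token-classification model. This is a hedged sketch: label names come from the checkpoint's own config and may differ from the paper's post-processing.

```python
from transformers import pipeline

# Treats the model as a vanilla token classifier; for the paper's exact setup use inference.py.
tagger = pipeline("token-classification", model="CreativeLang/metaphor_detection_roberta_seq")
print(tagger("He shot down all of my arguments."))
```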
internlm/internlm2-20b
internlm
"2024-07-02T12:27:33Z"
50,690
51
transformers
[ "transformers", "pytorch", "internlm2", "text-generation", "custom_code", "arxiv:2403.17297", "license:other", "autotrain_compatible", "region:us" ]
text-generation
"2024-01-12T06:19:12Z"
---
pipeline_tag: text-generation
license: other
---

# InternLM

<div align="center">

<img src="https://github.com/InternLM/InternLM/assets/22529082/b9788105-8892-4398-8b47-b513a292378e" width="200"/>
  <div>&nbsp;</div>
  <div align="center">
    <b><font size="5">InternLM</font></b>
    <sup>
      <a href="https://internlm.intern-ai.org.cn/">
        <i><font size="4">HOT</font></i>
      </a>
    </sup>
    <div>&nbsp;</div>
  </div>

[![evaluation](https://github.com/InternLM/InternLM/assets/22529082/f80a2a58-5ddf-471a-8da4-32ab65c8fd3b)](https://github.com/internLM/OpenCompass/)

[💻Github Repo](https://github.com/InternLM/InternLM) • [🤔Reporting Issues](https://github.com/InternLM/InternLM/issues/new) • [📜Technical Report](https://arxiv.org/abs/2403.17297)

</div>

## Introduction

The second generation of the InternLM model, InternLM2, includes models at two scales: 7B and 20B. For the convenience of users and researchers, we have open-sourced four versions of the model at each scale:

- internlm2-base: A high-quality and highly adaptable model base, serving as an excellent starting point for deep domain adaptation.
- internlm2 (**recommended**): Built upon internlm2-base, this version has been further pretrained on domain-specific corpora. It shows outstanding performance in evaluations while maintaining robust general language abilities, making it our recommended choice for most applications.
- internlm2-chat-sft: Based on the base model, it undergoes supervised human alignment training.
- internlm2-chat (**recommended**): Optimized for conversational interaction on top of internlm2-chat-sft through RLHF, it excels in instruction following, empathetic chat, and tool invocation.

The base model of InternLM2 has the following technical features:

- Effective support for ultra-long contexts of up to 200,000 characters: The model achieves near-perfect "needle in a haystack" retrieval over 200,000-character inputs and leads open-source models on long-text tasks such as LongBench and L-Eval.
- Comprehensive performance enhancement: Compared to the previous generation, it shows significant improvements in reasoning, mathematics, coding, and other capabilities.

## InternLM2-20B

### Performance Evaluation

We have evaluated InternLM2 on several important benchmarks using the open-source evaluation tool [OpenCompass](https://github.com/open-compass/opencompass). Some of the evaluation results are shown in the table below. You are welcome to visit the [OpenCompass Leaderboard](https://rank.opencompass.org.cn/leaderboard-llm) for more evaluation results.

| Dataset\Models | InternLM2-7B | InternLM2-Chat-7B | InternLM2-20B | InternLM2-Chat-20B | ChatGPT | GPT-4 |
| --- | --- | --- | --- | --- | --- | --- |
| MMLU | 65.8 | 63.7 | 67.7 | 66.5 | 69.1 | 83.0 |
| AGIEval | 49.9 | 47.2 | 53.0 | 50.3 | 39.9 | 55.1 |
| BBH | 65.0 | 61.2 | 72.1 | 68.3 | 70.1 | 86.7 |
| GSM8K | 70.8 | 70.7 | 76.1 | 79.6 | 78.2 | 91.4 |
| MATH | 20.2 | 23.0 | 25.5 | 31.9 | 28.0 | 45.8 |
| HumanEval | 43.3 | 59.8 | 48.8 | 67.1 | 73.2 | 74.4 |
| MBPP(Sanitized) | 51.8 | 51.4 | 63.0 | 65.8 | 78.9 | 79.0 |

- The evaluation results were obtained from [OpenCompass](https://github.com/open-compass/opencompass), and the evaluation configuration can be found in the configuration files provided by [OpenCompass](https://github.com/open-compass/opencompass).
- The evaluation data may show numerical differences due to the version iteration of [OpenCompass](https://github.com/open-compass/opencompass), so please refer to the latest evaluation results of [OpenCompass](https://github.com/open-compass/opencompass).

**Limitations:** Although we have made efforts to ensure the safety of the model during the training process and to encourage the model to generate text that complies with ethical and legal requirements, the model may still produce unexpected outputs due to its size and probabilistic generation paradigm. For example, the generated responses may contain biases, discrimination, or other harmful content. Please do not propagate such content. We are not responsible for any consequences resulting from the dissemination of harmful information.

### Import from Transformers

To load the InternLM2-20B model using Transformers, use the following code:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("internlm/internlm2-20b", trust_remote_code=True)
# Set `torch_dtype=torch.float16` to load the model in float16; otherwise it will be loaded as float32 and might cause an OOM error.
model = AutoModelForCausalLM.from_pretrained("internlm/internlm2-20b", torch_dtype=torch.float16, trust_remote_code=True).cuda()
model = model.eval()
inputs = tokenizer(["A beautiful flower"], return_tensors="pt")
for k, v in inputs.items():
    inputs[k] = v.cuda()
gen_kwargs = {"max_length": 128, "top_p": 0.8, "temperature": 0.8, "do_sample": True, "repetition_penalty": 1.0}
output = model.generate(**inputs, **gen_kwargs)
output = tokenizer.decode(output[0].tolist(), skip_special_tokens=True)
print(output)
# A beautiful flower with a long history of use in Ayurveda and traditional Chinese medicine. Known for its ability to help the body adapt to stress, it is a calming and soothing herb. It is used for its ability to help promote healthy sleep patterns, calm the nervous system and to help the body adapt to stress. It is also used for its ability to help the body deal with the symptoms of anxiety and depression. It is also used for its ability to help the body adapt to stress. It is also used for its ability to help the body adapt to stress. It is also used for its ability to help the
```

## Open Source License

The code is licensed under Apache-2.0, while model weights are fully open for academic research and also allow **free** commercial usage. To apply for a commercial license, please fill in the [application form (English)](https://wj.qq.com/s2/12727483/5dba/)/[申请表(中文)](https://wj.qq.com/s2/12725412/f7c1/). For other questions or collaborations, please contact <internlm@pjlab.org.cn>.
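For GPUs that cannot hold the 20B weights in float16 (roughly 40 GB for the weights alone), a quantized load is a common workaround. The sketch below uses the bitsandbytes 4-bit path in Transformers; the quantization settings and generation parameters are illustrative assumptions, not an official InternLM recipe.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Hypothetical memory-constrained setup: load InternLM2-20B with 4-bit NF4 quantization.
# Requires the `bitsandbytes` and `accelerate` packages and a CUDA GPU.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained("internlm/internlm2-20b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "internlm/internlm2-20b",
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
).eval()

inputs = tokenizer(["A beautiful flower"], return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.8, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Quantization trades some output quality for memory; for benchmark-faithful results, prefer the float16 setup shown above.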
## Citation

```
@misc{cai2024internlm2,
      title={InternLM2 Technical Report},
      author={Zheng Cai and Maosong Cao and Haojiong Chen and Kai Chen and Keyu Chen and Xin Chen and Xun Chen and Zehui Chen and Zhi Chen and Pei Chu and Xiaoyi Dong and Haodong Duan and Qi Fan and Zhaoye Fei and Yang Gao and Jiaye Ge and Chenya Gu and Yuzhe Gu and Tao Gui and Aijia Guo and Qipeng Guo and Conghui He and Yingfan Hu and Ting Huang and Tao Jiang and Penglong Jiao and Zhenjiang Jin and Zhikai Lei and Jiaxing Li and Jingwen Li and Linyang Li and Shuaibin Li and Wei Li and Yining Li and Hongwei Liu and Jiangning Liu and Jiawei Hong and Kaiwen Liu and Kuikun Liu and Xiaoran Liu and Chengqi Lv and Haijun Lv and Kai Lv and Li Ma and Runyuan Ma and Zerun Ma and Wenchang Ning and Linke Ouyang and Jiantao Qiu and Yuan Qu and Fukai Shang and Yunfan Shao and Demin Song and Zifan Song and Zhihao Sui and Peng Sun and Yu Sun and Huanze Tang and Bin Wang and Guoteng Wang and Jiaqi Wang and Jiayu Wang and Rui Wang and Yudong Wang and Ziyi Wang and Xingjian Wei and Qizhen Weng and Fan Wu and Yingtong Xiong and Chao Xu and Ruiliang Xu and Hang Yan and Yirong Yan and Xiaogui Yang and Haochen Ye and Huaiyuan Ying and Jia Yu and Jing Yu and Yuhang Zang and Chuyu Zhang and Li Zhang and Pan Zhang and Peng Zhang and Ruijie Zhang and Shuo Zhang and Songyang Zhang and Wenjian Zhang and Wenwei Zhang and Xingcheng Zhang and Xinyue Zhang and Hui Zhao and Qian Zhao and Xiaomeng Zhao and Fengzhe Zhou and Zaida Zhou and Jingming Zhuo and Yicheng Zou and Xipeng Qiu and Yu Qiao and Dahua Lin},
      year={2024},
      eprint={2403.17297},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

## Introduction

InternLM2, the second generation of the InternLM model, includes models at two scales: 7B and 20B. For the convenience of users and researchers, we have open-sourced four versions of the model at each scale:

- internlm2-base: A high-quality, highly adaptable model base and an excellent starting point for deep domain adaptation.
- internlm2 (**recommended**): Further pretrained on domain-specific corpora on top of internlm2-base. It achieves excellent evaluation results while retaining strong general language ability, and is the base we recommend for most applications.
- internlm2-chat-sft: Supervised human alignment training on top of the base model.
- internlm2-chat (**recommended**): Optimized for conversational interaction on top of internlm2-chat-sft via RLHF, with strong instruction following, empathetic chat, and tool-calling capabilities.

The InternLM2 base models have the following technical features:

- Effective support for ultra-long contexts of up to 200,000 characters: The model achieves near-perfect "needle in a haystack" retrieval over 200,000-character inputs and leads open-source models on long-text tasks such as LongBench and L-Eval.
- Comprehensive performance improvement: All capability dimensions improve over the previous generation, with especially notable gains in reasoning, mathematics, and coding.

## InternLM2-20B

### Performance Evaluation

We evaluated InternLM2 on several important benchmarks using the open-source evaluation tool [OpenCompass](https://github.com/internLM/OpenCompass/). Some of the results are shown in the table below; please visit the [OpenCompass Leaderboard](https://rank.opencompass.org.cn/leaderboard-llm) for more evaluation results.

| Dataset | InternLM2-7B | InternLM2-Chat-7B | InternLM2-20B | InternLM2-Chat-20B | ChatGPT | GPT-4 |
| --- | --- | --- | --- | --- | --- | --- |
| MMLU | 65.8 | 63.7 | 67.7 | 66.5 | 69.1 | 83.0 |
| AGIEval | 49.9 | 47.2 | 53.0 | 50.3 | 39.9 | 55.1 |
| BBH | 65.0 | 61.2 | 72.1 | 68.3 | 70.1 | 86.7 |
| GSM8K | 70.8 | 70.7 | 76.1 | 79.6 | 78.2 | 91.4 |
| MATH | 20.2 | 23.0 | 25.5 | 31.9 | 28.0 | 45.8 |
| HumanEval | 43.3 | 59.8 | 48.8 | 67.1 | 73.2 | 74.4 |
| MBPP(Sanitized) | 51.8 | 51.4 | 63.0 | 65.8 | 78.9 | 79.0 |

- The above results were obtained with [OpenCompass](https://github.com/open-compass/opencompass) (entries marked with `*` are taken from the original papers); see the configuration files provided by [OpenCompass](https://github.com/open-compass/opencompass) for evaluation details.
- The evaluation numbers may differ across [OpenCompass](https://github.com/open-compass/opencompass) versions, so please refer to the latest results from [OpenCompass](https://github.com/open-compass/opencompass).

**Limitations:** Although we paid close attention to model safety during training and tried to encourage the model to generate text that complies with ethical and legal requirements, the model may still produce unexpected outputs due to its size and probabilistic generation paradigm; for example, responses may contain biases, discrimination, or other harmful content. Please do not propagate such content. We are not responsible for any consequences resulting from the dissemination of harmful information.

### Loading with Transformers

Use the following code to load the InternLM2-20B model for text continuation:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("internlm/internlm2-20b", trust_remote_code=True)
# `torch_dtype=torch.float16` loads the model in float16 precision; otherwise Transformers loads it as float32, which may run out of GPU memory.
model = AutoModelForCausalLM.from_pretrained("internlm/internlm2-20b", torch_dtype=torch.float16, trust_remote_code=True).cuda()
model = model.eval()
inputs = tokenizer(["来到美丽的大自然"], return_tensors="pt")
for k, v in inputs.items():
    inputs[k] = v.cuda()
gen_kwargs = {"max_length": 128, "top_p": 0.8, "temperature": 0.8, "do_sample": True, "repetition_penalty": 1.0}
output = model.generate(**inputs, **gen_kwargs)
output = tokenizer.decode(output[0].tolist(), skip_special_tokens=True)
print(output)
# 来到美丽的大自然,我们欣赏着大自然的美丽风景,感受着大自然的气息。
# 今天,我来到了美丽的龙湾公园,这里风景秀丽,山清水秀,鸟语花香。一走进公园,我就被眼前的景象惊呆了:绿油油的草坪上,五颜六色的花朵竞相开放,散发出阵阵清香。微风吹来,花儿随风摆动,好像在向我们点头微笑。远处,巍峨的大山连绵起伏,好像一条巨龙在空中飞舞。山下,一条清澈的小河静静地流淌着,河里的鱼儿自由自在地
```

## Open Source License

The code in this repository is open-sourced under the Apache-2.0 license. The model weights are fully open for academic research, and free commercial-use licenses can be applied for via the [application form](https://wj.qq.com/s2/12725412/f7c1/). For other questions or collaborations, please contact <internlm@pjlab.org.cn>.
pysentimiento/bert-pt-emotion
pysentimiento
"2023-09-08T14:49:36Z"
50,643
1
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2023-09-08T14:48:30Z"
Entry not found
mradermacher/Yi-34B-GGUF
mradermacher
"2024-06-26T17:01:06Z"
50,630
0
transformers
[ "transformers", "gguf", "en", "base_model:01-ai/Yi-34B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-26T14:48:14Z"
---
base_model: 01-ai/Yi-34B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/01-ai/Yi-34B

<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Yi-34B-i1-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-GGUF/resolve/main/Yi-34B.Q2_K.gguf) | Q2_K | 12.9 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-GGUF/resolve/main/Yi-34B.IQ3_XS.gguf) | IQ3_XS | 14.3 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-GGUF/resolve/main/Yi-34B.Q3_K_S.gguf) | Q3_K_S | 15.1 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-GGUF/resolve/main/Yi-34B.IQ3_S.gguf) | IQ3_S | 15.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-GGUF/resolve/main/Yi-34B.IQ3_M.gguf) | IQ3_M | 15.7 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-GGUF/resolve/main/Yi-34B.Q3_K_M.gguf) | Q3_K_M | 16.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-GGUF/resolve/main/Yi-34B.Q3_K_L.gguf) | Q3_K_L | 18.2 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-GGUF/resolve/main/Yi-34B.IQ4_XS.gguf) | IQ4_XS | 18.7 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-GGUF/resolve/main/Yi-34B.Q4_K_S.gguf) | Q4_K_S | 19.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-GGUF/resolve/main/Yi-34B.Q4_K_M.gguf) | Q4_K_M | 20.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-GGUF/resolve/main/Yi-34B.Q5_K_S.gguf) | Q5_K_S | 23.8 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-GGUF/resolve/main/Yi-34B.Q5_K_M.gguf) | Q5_K_M | 24.4 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-GGUF/resolve/main/Yi-34B.Q6_K.gguf) | Q6_K | 28.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Yi-34B-GGUF/resolve/main/Yi-34B.Q8_0.gguf) | Q8_0 | 36.6 | fast, best quality |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
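The card above points to TheBloke's READMEs for general GGUF usage. As one concrete (and unofficial) option, the sketch below loads one of the quantized files with llama-cpp-python; the file choice, context size, and GPU-offload setting are assumptions to adapt to your hardware.

```python
from llama_cpp import Llama

# Minimal sketch, assuming the Q4_K_M file has been downloaded locally,
# e.g. via `huggingface-cli download mradermacher/Yi-34B-GGUF Yi-34B.Q4_K_M.gguf`.
llm = Llama(
    model_path="Yi-34B.Q4_K_M.gguf",  # hypothetical local path
    n_ctx=4096,                       # context window; larger values cost more memory
    n_gpu_layers=-1,                  # offload all layers to GPU if VRAM allows; use 0 for CPU-only
)

out = llm("Q: What is the capital of France?\nA:", max_tokens=32, temperature=0.7)
print(out["choices"][0]["text"])
```

Since Yi-34B is a base (not instruction-tuned) model, plain completion prompts like the one above tend to work better than chat-style prompting.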
deutsche-telekom/gbert-large-paraphrase-cosine
deutsche-telekom
"2023-11-10T20:25:58Z"
50,508
21
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "setfit", "de", "dataset:deutsche-telekom/ger-backtrans-paraphrase", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-embeddings-inference", "region:us" ]
sentence-similarity
"2023-01-13T10:29:27Z"
---
pipeline_tag: sentence-similarity
language:
- de
tags:
- sentence-transformers
- sentence-similarity
- transformers
- setfit
license: mit
datasets:
- deutsche-telekom/ger-backtrans-paraphrase
---

# German BERT large paraphrase cosine

This is a [sentence-transformers](https://www.SBERT.net) model. It maps sentences and paragraphs (text) into a 1024-dimensional dense vector space. The model is intended to be used together with [SetFit](https://github.com/huggingface/setfit) to improve German few-shot text classification. It has a sibling model called [deutsche-telekom/gbert-large-paraphrase-euclidean](https://huggingface.co/deutsche-telekom/gbert-large-paraphrase-euclidean).

This model is based on [deepset/gbert-large](https://huggingface.co/deepset/gbert-large). Many thanks to [deepset](https://www.deepset.ai/)!

**Loss Function**\
We have used [MultipleNegativesRankingLoss](https://www.sbert.net/docs/package_reference/losses.html#multiplenegativesrankingloss) with cosine similarity as the loss function.

**Training Data**\
The model is trained on a carefully filtered dataset of [deutsche-telekom/ger-backtrans-paraphrase](https://huggingface.co/datasets/deutsche-telekom/ger-backtrans-paraphrase). We deleted sentence pairs with:

- `min_char_len` less than 15
- `jaccard_similarity` greater than 0.3
- `de_token_count` greater than 30
- `en_de_token_count` greater than 30
- `cos_sim` less than 0.85

**Hyperparameters**

- learning_rate: 8.345726930229726e-06
- num_epochs: 7
- train_batch_size: 57
- num_gpu: 1

## Evaluation Results

We use the [NLU Few-shot Benchmark - English and German](https://huggingface.co/datasets/deutsche-telekom/NLU-few-shot-benchmark-en-de) dataset to evaluate this model in a German few-shot scenario.

**Qualitative results**

- multilingual sentence embeddings provide the worst results
- Electra models also deliver poor results
- the German BERT base-size model ([deepset/gbert-base](https://huggingface.co/deepset/gbert-base)) provides good results
- the German BERT large-size model ([deepset/gbert-large](https://huggingface.co/deepset/gbert-large)) provides very good results
- our fine-tuned models (this model and [deutsche-telekom/gbert-large-paraphrase-euclidean](https://huggingface.co/deutsche-telekom/gbert-large-paraphrase-euclidean)) provide the best results

## Licensing

Copyright (c) 2023 [Philip May](https://may.la/), [Deutsche Telekom AG](https://www.telekom.com/)\
Copyright (c) 2022 [deepset GmbH](https://www.deepset.ai/)

Licensed under the **MIT License** (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License by reviewing the file [LICENSE](https://huggingface.co/deutsche-telekom/gbert-large-paraphrase-cosine/blob/main/LICENSE) in the repository.
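The card describes what the model does but not how to call it; below is a minimal usage sketch with the standard sentence-transformers API. The example sentences are invented for illustration, and the cosine-similarity step mirrors the cosine objective the model was trained with.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("deutsche-telekom/gbert-large-paraphrase-cosine")

sentences = [
    "Das Wetter ist heute schön.",              # "The weather is nice today."
    "Heute haben wir schönes Wetter.",          # paraphrase of the first sentence
    "Ich habe mir ein neues Fahrrad gekauft.",  # unrelated sentence
]
embeddings = model.encode(sentences, convert_to_tensor=True)  # shape: (3, 1024)

# Cosine similarity between the first sentence and the other two;
# the paraphrase should score clearly higher than the unrelated sentence.
scores = util.cos_sim(embeddings[0], embeddings[1:])
print(scores)
```

For few-shot classification, the same checkpoint name can be passed to SetFit as the underlying sentence-transformer body.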
kwoncho/losscut_news_pre2019_2
kwoncho
"2024-06-05T04:31:44Z"
50,467
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-06-05T04:30:46Z"
Entry not found
Lin-Chen/ShareGPT4V-7B_Pretrained_vit-large336-l12
Lin-Chen
"2024-03-27T14:40:37Z"
50,361
17
transformers
[ "transformers", "pytorch", "image-feature-extraction", "arxiv:2311.12793", "region:us" ]
image-feature-extraction
"2023-11-21T13:06:12Z"
---
inference: false
pipeline_tag: image-feature-extraction
---

<br>
<br>

# ShareGPT4V Model Card

## Model details

**Model type:**
This is the vision tower of ShareGPT4V-7B fine-tuned with our [ShareGPT4V dataset](https://huggingface.co/datasets/Lin-Chen/ShareGPT4V).

**Model date:**
This vision tower was trained in Nov 2023.

**Paper or resources for more information:**
[[Project](https://ShareGPT4V.github.io/)] [[Paper](https://huggingface.co/papers/2311.12793)] [[Code](https://github.com/InternLM/InternLM-XComposer/tree/main/projects/ShareGPT4V)]

## License

Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved.

## Intended use

**Primary intended uses:**
The primary use of this vision tower is research on large multimodal models and chatbots.

**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.

## Training dataset

- 1.2M high-quality image-text pairs
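As a rough illustration of how such a vision tower is typically used for image feature extraction, here is a sketch that assumes the checkpoint loads as a standard CLIP vision encoder (the convention in LLaVA-style code). The processor repository, local image path, and choice of hidden layer are all assumptions, not instructions from the ShareGPT4V authors.

```python
import torch
from PIL import Image
from transformers import CLIPVisionModel, CLIPImageProcessor

# Assumption: the checkpoint is compatible with Transformers' CLIPVisionModel.
vision_tower = CLIPVisionModel.from_pretrained(
    "Lin-Chen/ShareGPT4V-7B_Pretrained_vit-large336-l12", torch_dtype=torch.float16
).eval()

# Assumed preprocessing: the standard 336px CLIP image processor.
processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-large-patch14-336")

image = Image.open("example.jpg")  # hypothetical local image
pixel_values = processor(images=image, return_tensors="pt").pixel_values.half()

with torch.no_grad():
    outputs = vision_tower(pixel_values, output_hidden_states=True)

# LLaVA-style models usually take patch features from a late hidden layer (often -2)
# and drop the CLS token before feeding them to the projector.
patch_features = outputs.hidden_states[-2][:, 1:]
print(patch_features.shape)  # e.g. (1, 576, 1024) for a 336x336 input
```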