Dataset schema (one record per model repo; card text is flattened onto a single line):

modelId        string         (length 4–122)
author         string         (length 2–42)
last_modified  unknown
downloads      int64          (0–387M)
likes          int64          (0–6.55k)
library_name   stringclasses  (368 values)
tags           sequence       (length 1–4.05k)
pipeline_tag   stringclasses  (51 values)
createdAt      unknown
card           string         (length 1–1M)
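The schema above maps directly onto the Hugging Face `datasets` API. A minimal sketch of iterating over records like the ones below — the dataset path `path/to/model-cards` is a placeholder, since this dump does not name its source repo:

```python
from datasets import load_dataset

# Placeholder repo id: the source dataset of this dump is not named here.
ds = load_dataset("path/to/model-cards", split="train")

for row in ds:
    # Each record mirrors the schema above: modelId, author, downloads, ...
    print(row["modelId"], row["downloads"], row["pipeline_tag"])
```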
wildbro/332221
wildbro
"2024-06-29T14:43:42Z"
0
0
null
[ "region:us" ]
null
"2024-06-29T14:43:42Z"
Entry not found
EuphoriaReccords/Jhope_TITAN40K
EuphoriaReccords
"2024-06-29T19:29:39Z"
0
0
null
[ "license:mit", "region:us" ]
null
"2024-06-29T14:44:51Z"
--- license: mit ---
abhayesian/LLama3_HarmBench_LAT_7
abhayesian
"2024-06-29T15:21:36Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-29T14:46:42Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
monbear/qwen1half-7b-chat-lora
monbear
"2024-06-29T14:50:38Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-06-29T14:50:38Z"
--- license: apache-2.0 ---
Overflow64/Overflow
Overflow64
"2024-06-29T14:57:57Z"
0
0
null
[ "region:us" ]
null
"2024-06-29T14:57:57Z"
Entry not found
netmouse/Llama-3-Taiwan-8B-Instruct-finetuning-by-promisedchat
netmouse
"2024-06-30T14:22:00Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:yentinglin/Llama-3-Taiwan-8B-Instruct", "base_model:finetune:yentinglin/Llama-3-Taiwan-8B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-29T14:59:51Z"
--- base_model: yentinglin/Llama-3-Taiwan-8B-Instruct language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl --- # Uploaded model - **Developed by:** netmouse - **License:** apache-2.0 - **Finetuned from model :** yentinglin/Llama-3-Taiwan-8B-Instruct This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
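For readers who want to try this checkpoint, a minimal inference sketch using Unsloth's `FastLanguageModel`, as the card names the Unsloth stack — the `max_seq_length`, 4-bit loading, and prompt are assumptions, not values stated in the card:

```python
from unsloth import FastLanguageModel

# Assumed settings; the card does not state sequence length or quantization.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="netmouse/Llama-3-Taiwan-8B-Instruct-finetuning-by-promisedchat",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path

inputs = tokenizer("Hello, please introduce yourself.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```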
PrunaAI/victunes-TherapyBeagle-11B-v1-QUANTO-int2bit-smashed
PrunaAI
"2024-07-19T09:24:51Z"
0
0
transformers
[ "transformers", "pruna-ai", "base_model:victunes/TherapyBeagle-11B-v1", "base_model:finetune:victunes/TherapyBeagle-11B-v1", "endpoints_compatible", "region:us" ]
null
"2024-06-29T15:03:47Z"
--- thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" base_model: victunes/TherapyBeagle-11B-v1 metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption tags: - pruna-ai --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/rskEr4BZJx) # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) - Join the Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed with quanto. - ***How does the model quality change?*** The quality of the model output might vary compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the model directly in your use-case conditions to see whether the smashed model benefits you. - ***What is the model format?*** We use safetensors. - ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement once all of them have finished. 
"Async" metrics are obtained without syncing all GPU processes, stopping as soon as the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check that the requirements of the original repo victunes/TherapyBeagle-11B-v1 are installed. In particular, check the Python, CUDA, and transformers versions. 1. Make sure that you have installed the quantization-related packages. ```bash pip install quanto ``` 2. Load & run the model. ```python from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("PrunaAI/victunes-TherapyBeagle-11B-v1-QUANTO-int2bit-smashed", trust_remote_code=True, device_map='auto') tokenizer = AutoTokenizer.from_pretrained("victunes/TherapyBeagle-11B-v1") input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"] outputs = model.generate(input_ids, max_new_tokens=216) tokenizer.decode(outputs[0]) ``` ## Configurations The configuration info is in `smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model victunes/TherapyBeagle-11B-v1, which provides the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
PrunaAI/victunes-TherapyBeagle-11B-v1-QUANTO-int4bit-smashed
PrunaAI
"2024-07-19T09:23:39Z"
0
0
transformers
[ "transformers", "pruna-ai", "base_model:victunes/TherapyBeagle-11B-v1", "base_model:finetune:victunes/TherapyBeagle-11B-v1", "endpoints_compatible", "region:us" ]
null
"2024-06-29T15:04:07Z"
--- thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" base_model: victunes/TherapyBeagle-11B-v1 metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption tags: - pruna-ai --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/rskEr4BZJx) # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) - Join the Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed with quanto. - ***How does the model quality change?*** The quality of the model output might vary compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the model directly in your use-case conditions to see whether the smashed model benefits you. - ***What is the model format?*** We use safetensors. - ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement once all of them have finished. 
"Async" metrics are obtained without syncing all GPU processes, stopping as soon as the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check that the requirements of the original repo victunes/TherapyBeagle-11B-v1 are installed. In particular, check the Python, CUDA, and transformers versions. 1. Make sure that you have installed the quantization-related packages. ```bash pip install quanto ``` 2. Load & run the model. ```python from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("PrunaAI/victunes-TherapyBeagle-11B-v1-QUANTO-int4bit-smashed", trust_remote_code=True, device_map='auto') tokenizer = AutoTokenizer.from_pretrained("victunes/TherapyBeagle-11B-v1") input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"] outputs = model.generate(input_ids, max_new_tokens=216) tokenizer.decode(outputs[0]) ``` ## Configurations The configuration info is in `smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model victunes/TherapyBeagle-11B-v1, which provides the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
Yjhhh/optimizedtf5yfygd-model
Yjhhh
"2024-06-29T15:05:08Z"
0
0
null
[ "region:us" ]
null
"2024-06-29T15:05:08Z"
Entry not found
habulaj/5336741157
habulaj
"2024-06-29T15:07:05Z"
0
0
null
[ "region:us" ]
null
"2024-06-29T15:07:03Z"
Entry not found
geraldabrhm/llama-3-8b-seqclass-antonym-lr2_5-batch16-lora32
geraldabrhm
"2024-06-29T16:58:13Z"
0
0
null
[ "safetensors", "region:us" ]
null
"2024-06-29T15:10:21Z"
Entry not found
genaitiwari/Mymodel
genaitiwari
"2024-06-29T15:12:54Z"
0
0
null
[ "region:us" ]
null
"2024-06-29T15:12:54Z"
Entry not found
Yjhhh/optimized-model
Yjhhh
"2024-06-29T15:13:22Z"
0
0
null
[ "region:us" ]
null
"2024-06-29T15:13:17Z"
Entry not found
kfmix/pub-kfmodel
kfmix
"2024-06-29T15:17:06Z"
0
0
null
[ "region:us" ]
null
"2024-06-29T15:15:31Z"
Entry not found
habulaj/8901365593
habulaj
"2024-06-29T15:18:58Z"
0
0
null
[ "region:us" ]
null
"2024-06-29T15:18:55Z"
Entry not found
habulaj/1117723644
habulaj
"2024-06-29T15:23:24Z"
0
0
null
[ "region:us" ]
null
"2024-06-29T15:23:22Z"
Entry not found
NobodySpecial/Qwen2-72B-base-exl2
NobodySpecial
"2024-06-30T01:50:00Z"
0
0
null
[ "pretrained", "text-generation", "en", "license:other", "region:us" ]
text-generation
"2024-06-29T15:23:52Z"
--- license: other license_name: tongyi-qianwen license_link: https://huggingface.co/Qwen/Qwen2-72B/blob/main/LICENSE language: - en pipeline_tag: text-generation tags: - pretrained --- # Qwen2-72B ## Branch Names bpw = Bits per Weight hb = Bits for the lm_head layer ## Quantization Details Quantized via ExLlamaV2 version 0.1.6 **All versions in this repo were quantized with the setting Rope Scale=4** ## Original Model Card ### Introduction Qwen2 is the new series of Qwen large language models. For Qwen2, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters, including a Mixture-of-Experts model. This repo contains the 72B Qwen2 base language model. Compared with state-of-the-art open-source language models, including the previously released Qwen1.5, Qwen2 has generally surpassed most open-source models and demonstrated competitiveness against proprietary models across a series of benchmarks targeting language understanding, language generation, multilingual capability, coding, mathematics, reasoning, etc. For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2/), [GitHub](https://github.com/QwenLM/Qwen2), and [Documentation](https://qwen.readthedocs.io/en/latest/). <br> ### Model Details Qwen2 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and code. ## Requirements The code for Qwen2 is in the latest Hugging Face transformers, and we advise you to install `transformers>=4.37.0`; otherwise you might encounter the following error: ``` KeyError: 'qwen2' ``` ### Usage We do not advise you to use base language models for text generation. Instead, you can apply post-training, e.g., SFT, RLHF, continued pretraining, etc., on this model. ### Performance The evaluation of base models mainly focuses on the model performance of natural language understanding, general question answering, coding, mathematics, scientific knowledge, reasoning, multilingual capability, etc. 
The datasets for evaluation include: **English Tasks**: MMLU (5-shot), MMLU-Pro (5-shot), GPQA (5-shot), Theorem QA (5-shot), BBH (3-shot), HellaSwag (10-shot), Winogrande (5-shot), TruthfulQA (0-shot), ARC-C (25-shot) **Coding Tasks**: EvalPlus (0-shot) (HumanEval, MBPP, HumanEval+, MBPP+), MultiPL-E (0-shot) (Python, C++, JAVA, PHP, TypeScript, C#, Bash, JavaScript) **Math Tasks**: GSM8K (4-shot), MATH (4-shot) **Chinese Tasks**: C-Eval (5-shot), CMMLU (5-shot) **Multilingual Tasks**: Multi-Exam (M3Exam 5-shot, IndoMMLU 3-shot, ruMMLU 5-shot, mMMLU 5-shot), Multi-Understanding (BELEBELE 5-shot, XCOPA 5-shot, XWinograd 5-shot, XStoryCloze 0-shot, PAWS-X 5-shot), Multi-Mathematics (MGSM 8-shot), Multi-Translation (Flores-101 5-shot) #### Qwen2-72B performance | Datasets | DeepSeek-V2 | Mixtral-8x22B | Llama-3-70B | Qwen1.5-72B | Qwen1.5-110B | **Qwen2-72B** | | :--------| :---------: | :------------: | :------------: | :------------: | :------------: |:------------: | |Architecture | MoE | MoE | Dense | Dense | Dense | Dense | |#Activated Params | 21B | 39B | 70B | 72B | 110B | 72B | |#Params | 236B | 140B | 70B | 72B | 110B | 72B| | ***English*** | | | | | | | |MMLU |78.5 | 77.8 | 79.5 | 77.5 | 80.4 | **84.2** | |MMLU-Pro | - | 49.5 | 52.8 | 45.8 | 49.4 | **55.6** | |GPQA | -| 34.3 | 36.3 | 36.3 | 35.9 | **37.9** | |Theorem QA | -| 35.9 | 32.3 | 29.3 | 34.9 | **43.1** | |BBH | 78.9 |78.9 | 81.0 | 65.5 | 74.8 | **82.4** | |HellaSwag | 87.8 | **88.7** | 88.0 | 86.0 | 87.5 | 87.6 | |Winogrande | 84.8|85.0 | **85.3** | 83.0 | 83.5 | 85.1 | |ARC-C | 70.0| **70.7** | 68.8 | 65.9 | 69.6 | 68.9 | |TruthfulQA | 42.2 | 51.0 | 45.6 | **59.6** | 49.6 | 54.8 | | ***Coding*** | | | | | | | |HumanEval | 45.7 | 46.3 | 48.2 | 46.3 | 54.3 | **64.6** | |MBPP |73.9 | 71.7 | 70.4 | 66.9 | 70.9 | **76.9** | |EvalPlus | 55.0 | 54.1 | 54.8 | 52.9 | 57.7 | **65.4** | |MultiPL-E |44.4 | 46.7 | 46.3 | 41.8 | 52.7 | **59.6** | | ***Mathematics*** | | | | | | | |GSM8K | 79.2 | 83.7 | 83.0 | 79.5 | 85.4 | **89.5** | |MATH | 43.6 | 41.7 | 42.5 | 34.1 | 49.6 | **51.1** | | ***Chinese*** | | | | | | | |C-Eval | 81.7 | 54.6 | 65.2 | 84.1 | 89.1 | **91.0** | |CMMLU | 84.0 | 53.4 | 67.2 | 83.5 | 88.3 | **90.1** | | ***Multilingual*** | | | | | | | |Multi-Exam | 67.5 | 63.5 | 70.0 | 66.4 | 75.6 | **76.6** | |Multi-Understanding | 77.0 | 77.7 | 79.9 | 78.2 | 78.2 | **80.7** | |Multi-Mathematics | 58.8 | 62.9 | 67.1 | 61.7 | 64.4 | **76.0** | |Multi-Translation | 36.0 | 23.3 | **38.0** | 35.6 | 36.2 | 37.8 | ## Citation If you find our work helpful, feel free to cite us. ``` @article{qwen2, title={Qwen2 Technical Report}, year={2024} } ```
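The requirements note above (`transformers>=4.37.0`, else `KeyError: 'qwen2'`) translates into a short check before loading. A sketch against the original `Qwen/Qwen2-72B` weights — the exl2 branches in this repo load through ExLlamaV2 instead, so this applies only to the unquantized base:

```python
import transformers
from packaging import version

# Qwen2 support landed in transformers 4.37.0; older versions raise KeyError: 'qwen2'.
assert version.parse(transformers.__version__) >= version.parse("4.37.0")

from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-72B")
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2-72B", torch_dtype="auto", device_map="auto"
)
```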
ningj2413/8b_llama3_what_4bitq_r16
ningj2413
"2024-06-29T15:26:57Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-29T15:25:12Z"
--- base_model: unsloth/llama-3-8b-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl --- # Uploaded model - **Developed by:** ningj2413 - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit - r=16;lora-alpha=16 This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
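The `r=16; lora-alpha=16` note above refers to the LoRA adapter hyperparameters. A sketch of how those values would be set with Unsloth's `get_peft_model` — the target modules, dropout, and sequence length are common defaults assumed here, not values stated in the card:

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",  # base model named in the card
    max_seq_length=2048,  # assumed
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,           # LoRA rank, per the card
    lora_alpha=16,  # per the card
    lora_dropout=0, # assumed default
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],  # common choice, assumed
    bias="none",
)
```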
sal55/sss
sal55
"2024-06-29T21:52:02Z"
0
0
null
[ "region:us" ]
null
"2024-06-29T15:27:06Z"
Entry not found
bartowski/Fook-Yi-34B-32K-v1-exl2
bartowski
"2024-06-29T15:28:37Z"
0
0
null
[ "not-for-all-audiences", "text-generation", "license:cc-by-nc-4.0", "region:us" ]
text-generation
"2024-06-29T15:28:36Z"
--- license: cc-by-nc-4.0 tags: - not-for-all-audiences quantized_by: bartowski pipeline_tag: text-generation --- ## Exllama v2 Quantizations of Fook-Yi-34B-32K-v1 Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.1.6">turboderp's ExLlamaV2 v0.1.6</a> for quantization. <b>The "main" branch only contains the measurement.json; download one of the other branches for the model (see below)</b> Each branch contains an individual bits-per-weight quantization, with the main branch containing only the measurement.json for further conversions. Original model: https://huggingface.co/TheDrummer/Fook-Yi-34B-32K-v1 ## Prompt format ``` <|im_start|> system {system_prompt}<|im_end|> <|im_start|> user {prompt}<|im_end|> <|im_start|> assistant ``` ## Available sizes | Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | VRAM (32k) | Description | | ------ | ---- | ------------ | ---- | ---- | ---- | ----------- | | [8_0](https://huggingface.co/bartowski/Fook-Yi-34B-32K-v1-exl2/tree/8_0) | 8.0 | 8.0 | 34.9 GB | 37.6 GB | 41.6 GB | Max quality producible by ExLlamaV2; generally unneeded, but maximum performance | | [6_5](https://huggingface.co/bartowski/Fook-Yi-34B-32K-v1-exl2/tree/6_5) | 6.5 | 8.0 | 28.9 GB | 31.6 GB | 35.6 GB | Near-unquantized performance at vastly reduced size, **recommended**. | | [5_0](https://huggingface.co/bartowski/Fook-Yi-34B-32K-v1-exl2/tree/5_0) | 5.0 | 8.0 | 22.6 GB | 25.3 GB | 29.3 GB | Very high quality, usable at 4k context on 24GB. | | [4_25](https://huggingface.co/bartowski/Fook-Yi-34B-32K-v1-exl2/tree/4_25) | 4.25 | 6.0 | 19.5 GB | 22.2 GB | 26.2 GB | GPTQ-equivalent bits per weight, slightly higher quality. | | [3_5](https://huggingface.co/bartowski/Fook-Yi-34B-32K-v1-exl2/tree/3_5) | 3.5 | 6.0 | 16.5 GB | 19.2 GB | 23.2 GB | Lower quality, only use if you have to. | | [3_0](https://huggingface.co/bartowski/Fook-Yi-34B-32K-v1-exl2/tree/3_0) | 3.0 | 6.0 | 14.3 GB | 17.0 GB | 21.0 GB | Very low quality, usable with 16GB of VRAM. | ## Download instructions With git: ```shell git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/Fook-Yi-34B-32K-v1-exl2 Fook-Yi-34B-32K-v1-exl2-6_5 ``` With huggingface hub (credit to TheBloke for instructions): ```shell pip3 install huggingface-hub ``` To download a specific branch, use the `--revision` parameter. For example, to download the 6.5 bpw branch: Linux: ```shell huggingface-cli download bartowski/Fook-Yi-34B-32K-v1-exl2 --revision 6_5 --local-dir Fook-Yi-34B-32K-v1-exl2-6_5 ``` Windows (which apparently doesn't like _ in folders sometimes?): ```shell huggingface-cli download bartowski/Fook-Yi-34B-32K-v1-exl2 --revision 6_5 --local-dir Fook-Yi-34B-32K-v1-exl2-6.5 ``` Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
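Once a branch is downloaded, the quant can be loaded with the ExLlamaV2 Python API. A minimal sketch against the v0.1.x API the card references — the local directory name and generation settings are assumptions, and the dynamic generator's paged mode expects flash-attn (pass `paged=False` without it):

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

# Directory produced by the 6.5 bpw download step above.
config = ExLlamaV2Config("Fook-Yi-34B-32K-v1-exl2-6_5")
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # allocate the cache as the model loads
model.load_autosplit(cache)               # split layers across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)

# Prompt assembled per the card's prompt format.
prompt = ("<|im_start|> system\nYou are a helpful assistant.<|im_end|>\n"
          "<|im_start|> user\nHello!<|im_end|>\n<|im_start|> assistant\n")
print(generator.generate(prompt=prompt, max_new_tokens=128))
```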
RobertML/sn21-2hydrogen-oxygen
RobertML
"2024-06-29T15:34:54Z"
0
0
null
[ "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
"2024-06-29T15:28:54Z"
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
Hannibal046/gtr_t5_nq_32_stage2
Hannibal046
"2024-06-29T15:36:12Z"
0
1
transformers
[ "transformers", "safetensors", "t5", "license:mit", "text-generation-inference", "endpoints_compatible", "region:us" ]
null
"2024-06-29T15:31:37Z"
--- license: mit ---
habulaj/4916239000
habulaj
"2024-06-29T15:32:44Z"
0
0
null
[ "region:us" ]
null
"2024-06-29T15:32:41Z"
Entry not found
habulaj/314350281005
habulaj
"2024-06-29T15:35:01Z"
0
0
null
[ "region:us" ]
null
"2024-06-29T15:34:59Z"
Entry not found
aleatorydialogue/ad_backgrounds
aleatorydialogue
"2024-07-29T15:11:28Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-06-29T15:40:39Z"
--- license: apache-2.0 ---
Horizon6957/Medical-chatbot
Horizon6957
"2024-06-29T15:41:27Z"
0
0
null
[ "license:mit", "region:us" ]
null
"2024-06-29T15:41:27Z"
--- license: mit ---
notBeastKing/Llama3_sentiment_analysis
notBeastKing
"2024-06-29T17:09:01Z"
0
0
peft
[ "peft", "safetensors", "llama", "region:us" ]
null
"2024-06-29T15:42:31Z"
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.4.0
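The quantization settings listed above map directly onto `transformers`' `BitsAndBytesConfig`. A sketch of reproducing them when loading the adapter — the base model ID is a placeholder, since the card does not name the llama base model the adapter targets:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # load_in_4bit: True
    bnb_4bit_quant_type="nf4",             # bnb_4bit_quant_type: nf4
    bnb_4bit_use_double_quant=False,       # bnb_4bit_use_double_quant: False
    bnb_4bit_compute_dtype=torch.float16,  # bnb_4bit_compute_dtype: float16
)

# Placeholder: the card does not state which base model the adapter was trained on.
base = AutoModelForCausalLM.from_pretrained(
    "BASE_MODEL_ID", quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, "notBeastKing/Llama3_sentiment_analysis")
```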
jtatman/pythia-delphi-suboptimal-roleplay
jtatman
"2024-06-29T15:44:27Z"
0
0
null
[ "region:us" ]
null
"2024-06-29T15:44:27Z"
Entry not found
habulaj/52006291
habulaj
"2024-06-29T15:45:13Z"
0
0
null
[ "region:us" ]
null
"2024-06-29T15:45:08Z"
Entry not found
PlasmicZ/gemma-2b-5e-r256-new
PlasmicZ
"2024-06-29T15:48:38Z"
0
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-06-29T15:46:31Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
habulaj/498242887
habulaj
"2024-06-29T15:47:14Z"
0
0
null
[ "region:us" ]
null
"2024-06-29T15:47:09Z"
Entry not found
ningj2413/8b_llama3_what_4bitq_r32
ningj2413
"2024-06-29T15:47:47Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-29T15:47:39Z"
--- base_model: unsloth/llama-3-8b-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl --- # Uploaded model - **Developed by:** ningj2413 - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
kishorea/p_l3
kishorea
"2024-06-29T15:53:20Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-29T15:53:14Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ningj2413/8b_llama3_what_4bitq_instruct_r32
ningj2413
"2024-06-29T15:57:01Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-29T15:56:54Z"
--- base_model: unsloth/llama-3-8b-Instruct-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl --- # Uploaded model - **Developed by:** ningj2413 - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
AbdullahMashhadi/Elon-Musk-talking-model
AbdullahMashhadi
"2024-06-29T16:03:01Z"
0
0
null
[ "region:us" ]
null
"2024-06-29T16:02:58Z"
Entry not found
habulaj/9419281124
habulaj
"2024-06-29T16:03:18Z"
0
0
null
[ "region:us" ]
null
"2024-06-29T16:03:16Z"
Entry not found
hasanbahadir/ihale_dataset
hasanbahadir
"2024-06-30T13:12:47Z"
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-29T16:04:56Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
helling100/my_awesome_model
helling100
"2024-06-29T16:10:00Z"
0
0
null
[ "region:us" ]
null
"2024-06-29T16:10:00Z"
Entry not found
jdmccaffrey/my_awesome_billsum_model
jdmccaffrey
"2024-06-29T16:10:04Z"
0
0
null
[ "region:us" ]
null
"2024-06-29T16:10:04Z"
Entry not found
Sharan1712/llama2_7B_unnaturalcore_qia3_4bit_8a
Sharan1712
"2024-06-29T16:10:27Z"
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-29T16:10:26Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hosdf/bgtjk
hosdf
"2024-06-29T16:46:44Z"
0
1
null
[ "license:openrail", "region:us" ]
null
"2024-06-29T16:15:49Z"
--- license: openrail ---
habulaj/2483324556
habulaj
"2024-06-29T16:16:01Z"
0
0
null
[ "region:us" ]
null
"2024-06-29T16:15:56Z"
Entry not found
dash8060/EJ
dash8060
"2024-06-29T16:26:27Z"
0
0
null
[ "region:us" ]
null
"2024-06-29T16:19:51Z"
Entry not found
PrunaAI/nota-ai-cpt_st-vicuna-v1.3-3.7b-ppl-QUANTO-int4bit-smashed
PrunaAI
"2024-07-19T09:28:53Z"
0
0
transformers
[ "transformers", "pruna-ai", "base_model:nota-ai/cpt_st-vicuna-v1.3-3.7b-ppl", "base_model:finetune:nota-ai/cpt_st-vicuna-v1.3-3.7b-ppl", "endpoints_compatible", "region:us" ]
null
"2024-06-29T16:23:11Z"
--- thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" base_model: nota-ai/cpt_st-vicuna-v1.3-3.7b-ppl metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption tags: - pruna-ai --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/rskEr4BZJx) # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) - Join Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed with quanto. - ***How does the model quality change?*** The quality of the model output might vary compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you. - ***What is the model format?*** We use safetensors. - ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. 
"Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check requirements from the original repo nota-ai/cpt_st-vicuna-v1.3-3.7b-ppl installed. In particular, check python, cuda, and transformers versions. 1. Make sure that you have installed quantization related packages. ```bash pip install quanto ``` 2. Load & run the model. ```python from transformers import AutoModelForCausalLM, AutoTokenizer IMPORTS model = AutoModelForCausalLM.from_pretrained("PrunaAI/nota-ai-cpt_st-vicuna-v1.3-3.7b-ppl-QUANTO-int4bit-smashed", trust_remote_code=True, device_map='auto') tokenizer = AutoTokenizer.from_pretrained("nota-ai/cpt_st-vicuna-v1.3-3.7b-ppl") input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"] outputs = model.generate(input_ids, max_new_tokens=216) tokenizer.decode(outputs[0]) ``` ## Configurations The configuration info are in `smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model nota-ai/cpt_st-vicuna-v1.3-3.7b-ppl before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
dash8060/dks
dash8060
"2024-06-29T16:41:26Z"
0
0
null
[ "region:us" ]
null
"2024-06-29T16:40:51Z"
Entry not found
AiHubber/HornhubSound
AiHubber
"2024-06-29T16:43:25Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2024-06-29T16:42:15Z"
--- license: openrail ---
SDH2222/gas
SDH2222
"2024-06-29T16:43:32Z"
0
0
null
[ "region:us" ]
null
"2024-06-29T16:42:45Z"
Entry not found
Neha13/AIcontentdetector
Neha13
"2024-06-29T16:43:45Z"
0
0
null
[ "region:us" ]
null
"2024-06-29T16:43:45Z"
Entry not found
D3MI4N/q-FrozenLake-v1-4x4-noSlippery
D3MI4N
"2024-06-29T16:44:47Z"
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
"2024-06-29T16:44:44Z"
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="D3MI4N/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
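Note that `load_from_hub` in the usage snippet above is not a library import. Here is a minimal sketch of such a helper, assuming the pickled dict layout used by the Hugging Face Deep RL Course (`qtable` and `env_id` keys — an assumption to verify against the actual file):

```python
import pickle

import gymnasium as gym
import numpy as np
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    # Download the pickled Q-table bundle from the Hub and unpickle it.
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)

model = load_from_hub(repo_id="D3MI4N/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
env = gym.make(model["env_id"], is_slippery=False)  # is_slippery=False matches this card

# Greedy rollout with the learned Q-table.
state, _ = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
```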
D3MI4N/taxi-v3-dem
D3MI4N
"2024-06-29T16:47:24Z"
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
"2024-06-29T16:47:23Z"
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: taxi-v3-dem results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.54 +/- 2.73 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="D3MI4N/taxi-v3-dem", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
rashid996958/pix2pix_exp39
rashid996958
"2024-06-29T16:48:20Z"
0
0
null
[ "region:us" ]
null
"2024-06-29T16:48:14Z"
Entry not found
Neha13/aidetectorintext
Neha13
"2024-06-29T16:48:15Z"
0
0
null
[ "region:us" ]
null
"2024-06-29T16:48:15Z"
Entry not found
ningj2413/8b_llama3_what_4bitq_instruct_r64
ningj2413
"2024-06-29T16:51:17Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-29T16:51:03Z"
--- base_model: unsloth/llama-3-8b-Instruct-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl --- # Uploaded model - **Developed by:** ningj2413 - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
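The card stops at the Unsloth banner; as a hedged example, one way to load this checkpoint for inference with Unsloth (the `max_seq_length` value and prompt are assumptions, not part of the original card):

```python
from unsloth import FastLanguageModel

# Load the uploaded 4-bit checkpoint; the repo id comes from this card.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="ningj2413/8b_llama3_what_4bitq_instruct_r64",
    max_seq_length=2048,  # assumption: pick what your use-case needs
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference path

inputs = tokenizer("What does this model do?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```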
prometheus04/Llama2_Finetuned
prometheus04
"2024-06-29T16:52:57Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-29T16:52:53Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ningj2413/8b_llama3_what_4bitq_instruct_r128
ningj2413
"2024-06-29T16:55:28Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-29T16:54:54Z"
--- base_model: unsloth/llama-3-8b-Instruct-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl --- # Uploaded model - **Developed by:** ningj2413 - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Hexapoly/Norm
Hexapoly
"2024-06-29T16:57:12Z"
0
0
null
[ "region:us" ]
null
"2024-06-29T16:55:24Z"
Entry not found
mjfan1999/BlakeShelton2011
mjfan1999
"2024-06-29T17:12:20Z"
0
0
null
[ "license:unknown", "region:us" ]
null
"2024-06-29T16:55:27Z"
--- license: unknown ---
habulaj/163166141251
habulaj
"2024-06-29T17:01:57Z"
0
0
null
[ "region:us" ]
null
"2024-06-29T17:01:53Z"
Entry not found
rashid996958/pix2pix_exp41
rashid996958
"2024-06-29T17:03:20Z"
0
0
null
[ "region:us" ]
null
"2024-06-29T17:03:15Z"
Entry not found
SDH2222/ppp
SDH2222
"2024-07-14T20:52:21Z"
0
0
null
[ "region:us" ]
null
"2024-06-29T17:03:34Z"
Entry not found
chaanks/DNSMOS
chaanks
"2024-06-29T17:12:09Z"
0
0
null
[ "onnx", "region:us" ]
null
"2024-06-29T17:11:30Z"
Entry not found
Neha13/x
Neha13
"2024-06-29T17:13:04Z"
0
0
transformers
[ "transformers", "endpoints_compatible", "region:us" ]
null
"2024-06-29T17:12:52Z"
Entry not found
imgeorgiev/pwm
imgeorgiev
"2024-07-02T20:36:12Z"
0
2
null
[ "license:mit", "region:us" ]
null
"2024-06-29T17:13:21Z"
--- license: mit --- # PWM: Policy Learning with Large World Models [Ignat Georgiev](https://www.imgeorgiev.com/), [Varun Giridhar](https://www.linkedin.com/in/varun-giridhar-463947146/), [Nicklas Hansen](https://www.nicklashansen.com/), [Animesh Garg](https://animesh.garg.tech/) [Project website](http://imgeorgiev.com/pwm) [Paper](TODO) [Models & Datasets](https://huggingface.co/imgeorgiev/pwm) ## Overview ![](https://github.com/imgeorgiev/pwm/figures/teaser.png) Instead of building world models into algorithms, we propose using large-scale multi-task world models as differentiable simulators for policy learning. When well-regularized, these models enable efficient policy learning with first-order gradient optimization. This allows PWM to learn to solve 80 tasks in < 10 minutes each without the need for expensive online planning. ## Structure of repository ``` pwm ├── dflex │ ├── data - data used for dflex world model pre-training │ └── pretrained - already trained world models that can be used in dflex experiments ├── multitask - pre-trained world models for multitask evaluation ├── pedagogical - pre-trained world models for recreating pedagogical examples └── README.md ```
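To make "world models as differentiable simulators" concrete, here is a toy sketch of first-order policy learning through a learned model (illustrative PyTorch only, not the PWM codebase; every module, dimension, and name is an assumption):

```python
import torch
import torch.nn as nn

state_dim, action_dim, horizon, batch = 8, 2, 16, 32

# Stand-in for a pretrained world model f(s, a) -> (s', r).
world_model = nn.Sequential(
    nn.Linear(state_dim + action_dim, 64), nn.ELU(),
    nn.Linear(64, state_dim + 1),
)
policy = nn.Sequential(
    nn.Linear(state_dim, 64), nn.ELU(),
    nn.Linear(64, action_dim), nn.Tanh(),
)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Imagine a short rollout entirely inside the world model.
s = torch.zeros(batch, state_dim)
total_reward = torch.zeros(())
for _ in range(horizon):
    a = policy(s)
    out = world_model(torch.cat([s, a], dim=-1))
    s, r = out[:, :-1], out[:, -1]
    total_reward = total_reward + r.mean()

# First-order gradients flow through the world model into the policy:
# no online planning, just backprop through the imagined trajectory.
loss = -total_reward
opt.zero_grad()
loss.backward()
opt.step()
```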
iamalexcaspian/GumballWatterson-TAWOG-LoganGrove
iamalexcaspian
"2024-06-29T18:44:23Z"
0
0
null
[ "region:us" ]
null
"2024-06-29T17:15:23Z"
Entry not found
youssouf128/youssouf
youssouf128
"2024-06-29T17:23:04Z"
0
0
null
[ "region:us" ]
null
"2024-06-29T17:15:59Z"
Entry not found
mjfan1999/BlakeShelton2016
mjfan1999
"2024-06-29T17:33:15Z"
0
0
null
[ "license:unknown", "region:us" ]
null
"2024-06-29T17:22:41Z"
--- license: unknown ---
random2344/Graphic
random2344
"2024-06-29T17:24:11Z"
0
0
null
[ "region:us" ]
null
"2024-06-29T17:23:31Z"
Entry not found
mharb/Reinforce-cartpole-v1
mharb
"2024-06-29T17:23:55Z"
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
"2024-06-29T17:23:44Z"
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-cartpole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
itay-nakash/model_73a455d87c_sweep_divine-firefly-981
itay-nakash
"2024-06-29T17:26:58Z"
0
0
null
[ "region:us" ]
null
"2024-06-29T17:26:58Z"
Entry not found
Malkovitz/Miroslaw_Utta
Malkovitz
"2024-06-29T17:32:45Z"
0
0
null
[ "license:unknown", "region:us" ]
null
"2024-06-29T17:29:56Z"
--- license: unknown ---
NoNameFactory/llama-3-8b-it-4bit-ContdPT_2_10
NoNameFactory
"2024-06-29T17:34:03Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-29T17:31:49Z"
--- base_model: unsloth/llama-3-8b-instruct-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl --- # Uploaded model - **Developed by:** NoNameFactory - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
kandarp0809/Grid
kandarp0809
"2024-06-29T17:36:10Z"
0
0
null
[ "region:us" ]
null
"2024-06-29T17:34:40Z"
Entry not found
hadestest999/testersssss
hadestest999
"2024-06-29T17:35:40Z"
0
0
null
[ "license:cdla-permissive-2.0", "region:us" ]
null
"2024-06-29T17:34:55Z"
--- license: cdla-permissive-2.0 ---
yuvrajAI/gemma-2b-medical
yuvrajAI
"2024-06-29T17:36:06Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-29T17:35:11Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
impossibleexchange/waitingonweights
impossibleexchange
"2024-07-02T00:12:51Z"
0
0
null
[ "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
"2024-06-29T17:36:13Z"
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
mgh6/Pair_Fold_CNN
mgh6
"2024-07-01T17:24:56Z"
0
0
null
[ "tensorboard", "safetensors", "region:us" ]
null
"2024-06-29T17:38:23Z"
Entry not found
youssouf128/ABAYAZID
youssouf128
"2024-06-29T17:41:47Z"
0
0
null
[ "region:us" ]
null
"2024-06-29T17:41:47Z"
Entry not found
AdamKasumovic/llama3-70b-instruct-ids-winogrande-train-s-af-winogrande-med
AdamKasumovic
"2024-06-29T17:48:33Z"
0
0
transformers
[ "transformers", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-70b-Instruct-bnb-4bit", "base_model:finetune:unsloth/llama-3-70b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-29T17:48:33Z"
--- base_model: unsloth/llama-3-70b-Instruct-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl --- # Uploaded model - **Developed by:** AdamKasumovic - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-70b-Instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
morteza76/seta
morteza76
"2024-06-29T17:48:53Z"
0
0
null
[ "license:other", "region:us" ]
null
"2024-06-29T17:48:53Z"
--- license: other license_name: seta license_link: LICENSE ---
Soorya1998/taxi-v1
Soorya1998
"2024-06-29T17:49:40Z"
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
"2024-06-29T17:49:13Z"
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: taxi-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.46 +/- 2.70 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="Soorya1998/taxi-v1", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
habulaj/295100262864
habulaj
"2024-06-29T17:49:46Z"
0
0
null
[ "region:us" ]
null
"2024-06-29T17:49:35Z"
Entry not found
devilga/dac_16khZ_8kbps
devilga
"2024-06-29T17:50:49Z"
0
0
null
[ "region:us" ]
null
"2024-06-29T17:50:49Z"
Entry not found
cmmann/ppo-lunarlander-v2
cmmann
"2024-06-29T18:09:03Z"
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
"2024-06-29T18:08:46Z"
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -175.86 +/- 159.83 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
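One way to fill in the TODO above with `huggingface_sb3` (a sketch; the checkpoint `filename` is an assumption, so check the actual file name in this repo):

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# load_from_hub returns a local path to the downloaded checkpoint.
checkpoint = load_from_hub(
    repo_id="cmmann/ppo-lunarlander-v2",
    filename="ppo-lunarlander-v2.zip",  # assumption: adjust to the repo's file
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, _ = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
```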
Nikt271/Kurak
Nikt271
"2024-06-29T18:21:25Z"
0
0
null
[ "region:us" ]
null
"2024-06-29T18:17:54Z"
Entry not found
dat-lequoc/A100_trt-llm_engine
dat-lequoc
"2024-06-29T18:20:10Z"
0
0
transformers
[ "transformers", "endpoints_compatible", "region:us" ]
null
"2024-06-29T18:18:05Z"
Entry not found
habulaj/7860557029
habulaj
"2024-06-29T18:20:12Z"
0
0
null
[ "region:us" ]
null
"2024-06-29T18:20:05Z"
Entry not found
random2344/minimal1
random2344
"2024-06-29T18:22:26Z"
0
0
null
[ "region:us" ]
null
"2024-06-29T18:21:40Z"
Entry not found
tdooms/ts-medium-relu
tdooms
"2024-06-29T18:23:16Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-29T18:22:58Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Akirami/phi-3-medium_text2cypher_recommendations
Akirami
"2024-06-29T18:27:07Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-29T18:26:48Z"
--- base_model: unsloth/phi-3-medium-4k-instruct-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - trl --- # Uploaded model - **Developed by:** Akirami - **License:** apache-2.0 - **Finetuned from model :** unsloth/phi-3-medium-4k-instruct-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Frixi/Randy_NotaLoka
Frixi
"2024-06-29T18:35:37Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2024-06-29T18:30:08Z"
--- license: openrail ---
bradpacito/brapacito
bradpacito
"2024-06-29T18:36:31Z"
0
0
null
[ "license:wtfpl", "region:us" ]
null
"2024-06-29T18:36:31Z"
--- license: wtfpl ---
itay-nakash/model_73a455d87c_sweep_dry-wind-982
itay-nakash
"2024-06-29T18:38:51Z"
0
0
null
[ "region:us" ]
null
"2024-06-29T18:38:51Z"
Entry not found
micdoh/2024_JOCN_XLRON
micdoh
"2024-06-29T18:51:12Z"
0
0
null
[ "doi:10.57967/hf/2654", "license:mit", "region:us" ]
null
"2024-06-29T18:48:47Z"
--- license: mit ---
jacobcd52/mats-gpt2-saes
jacobcd52
"2024-07-06T15:28:51Z"
0
0
null
[ "license:mit", "region:us" ]
null
"2024-06-29T18:55:39Z"
--- license: mit ---
RITTIHILATTI/gpt2
RITTIHILATTI
"2024-06-29T18:56:55Z"
0
0
null
[ "region:us" ]
null
"2024-06-29T18:56:55Z"
Entry not found
Callyde/Li
Callyde
"2024-06-29T18:59:50Z"
0
0
null
[ "region:us" ]
null
"2024-06-29T18:59:50Z"
Entry not found
ilya94prok/test
ilya94prok
"2024-06-29T19:01:32Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-06-29T19:01:32Z"
--- license: apache-2.0 ---
habulaj/375846341463
habulaj
"2024-06-29T19:01:57Z"
0
0
null
[ "region:us" ]
null
"2024-06-29T19:01:52Z"
Entry not found
samad321kk/ee
samad321kk
"2024-06-29T19:06:55Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2024-06-29T19:02:20Z"
--- license: openrail ---
Phoenix91/Phoenix
Phoenix91
"2024-06-29T19:02:44Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-06-29T19:02:44Z"
--- license: apache-2.0 ---